MASTER OF SCIENCE IN MANAGEMENT AND SYSTEMS
Applied Project Capstone
MASY GC- 4100
MEMORANDUM

TO: Spring 2025 AP Capstone Students
DATE: January 3, 2025
RE: UNICC Phase 2 Capstone Projects for Your Participation

We are pleased to inform NYU Capstone students about a set of capstone projects available for their consideration. NYU SPS and The Digital Forge lab have been selected by the UNICC to work on a media analysis tool to detect inappropriate language usage in media communications. This is a continuation of the successful Phase 1 Fall 2024 projects.

The Sponsor:
The United Nations International Computing Centre (UNICC) has over 50 years of experience as the largest strategic partner for digital solutions and cybersecurity within the United Nations system. They are pleased to sponsor NYU MASY students in a project competition.

The Final Phase 2 Product
The product is an AI-driven media analysis tool designed to enhance the capacity of media outlets to report ethically and accurately on topics related to refugees, migrants, and other forcibly displaced populations. This tool will support the detection and prevention of xenophobic language, misinformation, and harmful content in media environments, ultimately fostering more informed and empathetic public discourse. The final tool will add multi-language and multimodal (written, audio and video) capabilities to the original prototype.

The Projects
We have divided the functions of the product into four capstone projects to be completed by a team of four students. Each student completes their own project, and the team delivers the integrated product as its entry into the competition. We expect multiple teams of students to compete for the first prize: acceptance by the UNICC as the best solution.

How do you choose a project and get involved?
Please review the four project definitions and choose one that interests you. Coordinate with three other capstone students to select the remaining projects as part of a team. Once you have your team, present yourselves as a group, with each student and their part of the project clearly identified. Give yourselves a team name. Send applications to Dr. Fortino and copy Siri Kostanyan. We anticipate forming three to four teams, but there is no limit to the number of teams that can be formed. Please note that individual applications for this project will not be considered; you must apply as part of a team.

Additional details
The projects are three-month engagements, and we have provided all the necessary details below, including information about the company and executive supervisor. Your direct client, with whom you will be interfacing, will be the product manager, Mrs. Siri Kostanyan, who works for The Digital Forge. We understand that your time is valuable, but we assure you that this is a worthwhile experience, and the organizations and client lead contact have committed to supporting the project with the seriousness it deserves. Upon completion of the project, you are welcome to include it on your resume and use the results in your portfolio. If desired, we can also provide a reference for job applications. Additionally, successful completion of the project may lead to an invitation to co-author a research paper with the client sponsors.

To apply for consideration for any of these projects, send a cover email to Dr. Fortino (agf249@nyu.edu), AS A TEAM, with the following:

1.Which capstone students will be doing which project for the product.
2.All parts of the project must be covered; in other words, your team must have four members.
3.Include the resume and NYU transcript of each team member.

Dr. Andres Fortino
MASY Clinical Associate Professor

Company and Sponsor Information

Company Names
The Digital Forge
The NYU School of Professional Studies, home of the Management and Systems (MASY) program, is a New York-based learning institution.

The United Nations International Computing Centre (UNICC)
Ms. Anusha Dandapani, Center Director

Company Location
The NYU School of Professional Studies is located at 12 West 43rd Street, New York, NY.

Project Sponsor
The principal project sponsors for all projects will be Dr. Andres Fortino, Clinical Associate Professor, NYU (https://www.linkedin.com/in/afortino), and Mrs. Siri Kostanyan, MSPM, of The Digital Forge, who will serve as the clients.

Company and Sponsor's Location
Dr. Andres Fortino (agf249@nyu.edu) can be reached over virtual conference calls as per project requirements.
Mrs. Siri Kostanyan (sk11416@nyu.edu) is available for consultations and support via email or virtual meetings as needed to ensure project success.

Description of the Business
New York University (NYU) is a private research university based in New York City. The MASY degree is based on a unique curriculum that provides students with experiential learning opportunities to develop strong management and leadership skills and gain a comprehensive knowledge of current information technologies.

The United Nations International Computing Centre (UNICC) has over 50 years of experience as the largest strategic partner for digital solutions and cybersecurity within the United Nations system.

Relationship to the Client
The Client’s relationship with the Project Manager will be that of an independent contractor, and nothing in this sponsorship is intended to, or should be construed to, create a partnership, agency, joint venture, or employment relationship.

Note: use this information to create your project proposal.


Project Elements and Deliverables

1.In consultation with the client, create a set of functional objectives with deliverables and due dates to break down your project.
2.A clearly defined modularization of the project.
3.At least four meetings with the client during the project:
a.Initial meeting to launch project
b.Second meeting no more than two weeks after launch to review objectives
c.Third meeting to review progress no more than two months after launch
d.A final meeting focused on presenting results and handing in deliverables.
e.These meetings are to be arranged by the project manager (that’s you!).
4.The final report for each project must conform to the template provided by the client.
5.All final project files and a README user document must be deposited in a public GitHub repository.
6.A team deliverable of the integrated product, ready to present to the UNICC.
7.Presentation of your product to NYU and UNICC by your team during a day of competition at the end of the semester.

Additional Requirements

1.All steps in the project must be well documented as the project progresses.
2.Weekly written and emailed summary progress reports must be provided. They must include:
a.what was accomplished in the past week,
b.what you will be working on in the coming week, and
c.any problems you are encountering that need resolution or input from the client.


AI-Driven Media Analysis Tool (Phase 2)
Objective
The product is an AI-driven media analysis tool designed to enhance the capacity of media outlets to report ethically and accurately on topics related to refugees, migrants, and other forcibly displaced populations. This tool will support the detection and prevention of xenophobic language, misinformation, and harmful content in media environments, ultimately fostering more informed and empathetic public discourse. The final delivered product will incorporate multi-language capabilities, analyzing content in all six official UN languages, as well as multimodal functionalities to process text, audio, and video.
Scope of the Product

The AI-driven media analysis tool will consist of four integrated functions, each serving a specific purpose to ensure comprehensive analysis and support for journalists and media professionals. Teams will review previous prototypes (developed by Fall 2024 teams) and either extend or emulate one as their foundation.
The Tool’s Original Functions (Fall 2024, Phase 1)
The tool builds upon the foundational functionalities developed in Fall 2024, which include the following key features:
1.Identification of Xenophobic Language and Mis/Disinformation
○Functionality: Detect and flag xenophobic language, racist attitudes, incorrect data, and stereotypes related to human mobility. Analyze media content in real-time to identify harmful narratives that perpetuate discrimination or hostility.
○Outcome: Assist media professionals in identifying and avoiding harmful language, fostering a respectful and accurate portrayal of refugees and migrants.
2.Fact and Language Checking
○Functionality: Verify the accuracy of language and data, focusing on terminology related to migrants and displaced populations. Cross-reference media content with a verified database of facts and terminology to prevent misinformation.
○Outcome: Provide journalists with a reliable resource for fact-checking and language verification, supporting ethical journalism practices.
3.Topic-Based Analysis on Harmful Content
○Functionality: Perform topic-based analysis of media content to identify and categorize harmful narratives related to migration and displacement. Highlight topics contributing to negative perceptions or misinformation about displaced communities.
○Outcome: Offer insights into harmful content, enabling media professionals to take corrective actions and promote balanced reporting.
4.Integration and Testing
○Functionality: Combine all components developed into a cohesive system and ensure functionality through rigorous testing.
○Outcome: Deliver a functional prototype that integrates xenophobic language detection, fact-checking, and topic-based analysis into a single reliable tool.
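As an illustration of the topic-based analysis described in function 3 above, a team might start from a classical topic model. The sketch below uses scikit-learn's LDA implementation; the three sample articles, the topic count, and the label step are placeholders chosen for illustration only and are not part of the Fall 2024 prototypes.

# Minimal topic-modeling sketch for function 3 above (illustrative only;
# the sample corpus and topic count are placeholders, not the Phase 1 design).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = [
    "Migrants flooding the border strain public services, officials claim.",
    "Refugee families rebuild their lives with help from local volunteers.",
    "New data shows displaced workers contribute to the regional economy.",
]

# Convert articles to a bag-of-words matrix, dropping common stop words.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(articles)

# Fit a small LDA model; real corpora would need far more documents and topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the top words per topic so an analyst can label harmful vs. neutral themes.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top)}")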
Expanded Capabilities for Spring 2025 (Phase 2)
Building on these foundations, the Spring 2025 iteration introduces four new and enhanced functionalities to expand the tool’s capabilities:
1.Multi-language Capability
○Functionality: Analyze content in any of the six official UN languages (Arabic, Chinese, English, French, Russian, and Spanish), expanding the tool's inclusivity and global relevance.
○Outcome: Empower media professionals to work with diverse linguistic content, promoting ethical journalism across cultures and languages.
2.Audio Analysis
○Functionality: Process journalistic pieces in audio format, such as podcasts and radio programming, by transcribing and analyzing spoken content.
○Outcome: Enable media professionals to assess audio media with precision, detecting xenophobic language and misinformation in spoken-word formats.
3.Video Analysis
○Functionality: Analyze journalistic pieces in video format, including videocasts and television news, by processing both visual and auditory elements.
○Outcome: Equip media professionals to evaluate video content comprehensively, ensuring balanced and accurate reporting across multimedia platforms.
4.Integration, Design, and Testing
○Functionality: Integrate the multilingual, audio, and video analysis features into a unified, user-friendly system. Design an intuitive interface and conduct thorough testing to ensure the tool meets performance standards.
○Outcome: Deliver a fully operational and reliable tool that combines all new capabilities, providing media professionals with a seamless platform for ethical and accurate reporting.

Implementation and Deployment as NYU Capstone Projects
Development and Implementation Plan
The development of the AI-driven media analysis tool will be managed pro bono by Siranush 'Siri' Kostanyan, who will serve as the Product Manager. The tool will be developed by teams of four Capstone students from New York University, under the leadership and guidance of Dr. Andres Fortino and in collaboration with UN representatives. The student groups will compete to present the most effective solution, with the winning capstone project selected by UN representatives.
The project is structured to be completed over a three-month period, divided into the following phases with key deliverables:
1.Research and Definition
○Activities: Conduct initial research, define the project scope, and set up the development environment.
○Deliverable: Functional Requirements Specification (FRS).
2.Prototype Development
○Activities:
■Data Collection and Preprocessing: Gather and prepare data for the AI models.
■Model Development: Design, train, and validate machine learning models.
■User Interface Development: Design and develop a user interface that integrates with the AI models.
○Deliverable: Product Prototype.
3.Proof of Concept
○Activities: Integrate all components and conduct comprehensive testing to ensure the system functions as expected.
○Deliverable: Proof of concept through rigorous testing.
4.Final Documentation and Deployment
○Activities: Document the entire process, prepare user guides, and deploy the final product.
○Deliverable: Complete documentation and successful deployment of the tool.
Each phase will include specific deliverables, such as the development of multimodal and multilingual analysis features, the creation of a user-friendly interface, and comprehensive testing to ensure the tool’s reliability and effectiveness. Mrs. Siranush 'Siri' Kostanyan will oversee the entire process to ensure that the product meets its objectives and is delivered on time.
Expected Impact
The AI-driven media analysis tool is expected to empower media organizations and content creators to report more accurately and sensitively on issues concerning refugees and other forcibly displaced people. By leveraging advanced AI technology, the tool facilitates fact-based reporting and fosters mutual understanding between displaced and host populations. The tool aims to bridge the gap between communities, ensuring that media narratives are informed, inclusive, and conducive to building empathy and understanding across diverse audiences.
Breakdown of Functions to Capstone Projects for Spring 2025
The development of the AI-driven media analysis tool is an ongoing, multifaceted initiative designed to address xenophobic language and misinformation in media reporting. This initiative builds upon the foundational prototypes developed by Capstone teams in Fall 2024. These prototypes focused primarily on analyzing written content in English, providing a robust starting point for this semester's enhanced functionality.
For Spring 2025, we are expanding the scope of the tool to include multilingual capabilities, audio and video analysis, and comprehensive system integration. These enhancements are divided into four distinct Capstone projects, ensuring that each critical feature is fully developed, tested, and refined. This structure allows students to build on the achievements of Fall 2024 while addressing new challenges and advancing the tool’s capabilities.
The Capstone projects for Spring 2025 are as follows:
1.Multilingual Analysis
○Objective: Enable the tool to analyze content in all six official UN languages (Arabic, Chinese, English, French, Russian, and Spanish).
○Focus: Extend linguistic capabilities by implementing advanced natural language processing (NLP) techniques for multilingual input.
○Outcome: Equip media professionals with the ability to process diverse linguistic content, fostering inclusivity and broader usability.
2.Audio Analysis
○Objective: Develop the ability to process and analyze audio files, including radio programming, podcasts, and other spoken-word content.
○Focus: Train machine learning models to evaluate audio inputs for detecting xenophobic language and misinformation.
○Outcome: Allow media professionals to assess spoken content with the same precision as text analysis.
3.Video Analysis
○Objective: Extend the tool’s functionality to process and analyze video content, such as news broadcasts, videocasts, and social media posts.
○Focus: Integrate multimodal analysis to evaluate both visual and auditory components in video media.
○Outcome: Provide comprehensive insights into video-based narratives, ensuring balanced and ethical reporting across all media formats.
4.Integration, Design, and Testing
○Objective: Integrate all developed components into a seamless, unified system with a user-friendly interface.
○Focus: Perform system integration, intuitive user interface (UI) design, and thorough testing to ensure reliability and functionality.
○Outcome: Deliver a fully operational AI-driven tool that combines multilingual, audio, and video analysis for ethical media reporting.
Strategic Implementation:
●Teams will consist of four students: three members will focus on specific deliverables (multilingual, audio, and video analysis), while the fourth will manage integration and testing.
●Teams will utilize insights and reports from the Fall 2024 prototypes as a foundation, ensuring continuity and improvement in the tool’s development.
●The competition format remains the same, with multiple teams working on the same framework to produce the most effective and impactful solution.
This semester’s enhancements aim to deliver a sophisticated, multimodal, and multilingual AI-driven tool, addressing modern media’s ethical challenges and providing actionable solutions for journalists and media professionals.

Capstone Project 1: Multilingual Analysis (Phase 2 Spring 2025)
Project Overview
This project focuses on expanding the AI-driven media analysis tool's capabilities to support multilingual analysis. The tool will process and analyze content in all six official UN languages: Arabic, Chinese, English, French, Russian, and Spanish. By leveraging advanced natural language processing (NLP) techniques, the project aims to ensure accurate detection of xenophobic language, misinformation, and harmful narratives across diverse linguistic contexts.
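One plausible starting point for this kind of multilingual detection, sketched below, is a zero-shot classifier built on a multilingual transformer from the Hugging Face transformers library; the specific checkpoint, candidate labels, and sample sentences are illustrative assumptions, not project requirements.

# Sketch of multilingual flagging with a zero-shot classifier.
# The model checkpoint and labels are example choices, not project requirements.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # multilingual NLI checkpoint
)

labels = ["xenophobic language", "misinformation", "neutral reporting"]

samples = [
    "Los refugiados enriquecen la vida cultural de nuestra ciudad.",      # Spanish
    "Les migrants sont responsables de tous nos problèmes économiques.",  # French
]

for text in samples:
    result = classifier(text, candidate_labels=labels)
    # The label with the highest score is the model's best guess for this text.
    print(result["labels"][0], round(result["scores"][0], 3), "-", text)

A zero-shot approach avoids collecting per-language training data, at the cost of coarser accuracy; the fine-tuning route is what the roadmap below outlines in Phase 3.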
Project Goals and Objectives
Goal: Develop a multilingual analysis feature that processes and analyzes media content in six languages, ensuring inclusivity and cultural sensitivity.
Objectives:
●Fine-tune NLP models to analyze media content in the six official UN languages.
●Develop a robust data pipeline to collect and preprocess multilingual datasets.
●Create a scalable architecture to accommodate language-specific nuances and complexities.
●Test and validate the multilingual functionality to ensure accuracy and reliability.
Project Roadmap
Phase 1: Initial Research and Setup (Weeks 1-3)
●Deliverable 1: Conduct a literature review on multilingual NLP models and techniques.
●Deliverable 2: Define the scope of multilingual analysis, including key criteria for language-specific challenges.
●Deliverable 3: Set up the development environment and tools for multilingual dataset collection and preprocessing.
Phase 2: Data Collection and Preprocessing (Weeks 4-6)
●Deliverable 4: Build a data pipeline to collect diverse datasets in the six UN languages.
●Deliverable 5: Preprocess the data to account for linguistic variations, such as syntax, grammar, and idiomatic expressions.
●Deliverable 6: Create balanced, labeled datasets tailored for model training and fine-tuning.
Phase 3: Model Development and Training (Weeks 7-9)
●Deliverable 7: Fine-tune NLP models for each language, focusing on detecting xenophobic language and misinformation.
●Deliverable 8: Optimize the models for precision, recall, and overall accuracy.
●Deliverable 9: Validate the performance of each model using language-specific test datasets.
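As a rough illustration of Deliverable 7 above, the sketch below fine-tunes a multilingual encoder with the Hugging Face Trainer; the xlm-roberta-base checkpoint, the two-example dataset, the label scheme, and the hyperparameters are all stand-ins chosen only to show the shape of the workflow.

# Rough fine-tuning sketch for Deliverable 7 (illustrative; the tiny dataset
# and hyperparameters are placeholders, not the project's training setup).
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "xlm-roberta-base"  # multilingual encoder covering the six UN languages
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Tiny illustrative dataset; real training needs balanced, labeled corpora per language.
raw = Dataset.from_dict({
    "text": [
        "Refugees deserve dignity and accurate reporting.",
        "Los migrantes son una amenaza para nuestro país.",
    ],
    "label": [0, 1],  # 0 = neutral, 1 = harmful (illustrative label scheme)
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

dataset = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="multilingual-detector",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    logging_steps=1,
)

trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()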
Phase 4: Integration and Testing (Weeks 10-12)
●Deliverable 10: Integrate the multilingual functionality into the AI-driven tool’s existing architecture.
●Deliverable 11: Conduct end-to-end testing of the tool with multilingual datasets to ensure seamless operation.
●Deliverable 12: Refine the multilingual analysis feature based on feedback and test results.
Final Phase: Presentation and Documentation (Week 13)
●Deliverable 13: Prepare a final report detailing the development process, challenges, and outcomes.
●Deliverable 14: Present the multilingual analysis tool to stakeholders, showcasing its capabilities and real-world applications.
●Deliverable 15: Submit all code, documentation, and the final report to a public GitHub repository for future reference and potential further development.
Expected Outcomes
By the end of this Capstone project, the team will deliver a multilingual analysis feature capable of processing media content in six languages. The feature will be tested, validated, and ready for integration into the broader media analysis tool, enabling ethical, accurate, and inclusive media reporting across diverse linguistic contexts.

Capstone Project 2: Audio Analysis (Phase 2 Spring 2025)
Project Overview
This project focuses on developing the AI-driven media analysis tool’s capability to process and analyze audio content, including journalistic pieces such as podcasts, radio programming, and other spoken-word formats. By leveraging state-of-the-art audio processing and natural language processing (NLP) techniques, the project aims to detect xenophobic language, misinformation, and harmful narratives embedded in audio media.
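A minimal sketch of that transcribe-then-analyze flow is shown below, pairing the open-source Whisper speech-to-text package with the same kind of text classifier used for written content; the audio file name, model size, and classifier checkpoint are illustrative assumptions.

# Transcribe-then-analyze sketch (illustrative; "episode.mp3" and the model
# size are placeholders, and the text classifier mirrors the written-content approach).
import whisper
from transformers import pipeline

# 1. Speech-to-text: transcribe the audio file; Whisper also reports the language.
stt_model = whisper.load_model("base")
transcription = stt_model.transcribe("episode.mp3")
transcript, language = transcription["text"], transcription["language"]

# 2. Text analysis: run the transcript through a zero-shot classifier.
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")
labels = ["xenophobic language", "misinformation", "neutral reporting"]
result = classifier(transcript, candidate_labels=labels)

print(f"Detected language: {language}")
print(f"Top label: {result['labels'][0]} (score {result['scores'][0]:.2f})")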
Project Goals and Objectives
Goal: Develop an audio analysis feature that processes journalistic content in spoken formats to identify and address harmful narratives.
Objectives:
●Implement audio processing pipelines to transcribe and analyze spoken content.
●Fine-tune NLP models for detecting xenophobic language and misinformation in transcribed audio.
●Ensure the system accounts for variations in accents, dialects, and languages across diverse audio sources.
●Validate the tool’s performance with real-world audio datasets.
Project Roadmap
Phase 1: Initial Research and Setup (Weeks 1-3)
●Deliverable 1: Conduct a literature review on audio processing and speech-to-text technologies.
●Deliverable 2: Define the scope of audio analysis, including key challenges such as background noise and speaker variability.
●Deliverable 3: Set up the development environment and tools for processing audio files.
Phase 2: Audio Data Collection and Preprocessing (Weeks 4-6)
●Deliverable 4: Build a data pipeline to collect diverse audio datasets, including podcasts and radio content.
●Deliverable 5: Preprocess audio files by cleaning and normalizing sound quality for consistent transcription accuracy.
●Deliverable 6: Use speech-to-text models to create transcriptions, ensuring high accuracy for downstream analysis.
Phase 3: Model Development and Training (Weeks 7-9)
●Deliverable 7: Train and fine-tune NLP models to analyze transcribed audio for harmful language and misinformation.
●Deliverable 8: Optimize the models to handle speaker variations, accents, and context-specific language.
●Deliverable 9: Validate the models using real-world audio datasets and assess their accuracy and performance metrics.
Phase 4: Integration and Testing (Weeks 10-12)
●Deliverable 10: Integrate the audio analysis feature into the AI-driven media analysis tool’s architecture.
●Deliverable 11: Conduct end-to-end testing with audio content to ensure seamless functionality.
●Deliverable 12: Refine the audio analysis tool based on user feedback and test results.
Final Phase: Presentation and Documentation (Week 13)
●Deliverable 13: Prepare a final report documenting the development process, challenges, and outcomes.
●Deliverable 14: Present the audio analysis tool to stakeholders, demonstrating its capabilities and potential applications.
●Deliverable 15: Submit all code, documentation, and the final report to a public GitHub repository for future reference and potential further development.
Expected Outcomes
By the end of this Capstone project, the team will deliver an audio analysis feature capable of processing and analyzing journalistic audio content. The feature will be tested, validated, and ready for integration into the broader media analysis tool, empowering media professionals to assess spoken content with precision and ethical rigor.

Capstone Project 3: Video Analysis (Phase 2 Spring 2025)
Project Overview
This project focuses on expanding the AI-driven media analysis tool to process and analyze video content, including journalistic pieces such as videocasts, television news, and other video-based formats. By incorporating advanced computer vision and natural language processing (NLP) techniques, the project aims to detect xenophobic language, misinformation, and harmful narratives in both the visual and auditory components of video media.
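The sketch below illustrates one way to split a video into those two streams: extracting the audio track with ffmpeg for later transcription, and sampling frames with OpenCV so on-screen text can be read with OCR. The input file name, the five-second sampling interval, and the use of pytesseract are illustrative assumptions, not project requirements.

# Video preprocessing sketch (illustrative; "broadcast.mp4", the 5-second
# sampling interval, and pytesseract for OCR are assumptions, not requirements).
import subprocess
import cv2
import pytesseract

VIDEO = "broadcast.mp4"

# 1. Extract the audio track with ffmpeg so it can be transcribed separately.
subprocess.run(
    ["ffmpeg", "-y", "-i", VIDEO, "-vn", "-acodec", "pcm_s16le", "audio.wav"],
    check=True,
)

# 2. Sample one frame every 5 seconds and OCR any on-screen text (captions, chyrons).
cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
step = int(fps * 5)
frame_idx, on_screen_text = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray).strip()
        if text:
            on_screen_text.append((frame_idx / fps, text))
    frame_idx += 1
cap.release()

# The extracted audio.wav goes to the audio pipeline; on_screen_text goes to
# the same text analysis used for written content.
print(on_screen_text[:3])

In this arrangement the video project reuses the audio and text pipelines rather than duplicating them, which keeps the eventual integration work smaller.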
Project Goals and Objectives
Goal: Develop a video analysis feature that processes journalistic content in video formats to identify harmful language, misinformation, and other unethical narratives.
Objectives:
●Implement computer vision models to analyze visual content, such as text overlays and imagery.
●Utilize speech-to-text technology to transcribe audio components of videos for further analysis.
●Fine-tune NLP models to evaluate transcribed audio and subtitles for harmful narratives.
●Validate the system’s performance across diverse video sources and contexts.
Project Roadmap
Phase 1: Initial Research and Setup (Weeks 1-3)
●Deliverable 1: Conduct a literature review on computer vision techniques and video processing technologies.
●Deliverable 2: Define the scope of video analysis, including challenges such as varying resolutions, languages, and media formats.
●Deliverable 3: Set up the development environment and tools for processing video files.
Phase 2: Video Data Collection and Preprocessing (Weeks 4-6)
●Deliverable 4: Build a data pipeline to collect a diverse set of video content, including television news and videocasts.
●Deliverable 5: Preprocess video files to ensure compatibility with analysis tools, including audio extraction and frame sampling.
●Deliverable 6: Use speech-to-text models to transcribe audio components and extract subtitles for downstream analysis.
Phase 3: Model Development and Training (Weeks 7-9)
●Deliverable 7: Develop and fine-tune computer vision models to analyze visual elements, including on-screen text and imagery.
●Deliverable 8: Train NLP models to evaluate transcribed audio and subtitle content for detecting harmful narratives.
●Deliverable 9: Validate the integrated video analysis models using diverse datasets to ensure accuracy and reliability.
Phase 4: Integration and Testing (Weeks 10-12)
●Deliverable 10: Integrate video analysis capabilities into the AI-driven media analysis tool’s existing architecture.
●Deliverable 11: Conduct end-to-end testing with real-world video content to assess functionality and performance.
●Deliverable 12: Refine the video analysis tool based on user feedback and test results.
Final Phase: Presentation and Documentation (Week 13)
●Deliverable 13: Prepare a final report documenting the development process, challenges, and outcomes.
●Deliverable 14: Present the video analysis tool to stakeholders, demonstrating its capabilities and real-world applications.
●Deliverable 15: Submit all code, documentation, and the final report to a public GitHub repository for future reference and potential further development.
Expected Outcomes
By the end of this Capstone project, the team will deliver a video analysis feature capable of processing and analyzing journalistic video content. The feature will be tested, validated, and ready for integration into the broader media analysis tool, enabling media professionals to assess video narratives with ethical and analytical precision.

Capstone Project 4: Integration, Design, and Testing (Phase 2 Spring 2025)
Project Overview
This project focuses on integrating the distinct components of the AI-driven media analysis tool—multilingual, audio, and video analysis—into a unified, user-friendly system. The team will design a seamless user interface (UI) that enables media professionals to access and utilize all functionalities efficiently. Rigorous testing will ensure that the tool meets performance standards, is reliable, and delivers accurate results across diverse use cases.
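As a rough picture of how the pieces might sit behind one backend, the sketch below exposes a single FastAPI upload endpoint that routes a file to a text, audio, or video pipeline by content type; the /analyze route and the three analyze_* functions are hypothetical placeholders for the components delivered by Projects 1-3.

# Unified-backend sketch (illustrative; the /analyze route and the three
# analyze_* functions are placeholders for the components built in Projects 1-3).
from fastapi import FastAPI, UploadFile

app = FastAPI(title="Media Analysis Tool (sketch)")

def analyze_text(text: str) -> dict:
    return {"flags": [], "summary": "placeholder text analysis"}

def analyze_audio(data: bytes) -> dict:
    return {"flags": [], "summary": "placeholder audio analysis"}

def analyze_video(data: bytes) -> dict:
    return {"flags": [], "summary": "placeholder video analysis"}

@app.post("/analyze")
async def analyze(file: UploadFile) -> dict:
    """Accept a media upload and route it to the matching analysis pipeline."""
    data = await file.read()
    content_type = file.content_type or ""
    if content_type.startswith("audio/"):
        report = analyze_audio(data)
    elif content_type.startswith("video/"):
        report = analyze_video(data)
    else:
        report = analyze_text(data.decode("utf-8", errors="ignore"))
    return {"filename": file.filename, "report": report}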
Project Goals and Objectives
Goal: Create a fully integrated and tested media analysis tool that combines multilingual, audio, and video capabilities within a cohesive platform.
Objectives:
●Integrate multilingual, audio, and video analysis components into a single system.
●Design an intuitive and accessible UI that facilitates easy navigation and functionality for media professionals.
●Conduct comprehensive testing, including functional, performance, and user acceptance testing (UAT).
●Optimize the tool based on user feedback and test results to ensure reliability and usability.
Project Roadmap
Phase 1: System Integration (Weeks 1-4)
●Deliverable 1: Develop a system architecture plan to integrate multilingual, audio, and video analysis features.
●Deliverable 2: Implement APIs and backend services to unify the functionalities into a single system.
●Deliverable 3: Ensure compatibility and interoperability between all components.
Phase 2: UI/UX Design and Development (Weeks 5-8)
●Deliverable 4: Design an intuitive UI that incorporates all features, ensuring accessibility and ease of use.
●Deliverable 5: Develop the frontend interface and integrate it with the backend architecture.
●Deliverable 6: Test the UI for usability and accessibility, gathering feedback for iterative improvements.
Phase 3: Comprehensive Testing (Weeks 9-11)
●Deliverable 7: Conduct functional testing to verify the accuracy and reliability of each integrated component.
●Deliverable 8: Perform performance testing to ensure the tool operates efficiently under various workloads.
●Deliverable 9: Complete user acceptance testing (UAT) with media professionals, collecting feedback for refinement.
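Deliverable 7's functional testing could begin with something like the sketch below, which exercises an upload endpoint through FastAPI's TestClient under pytest; the /analyze route, the app.main module path, and the expected response fields follow the hypothetical backend sketched under the Project Overview above and would need to match the team's actual system.

# Functional-test sketch for Deliverable 7 (assumes the hypothetical /analyze
# endpoint from the earlier backend sketch; adapt names to the real system).
from fastapi.testclient import TestClient

from app.main import app  # assumed module layout for the integrated backend

client = TestClient(app)

def test_text_upload_returns_report():
    files = {"file": ("article.txt", b"Sample article text.", "text/plain")}
    response = client.post("/analyze", files=files)
    assert response.status_code == 200
    body = response.json()
    assert body["filename"] == "article.txt"
    assert "report" in body

def test_unknown_route_is_rejected():
    # A missing route should return 404, confirming the API surface is as designed.
    assert client.get("/does-not-exist").status_code == 404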
Phase 4: Refinement and Deployment (Weeks 12-13)
●Deliverable 10: Refine the tool based on testing results and user feedback, ensuring reliability and usability.
●Deliverable 11: Prepare the tool for deployment, including final checks and optimizations.
●Deliverable 12: Provide training materials and user guides to facilitate adoption by media professionals.
Final Phase: Presentation and Documentation (Week 13)
●Deliverable 13: Prepare a comprehensive final report documenting the integration process, challenges, and outcomes.
●Deliverable 14: Present the fully integrated tool to stakeholders, showcasing its capabilities and applications.
●Deliverable 15: Submit all code, documentation, and the final report to a public GitHub repository for future reference and potential further development.
Expected Outcomes
By the end of this Capstone project, the team will deliver a fully operational, integrated media analysis tool. The system will combine multilingual, audio, and video analysis capabilities within a cohesive platform, providing media professionals with a powerful resource for ethical and accurate reporting. The tool will be thoroughly tested, optimized, and ready for deployment in real-world media environments.
