FIT5212 - Assignment 1

Marks

Worth 50 marks, and 25% of all marks for the unit

Due Date

Due Week 7 (lecture day) at 23:55

Extension

An extension may be granted under some circumstances. A special consideration application form must be submitted. Please refer to the university webpage on special consideration.

Lateness

For all assessment items handed in after the official due date, and without an agreed extension, a 10% penalty applies to the student's mark for each day after the due date (including weekends), for up to 10 days. Assessment items handed in more than 10 days late without special consideration will not be considered.

Authorship

This is an individual assessment. All work must be your own. All submissions will be put through Turnitin, which makes plagiarism easy to identify.

Submission

Submission consists of three files: one PDF discussion report, one Jupyter notebook, and a PDF print of the notebook. All three files must be submitted via Moodle. All files will go through Turnitin for plagiarism detection.

Programming language

Python in Jupyter

Part 1:  Text Classification

The content has been gathered from the popular academic website arXiv.org for articles tagged as computer science (though some of these are also in mathematics or physics categories). The articles span 2016 to 2024. You are given three CSV files: the train, dev and test sets. The fields in the CSV files are:

. Title: the full title

. Abstract: the full abstract

. InformationTheory: a "1" if it is classified as an Information Theory article, otherwise "0".

. ComputerVision: a "1" if it is classified as a Computer Vision article, otherwise "0".

. ComputationalLinguistics: a "1" if it is classified as a Computational Linguistics article, otherwise "0".
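As an illustration only, the following is a minimal sketch of loading and inspecting the data with pandas, assuming the supplied files are named train.csv, dev.csv and test.csv (substitute the actual filenames provided with the assignment):

```python
import pandas as pd

# Assumed filenames; replace with the names of the CSV files actually supplied.
train_df = pd.read_csv("train.csv")
dev_df = pd.read_csv("dev.csv")
test_df = pd.read_csv("test.csv")

# Inspect the fields described above.
print(train_df.columns.tolist())
# Expected: ['Title', 'Abstract', 'InformationTheory', 'ComputerVision', 'ComputationalLinguistics']
print(train_df["ComputationalLinguistics"].value_counts())
```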

The three classes are ComputationalLinguistics, InformationTheory and ComputerVision. These can occur in any combination, so an article could belong to all three at once, to two, to one, or to none. Your job is to build a text classifier that predicts the class ComputationalLinguistics using the Abstract field, and then repeat the same experiment using only the Titles. You should train different text classifiers using different configurations for this binary prediction task. The variations we would like to consider are:

1.   Task: one binary classification task (ComputationalLinguistics vs. the other two classes)

2.   Input: use the Abstracts alone, and the Titles alone (separate configurations)

3.   Algorithm: use two different algorithms from the tutorials: the RNN, and one of the statistical classifiers (logistic regression, SVM, etc.)

4.   Data size: train on the first 1000 cases in the training set, and then on the full training set

5.   Pre-processing: choose a data pre-processing procedure (e.g., lemmatization, stemming, removing stop words) and stick with it in all your experiments

So this makes 2 (Abstract vs. Title) by 2 (two algorithms) by 2 (two training sizes) = 8 different configurations; one of them is sketched below.
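As a hedged sketch of one of the eight configurations, here is the statistical-classifier branch trained on the Abstract field of the full training set, using TF-IDF features and logistic regression (one possible choice of statistical classifier, not a prescription). The train_df and test_df names are assumed from the loading sketch above; the RNN branch and the 1000-case variants are built analogously:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# One configuration: Abstract text, logistic regression, full training set.
# Pre-processing here is only lowercasing and English stop-word removal;
# swap in whatever procedure (stemming, lemmatization, ...) you commit to.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("logreg", LogisticRegression(max_iter=1000)),
])

X_train = train_df["Abstract"]          # or train_df["Title"], or train_df.head(1000)["Abstract"]
y_train = train_df["ComputationalLinguistics"]
clf.fit(X_train, y_train)

# Predictions on the matching field of the test set.
y_pred = clf.predict(test_df["Abstract"])
y_score = clf.predict_proba(test_df["Abstract"])[:, 1]
```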

For each configuration, test the algorithm on the test set provided (note: when testing a model trained on the Abstracts, use only the Abstracts of the test set; similarly, when testing a model trained on the Titles, use only the Titles of the test set) and report the following results in your notebook:

.    F1, precision, recall, accuracy

.    precision-recall curve

Be creative about how you assemble the different values and plot the curves. The discussion of these results should be in its own two-page discussion section in the PDF report. How well did the two algorithms work under the different data-size conditions, when, and why? How does the model trained on the Titles compare with the one trained on the Abstracts? What insights do the various metrics and plots give you?
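A minimal sketch of computing these results with scikit-learn, assuming y_pred and y_score come from a fitted model as in the earlier sketch:

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, precision_recall_curve)
import matplotlib.pyplot as plt

y_true = test_df["ComputationalLinguistics"]

# Scalar metrics for this configuration.
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))

# Precision-recall curve from the predicted probabilities.
precision, recall, _ = precision_recall_curve(y_true, y_score)
plt.plot(recall, precision, label="Abstract / LogReg / full train")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
plt.show()
```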

Part 2:  Topic Modelling

The data used is the training data from Part 1. Your job is to perform appropriate text pre-processing and preparation and then design two different variations for running LDA, using the gensim.models.LdaModel() function call and pre-processing steps such as those given in the tutorial. Select appropriate pre-processing and parameters to develop model outputs that are informative. Choices you might make in differentiating the two variations are:

.    different pre-processing of text or vocabulary

.    use of bi-grams or not

.    different numbers of topics (e.g., K=10, K=40)

Now run these two on the first 1000 and the first 20,000 articles in the training data set. This means there are 2 by 2 different configurations for the LDA runs. Then make visualisations of some kind in the notebook. These should allow you to analyse and interpret the output of the topic models.
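As an illustration only, here is a minimal sketch of one LDA run with gensim, assuming the train_df DataFrame from Part 1. The tokenisation, vocabulary filtering, number of topics and slice size used here are exactly the kinds of choices your two variations should differ on:

```python
from gensim import corpora
from gensim.models import LdaModel
from gensim.utils import simple_preprocess

# One variation: first 1000 abstracts, simple tokenisation, K = 10 topics.
docs = [simple_preprocess(text) for text in train_df["Abstract"].head(1000)]

dictionary = corpora.Dictionary(docs)
dictionary.filter_extremes(no_below=5, no_above=0.5)   # trim very rare / very common words
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=10,
               passes=10, random_state=0)

# Top words per topic, as a starting point for visualisation and interpretation.
for topic_id, words in lda.print_topics(num_topics=10, num_words=8):
    print(topic_id, words)
```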

The actual discussion (analysis and interpretation) of the results should not appear in the notebook but in the separate PDF discussion report. This is a two-page discussion giving your analysis of the findings presented in the notebook output. What sorts of topics do you see?

Are all the top topic words comprehensible sets of words? Perhaps find some articles that are exemplars and use them to illustrate key topics (but don't insert full articles in your report; there isn't enough room, so just extract a few lines or the title). Your analysis should serve three purposes:

1) to present what sorts of groupings there are about articles, and

2) to describe how the topic modelling presents this and any advantages or shortcomings of topic modelling for the role in 1), and

3) to explain how your two configurations and data set sizes (1000, 20000) compare.

This is a knowledge discovery task rather than a predictive task, so marks will be included for your ability to make novel findings from the topic models.

Submission by the due date

All Python code must be included in a single Jupyter notebook that must be submitted. This should have the clear headings "Part 1: Text Classification" followed by "Part 2: Topic Modelling". It should have the student's name and ID embedded in the first comment (in markdown). The name of the file should be "code_012345678.ipynb" where "012345678" is replaced by your own student ID. An example/skeleton notebook file "code_012345678.ipynb" with appropriate headings is included with the datasets. To complete the submission, use the export option on the notebook system and export to PDF. Save this as "code_012345678.pdf".

The notebook should:

.    be run on either Google Colab or your own Jupyter Notebook

.    have any special or unusual libraries indicated at the top of the file in commented-out command form; they must be installable from the standard Python repository,

e.g., "# !pip3 install gensim"

.    assume the two datasets supplied exist in the current directory

.    have been run successfully to completion prior to submission, so the results are all embedded in the notebook

The PDF file matching the notebook should be a print of the last version of the notebook you submit.

All discussion and analysis must be written up in a single separate PDF file.  This PDF report should have two discussion sections, "Part 1: Text Classification" and "Part 2:  Topic Modelling", each being two pages long. It is expected these will refer to plots and tables in the separate notebook. The name of the file must be "report_012345678.pdf" where "012345678" is replaced by your own student ID. The pages should be A4 size with standard margins and 11 point font.

Therefore, three files are to be submitted, "code_012345678.ipynb", "code_012345678.pdf" and "report_012345678.pdf" where "012345678" is replaced by your own student ID.



