COMP9444 Neural Networks and Deep Learning
Term 2, 2020
Project 1 - Japanese Characters and Intertwined Spirals
Due: Sunday 12 July, 23:59
Marks: 30% of final assessment
In this assignment, you will be implementing and training various neural network models for two different classification tasks, and
analysing the results.
You are to submit two Python files kuzu.py and spiral.py, as well as a written report hw1.pdf (in pdf format).
Provided Files
Copy the archive hw1.zip into your own filespace and unzip it. This should create a directory hw1 with the data file spirals.csv as well
as four Python files kuzu.py, spiral.py, kuzu_main.py and spiral_main.py.
Your task is to complete the skeleton files kuzu.py, spiral.py and submit them, along with your report.
Part 1: Japanese Character Recognition
For Part 1 of the assignment you will be implementing networks to recognize handwritten Hiragana symbols. The dataset to be
used is Kuzushiji-MNIST or KMNIST for short. The paper describing the dataset is available here. It is worth reading, but in short:
significant changes occurred to the language when Japan reformed their education system in 1868, and the majority of Japanese
today cannot read texts published over 150 years ago. This paper presents a dataset of handwritten, labeled examples of this old-style
script (Kuzushiji). Along with this dataset, however, they also provide a much simpler one, containing 10 Hiragana characters
with 7000 samples per class. This is the dataset we will be using.
[Figure: text from 1772 (left) compared to 1900 (right), showing the standardization of written Japanese.]
1. [1 mark] Implement a model NetLin which computes a linear function of the pixels in the image, followed by log softmax. Run
the code by typing:
python3 kuzu_main.py --net lin
Copy the final accuracy and confusion matrix into your report. Note that the rows of the confusion matrix indicate the target
character, while the columns indicate the one chosen by the network. (0="o", 1="ki", 2="su", 3="tsu", 4="na", 5="ha",
6="ma", 7="ya", 8="re", 9="wo"). More examples of each character can be found here.
2. [2 marks] Implement a fully connected 2-layer network NetFull, using tanh at the hidden nodes and log softmax at the
output node. Run the code by typing:
python3 kuzu_main.py --net full
Try different values (multiples of 10) for the number of hidden nodes and try to determine a value that achieves high
accuracy on the test set. Copy the final accuracy and confusion matrix into your report.
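One plausible shape for NetFull is sketched below; the default hidden size of 150 is only an illustrative guess (the task asks you to experiment with multiples of 10), and whether the constructor takes the hidden size as an argument depends on the provided skeleton.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NetFull(nn.Module):
    def __init__(self, hid=150):   # hid=150 is an illustrative guess
        super(NetFull, self).__init__()
        self.fc1 = nn.Linear(28 * 28, hid)
        self.fc2 = nn.Linear(hid, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        hid = torch.tanh(self.fc1(x))   # tanh at the hidden nodes
        return F.log_softmax(self.fc2(hid), dim=1)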
3. [2 marks] Implement a convolutional network called NetConv, with two convolutional layers plus one fully connected layer, all
using relu activation function, followed by the output layer. You are free to choose for yourself the number and size of the
filters, metaparameter values, and whether to use max pooling or a fully convolutional architecture. Run the code by typing:
python3 kuzu_main.py --net conv
Your network should consistently achieve at least 93% accuracy on the test set after 10 training epochs. Copy the final
accuracy and confusion matrix into your report.
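Since the filter sizes and counts are left open, the sketch below shows just one plausible NetConv (two 5x5 convolutions with max pooling, then a fully connected layer); every number in it is an illustrative choice, not a requirement.

import torch.nn as nn
import torch.nn.functional as F

class NetConv(nn.Module):
    def __init__(self):
        super(NetConv, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, padding=2)
        self.fc1 = nn.Linear(32 * 7 * 7, 128)   # 7x7 after two 2x2 poolings
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # [N,16,14,14]
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # [N,32,7,7]
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)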
4. [7 marks] Discuss what you have learned from this exercise, including the following points:
a. the relative accuracy of the three models,
b. the confusion matrix for each model: which characters are most likely to be mistaken for which other characters, and
why?
c. you may wish to experiment with other architectures and/or metaparameters for this dataset, and report on your
results; the aim of this exercise is not only to achieve high accuracy but also to understand the effect of different
choices on the final accuracy.
Part 2: Twin Spirals Task
For Part 2 you will be training on the famous Two Spirals Problem (Lang and Witbrock, 1988). The supplied code spiral_main.py
loads the training data from spirals.csv, applies the specified model and produces a graph of the resulting function, along with
the data. For this task there is no test set as such, but we instead judge the generalization by plotting the function computed by
the network and making a visual assessment.
1. [2 marks] Provide code for a PyTorch Module called PolarNet which operates as follows: First, the input (x,y) is converted to
polar coordinates (r,a) with r=sqrt(x*x + y*y), a=atan2(y,x). Next, (r,a) is fed into a fully connected neural network with one
hidden layer using tanh activation, followed by a single output using sigmoid activation. The conversion to polar coordinates
should be included in your forward() method, so that the Module performs the entire task of conversion followed by network
layers.
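A minimal sketch of PolarNet under these requirements is shown below, assuming spiral_main.py constructs the model as PolarNet(num_hid); storing the hidden activation in self.hid1 anticipates the graph_hidden() task in step 7.

import torch
import torch.nn as nn

class PolarNet(nn.Module):
    def __init__(self, num_hid):
        super(PolarNet, self).__init__()
        self.in_to_hid = nn.Linear(2, num_hid)    # operates on (r,a)
        self.hid_to_out = nn.Linear(num_hid, 1)

    def forward(self, input):
        # Convert cartesian (x,y) to polar (r,a) inside forward(), as required.
        x, y = input[:, 0], input[:, 1]
        r = torch.sqrt(x * x + y * y)
        a = torch.atan2(y, x)
        polar = torch.stack((r, a), dim=1)
        self.hid1 = torch.tanh(self.in_to_hid(polar))   # retained for plotting
        return torch.sigmoid(self.hid_to_out(self.hid1))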
2. [1 mark] Run the code by typing
python3 spiral_main.py --net polar --hid 10
Try to find the minimum number of hidden nodes required so that this PolarNet learns to correctly classify all of the training
data within 20000 epochs, on almost all runs. The graph_output() method will generate a picture of the function computed by
your PolarNet called polar_out.png, which you should include in your report.
3. [1 mark] Provide code for a PyTorch Module called RawNet which operates on the raw input (x,y) without converting to polar
coordinates. Your network should consist of two fully connected hidden layers with tanh activation, plus an output layer.
You should not use Sequential but should instead build the network from individual components as shown in the program
xor.py from Exercises 5 (repeated in slide 4 of lecture slides 3b on PyTorch). The number of neurons in both hidden layers
should be determined by the parameter num_hid.
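In the style of xor.py (individual components, no Sequential), RawNet might look like the sketch below, again assuming a RawNet(num_hid) constructor and retaining the hidden activations for step 7.

import torch
import torch.nn as nn

class RawNet(nn.Module):
    def __init__(self, num_hid):
        super(RawNet, self).__init__()
        self.in_to_hid1 = nn.Linear(2, num_hid)
        self.hid1_to_hid2 = nn.Linear(num_hid, num_hid)
        self.hid2_to_out = nn.Linear(num_hid, 1)

    def forward(self, input):
        self.hid1 = torch.tanh(self.in_to_hid1(input))
        self.hid2 = torch.tanh(self.hid1_to_hid2(self.hid1))
        return torch.sigmoid(self.hid2_to_out(self.hid2))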
4. [1 mark] Run the code by typing
python3 spiral_main.py --net raw
Keeping the number of hidden nodes in each layer fixed at 10, try to find a value for the size of the initial weights (--init)
such that this RawNet learns to correctly classify all of the training data within 20000 epochs, on almost all runs. Include in
your report the initial weight size you chose, as well as the values of any other metaparameters. The graph_output() method will
generate a picture of the function computed by your RawNet called raw_out.png, which you should include in your report.
5. [1 mark] Provide code for a PyTorch Module called ShortNet which again operates on the raw input (x,y) without converting
to polar coordinates. This network should again consist of two hidden layers plus an output layer, but this time should
include short-cut connections between every pair of layers (input, hid1, hid2 and output) as depicted on slide 10 of lecture
slides 3a on Hidden Unit Dynamics. The number of neurons in both hidden layers should be determined by the parameter
num_hid.
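With shortcut connections between every pair of layers, hid2 receives input from both the raw input and hid1, and the output receives input from all three earlier layers. A sketch under the same assumptions as RawNet:

import torch
import torch.nn as nn

class ShortNet(nn.Module):
    def __init__(self, num_hid):
        super(ShortNet, self).__init__()
        self.in_to_hid1 = nn.Linear(2, num_hid)
        self.in_to_hid2 = nn.Linear(2, num_hid)         # shortcut: input -> hid2
        self.in_to_out = nn.Linear(2, 1)                # shortcut: input -> output
        self.hid1_to_hid2 = nn.Linear(num_hid, num_hid)
        self.hid1_to_out = nn.Linear(num_hid, 1)        # shortcut: hid1 -> output
        self.hid2_to_out = nn.Linear(num_hid, 1)

    def forward(self, input):
        self.hid1 = torch.tanh(self.in_to_hid1(input))
        self.hid2 = torch.tanh(self.in_to_hid2(input)
                               + self.hid1_to_hid2(self.hid1))
        return torch.sigmoid(self.in_to_out(input)
                             + self.hid1_to_out(self.hid1)
                             + self.hid2_to_out(self.hid2))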
6. [1 mark] Run the code by typing
python3 spiral_main.py --net short
You should experiment to find a good value for the initial weight size, and try to find the minimum number of hidden nodes
per layer so that this ShortNet learns to correctly classify all of the training data within 20000 epochs, on almost all runs.
Include in your report the number of hidden nodes per layer, as well as the initial weight size and any other
metaparameters. The graph_output() method will generate a picture of the function computed by your ShortNet called
short_out.png, which you should include in your report.
7. [2 marks] Using graph_output() as a guide, write a method called graph_hidden(net, layer, node) which plots the activation (after
applying the tanh function) of the hidden node with the specified number (node) in the specified layer (1 or 2). (Note: if net is
of type PolarNet, graph_hidden() only needs to behave correctly when layer is 1, since PolarNet has only one hidden layer.)
Hint: you might need to modify forward() so that the hidden unit activations are retained, i.e. replace hid1 = torch.tanh(...)
with self.hid1 = torch.tanh(...)
Use this code to generate plots of all the hidden nodes in PolarNet, and all the hidden nodes in both layers of RawNet and
ShortNet, and include them in your report.
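Because the provided graph_output() is not reproduced here, the sketch below is hypothetical: it assumes graph_output() evaluates the network over a rectangular grid of (x,y) points and plots a thresholded value with pcolormesh, and that forward() stores the activations as self.hid1 and self.hid2 as suggested in the hint above. The grid ranges, step size and colormap are illustrative guesses.

import torch
import matplotlib.pyplot as plt

def graph_hidden(net, layer, node):
    # Build a grid of (x,y) points covering the training data.
    xrange = torch.arange(start=-7, end=7.1, step=0.01)
    yrange = torch.arange(start=-6.6, end=6.7, step=0.01)
    xcoord = xrange.repeat(yrange.size()[0])
    ycoord = torch.repeat_interleave(yrange, xrange.size()[0], dim=0)
    grid = torch.cat((xcoord.unsqueeze(1), ycoord.unsqueeze(1)), 1)

    with torch.no_grad():
        net.eval()
        net(grid)   # forward() stores activations in self.hid1 / self.hid2
        hid = net.hid1 if layer == 1 else net.hid2
        pred = (hid[:, node] >= 0).float()   # threshold the tanh activation
        plt.clf()
        plt.pcolormesh(xrange, yrange,
                       pred.view(yrange.size()[0], xrange.size()[0]),
                       shading='auto', cmap='Wistia')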
8. [9 marks] Discuss what you have learned from this exercise, including the following points:
a. the qualitative difference between the functions computed by the hidden layer nodes of the three models, and a brief
description of how the network uses these functions to achieve the classification
b. the effect of different values for initial weight size on the speed and success of learning, for both RawNet and ShortNet
c. the relative "naturalness" of the output function computed by the three networks, and the importance of
representation for deep learning tasks in general
d. you may like to also experiment with other changes and comment on the result - for example, changing batch size
from 97 to 194, using SGD instead of Adam, changing tanh to relu, adding a third hidden layer, etc.
Submission
You should submit by typing
give cs9444 hw1 kuzu.py spiral.py hw1.pdf
You can submit as many times as you like - later submissions will overwrite earlier ones. You can check that your submission has
been received by using the following command:
9444 classrun -check
The submission deadline is Sunday 12 July, 23:59. A 15% penalty will be applied to the (maximum) mark for every 24 hours late
after the deadline.
Additional information may be found in the FAQ and will be considered as part of the specification for the project. You should
check this page regularly.
Plagiarism Policy
Group submissions will not be allowed for this assignment. Your program must be entirely your own work. Plagiarism detection
software will be used to compare all submissions pairwise and serious penalties will be applied, particularly in the case of repeat
offences.
DO NOT COPY FROM OTHERS; DO NOT ALLOW ANYONE TO SEE YOUR CODE
Please refer to the UNSW Policy on Academic Integrity and Plagiarism if you require further clarification on this matter.
Good luck!
