
Problem 1: Building a convolutional neural network (ConvNet) to classify images of fruits and vegetables into their respective classes. (5 points)

You are encouraged to use PyTorch, TensorFlow, and the Keras API, but any other deep learning library (in Python, Julia, or MATLAB) would be acceptable for this homework assignment.

Data: The Fruits-360 dataset contains 100×100 images of 131 different varieties of fruits and vegetables. This dataset of 90,483 images can be downloaded from Kaggle datasets (https://www.kaggle.com/moltean/fruits). The training and testing images can be extracted from the downloaded file. Please follow the steps below for data processing.

· TensorFlow and Keras: The tensorflow.keras.utils.image_dataset_from_directory utility from Keras generates batches of tensor image data and is capable of real-time data transformation. A generator should be created for both the training and testing datasets, followed by an iterator that reads and processes the image data iteratively. The generators should read all classes of fruit images located in the Training and Testing directories.

· PyTorch: Use torchvision.datasets.ImageFolder to load images from the Training and Testing directories.

· tensorflow.keras.preprocessing.image.ImageDataGenerator or torchvision.transforms.Compose can be employed for image normalization and data augmentation (optional).

· The original 100×100 images should be scaled down to 75×75 resolution with these generators.

· Divide the training set into training and validation sets, with 85% and 15% of the training images in each, respectively.

· The entire training and testing dataset should also be divided into mini-batches of size 1000 and shuffled using a seed value of 42.

· The training and testing data generators are used to train and evaluate the ConvNet.
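The Keras data-processing steps above can be sketched as follows (a minimal sketch, assuming the dataset has been extracted into `Training` and `Test` directories; `image_dataset_from_directory` handles the 75×75 resizing, batching, shuffling, and the 85/15 train–validation split in one call):

```python
import tensorflow as tf

IMG_SIZE = (75, 75)   # downscaled from the original 100x100, per the spec
BATCH = 1000
SEED = 42

def make_datasets(train_dir, test_dir):
    """Build the training, validation, and test datasets."""
    common = dict(
        label_mode="categorical",  # one-hot labels for categorical cross-entropy
        image_size=IMG_SIZE,       # rescales 100x100 -> 75x75 on the fly
        batch_size=BATCH,
    )
    train_ds = tf.keras.utils.image_dataset_from_directory(
        train_dir, shuffle=True, seed=SEED,
        validation_split=0.15, subset="training", **common)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        train_dir, shuffle=True, seed=SEED,
        validation_split=0.15, subset="validation", **common)
    test_ds = tf.keras.utils.image_dataset_from_directory(
        test_dir, shuffle=False, **common)
    # Normalize pixel values to [0, 1]
    scale = lambda x, y: (x / 255.0, y)
    return train_ds.map(scale), val_ds.map(scale), test_ds.map(scale)
```

The same seed must be passed to both the `training` and `validation` subsets so the split is consistent between the two calls.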

Architecture: Define a Sequential model, wherein the layers are stacked sequentially and each layer has exactly one input tensor and one output tensor. Please build a ConvNet by adding layers to the Sequential model using the configuration below. For each of the layers, initialize the kernel weights from a Glorot uniform distribution with the random seed set to 99, and initialize the bias vector as a zero vector. In this architecture, you may use different dropout values [0.1, 0.3, 0.5] and report the impact of the dropout value on model performance.

Conv2D

Filters: 64, Kernel size: (3,3), Strides: (1,1), Padding: no padding, Activation: ReLU

MaxPooling2D

Pool size: (2,2), Strides: None, Padding: no padding

Conv2D

Filters: 128, Kernel size: (3,3), Strides: (1,1), Padding: no padding, Activation: ReLU

BatchNormalization

Momentum: 0.99, Epsilon: 0.001

Dropout

Rate: [0.1, 0.3, 0.5]

MaxPooling2D

Pool size: (2,2), Strides: None, Padding: no padding

Flatten

Dense

Units: 256, Activation: ReLU

Dense

Units: 131, Activation: Softmax
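One possible Keras realization of this architecture (a sketch; the layer order and hyperparameters follow the table above, with the dropout rate left as a parameter so the three required values can be compared):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_convnet(dropout_rate=0.3, num_classes=131):
    """Two-convolutional-layer ConvNet per the assignment spec."""
    init = tf.keras.initializers.GlorotUniform(seed=99)
    w = dict(kernel_initializer=init, bias_initializer="zeros")
    return tf.keras.Sequential([
        layers.Input(shape=(75, 75, 3)),
        layers.Conv2D(64, (3, 3), strides=(1, 1), padding="valid",
                      activation="relu", **w),
        layers.MaxPooling2D(pool_size=(2, 2)),  # strides default to pool size
        layers.Conv2D(128, (3, 3), strides=(1, 1), padding="valid",
                      activation="relu", **w),
        layers.BatchNormalization(momentum=0.99, epsilon=0.001),
        layers.Dropout(dropout_rate),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu", **w),
        layers.Dense(num_classes, activation="softmax", **w),
    ])
```

Keras maps "no padding" to `padding="valid"`, and `MaxPooling2D` with `strides=None` defaults its strides to the pool size.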

The performance of the CNN model is notably impacted by the number of convolutional layers it employs. In the preceding design, two convolutional layers were integrated. Kindly introduce an additional convolutional layer (as depicted in the updated architecture below) and elaborate on the roles of convolutional layers.

Conv2D

Filters: 64, Kernel size: (3,3), Strides: (1,1), Padding: no padding, Activation: ReLU

MaxPooling2D

Pool size: (2,2), Strides: None, Padding: no padding

Conv2D

Filters: 128, Kernel size: (3,3), Strides: (1,1), Padding: no padding, Activation: ReLU

MaxPooling2D

Pool size: (2,2), Strides: None, Padding: no padding

Conv2D

Filters: 256, Kernel size: (3,3), Strides: (1,1), Padding: no padding, Activation: ReLU

BatchNormalization

Momentum: 0.99, Epsilon: 0.001

Dropout

Rate: 0.3

MaxPooling2D

Pool size: (2,2), Strides: None, Padding: no padding

Flatten

Dense

Units: 512, Activation: ReLU

Dense

Units: 131, Activation: Softmax
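The three-convolutional-layer variant can be built the same way (a sketch following the updated table above; only the extra Conv2D/MaxPooling2D block, the fixed 0.3 dropout rate, and the 512-unit dense layer differ from the two-layer model):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_deeper_convnet(num_classes=131):
    """Three-convolutional-layer variant of the ConvNet."""
    init = tf.keras.initializers.GlorotUniform(seed=99)
    w = dict(kernel_initializer=init, bias_initializer="zeros")
    return tf.keras.Sequential([
        layers.Input(shape=(75, 75, 3)),
        layers.Conv2D(64, (3, 3), activation="relu", **w),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", **w),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(256, (3, 3), activation="relu", **w),
        layers.BatchNormalization(momentum=0.99, epsilon=0.001),
        layers.Dropout(0.3),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation="relu", **w),
        layers.Dense(num_classes, activation="softmax", **w),
    ])
```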

Training: The model is compiled by specifying the optimizer, the loss function, and the metrics to be recorded at each step of the training process. The Adam optimizer should minimize the categorical cross-entropy loss. The ConvNet model can be trained and evaluated with the previously created data generators. The training step size can be calculated by dividing the number of images in the generator by the batch size, for the training and testing data respectively.
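Compilation and training can be sketched as follows (assuming a model and the datasets from the earlier steps; note that when the data comes from a `tf.data` pipeline, Keras infers the number of steps per epoch automatically, so the step-size division is only needed if you feed a raw generator and pass `steps_per_epoch` yourself):

```python
import tensorflow as tf

def train_and_evaluate(model, train_ds, val_ds, test_ds, epochs=50):
    """Compile with Adam + categorical cross-entropy, train, and evaluate."""
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    # History records per-epoch loss/accuracy for the loss-curve plots.
    history = model.fit(train_ds, validation_data=val_ds, epochs=epochs)
    test_loss, test_acc = model.evaluate(test_ds)
    return history, test_acc
```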

Deliverables: Please report the training and validation accuracy after the training process is carried out for 50 epochs (you may train for 20 epochs if training is time-consuming), in addition to the accuracy achieved on the test dataset. Also, plot the loss curves for both the training and validation datasets. Discuss the effects of the dropout value and the number of convolutional layers on CNN model performance. Please make sure to submit your working code files along with the final results and the plots.
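The requested loss curves can be produced from the History object that `model.fit` returns, e.g.:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is needed
import matplotlib.pyplot as plt

def plot_loss_curves(history, out_path="loss_curves.png"):
    """Plot the per-epoch training and validation loss recorded by fit()."""
    plt.figure()
    plt.plot(history.history["loss"], label="training loss")
    plt.plot(history.history["val_loss"], label="validation loss")
    plt.xlabel("epoch")
    plt.ylabel("loss")
    plt.title("Training and validation loss")
    plt.legend()
    plt.savefig(out_path, dpi=150)
    plt.close()
```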

Bonus (+1): A skip connection in a neural network is a connection that skips one or more layers and connects to a later layer. Residual Networks (ResNets) have popularized the use of skip connections to address the vanishing-gradient problem, thereby enabling the training of deeper networks. Your task for this bonus part is to integrate such a skip connection; any type of skip connection is acceptable. For instance, link the output of the first convolutional layer directly to the input of the last convolutional layer in your model architecture. Based on your results, analyze and discuss any improvements or effects this change has on the model's performance.
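A skip connection cannot be expressed in a Sequential model, so one option is the Keras functional API. The sketch below is one possible design, not the required one: because the "valid"-padded branches have different shapes, a 1×1 convolution matches the channel count and a `Resizing` layer (available as `tf.keras.layers.Resizing` in TF 2.6+) matches the spatial size before the element-wise addition.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_skip_convnet(num_classes=131):
    """Three-conv-layer ConvNet with a skip connection from the first
    convolutional layer to the input of the last convolutional layer."""
    init = tf.keras.initializers.GlorotUniform(seed=99)
    w = dict(kernel_initializer=init, bias_initializer="zeros")
    inputs = layers.Input(shape=(75, 75, 3))

    x1 = layers.Conv2D(64, (3, 3), activation="relu", **w)(inputs)  # 73x73x64
    x = layers.MaxPooling2D((2, 2))(x1)                             # 36x36x64
    x = layers.Conv2D(128, (3, 3), activation="relu", **w)(x)       # 34x34x128
    x = layers.MaxPooling2D((2, 2))(x)                              # 17x17x128

    # Skip branch: 1x1 conv to match channels, resize to match spatial dims.
    skip = layers.Conv2D(128, (1, 1), **w)(x1)                      # 73x73x128
    skip = layers.Resizing(17, 17)(skip)                            # 17x17x128
    x = layers.Add()([x, skip])

    x = layers.Conv2D(256, (3, 3), activation="relu", **w)(x)       # 15x15x256
    x = layers.BatchNormalization(momentum=0.99, epsilon=0.001)(x)
    x = layers.Dropout(0.3)(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu", **w)(x)
    outputs = layers.Dense(num_classes, activation="softmax", **w)(x)
    return tf.keras.Model(inputs, outputs)
```

Other reconciliation strategies (strided 1×1 convolutions, cropping, or switching the main branch to "same" padding) would work equally well; concatenation via `layers.Concatenate` is a common alternative to addition.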

