
FIT5196-S2-2024 Assessment 2

This is a group assessment and is worth 40% of your total mark for FIT5196.

Due date: Friday 18 October 2024, 11:55pm

Task 1. Data Cleansing (50%)

For this assessment, you are required to write Python code to analyse your dataset, find and fix the problems in the data. The input and output of this task are shown below:

Table 1. The input and output of task 1

Input files:

●    Group<group_id>_dirty_data.csv
●    Group<group_id>_outlier_data.csv
●    Group<group_id>_missing_data.csv
●    warehouse.csv

Output files (submission):

●    Group<group_id>_dirty_data_solution.csv
●    Group<group_id>_outlier_data_solution.csv
●    Group<group_id>_missing_data_solution.csv

Other deliverables (submission):

●    Group<group_id>_ass2_task1.ipynb
●    Group<group_id>_ass2_task1.py

Note1: All files must be zipped into a file named Group<group_id>_ass2.zip (please use zip not rar, 7z, tar, etc.)

Note2: Replace <group_id> with your group id (do not include <>)

Note3: You can find your three input files in the folder with your group number here. Using the wrong files will result in zero marks.

Note4:  Please strictly follow the instructions in the appendix to generate the .ipynb and .py files.

Exploring and understanding the data is one of the most important parts of the data wrangling process. You are required to perform graphical and/or non-graphical EDA methods to understand the data first and then find the data problems. In this assessment, you have been provided with three data inputs along with the additional file warehouse.csv here. Due to an unexpected scenario, a portion of the data is missing or contains anomalous values. Thus, before moving to the next step in data analysis, you are required to perform the following tasks:


1.   Detect and fix errors in <group_id>_dirty_data.csv

2.   Impute the missing values in <group_id>_missing_data.csv

3.   Detect and remove outlier rows in <group_id>_outlier_data.csv

○    (w.r.t. the delivery_charges attribute only)

Project Background

As a starting point, here is what we know about the dataset in hand:

The dataset contains transactional retail data from an online electronics store (DigiCO) located in Melbourne, Australia. The store operates exclusively online, and it has three warehouses around Melbourne from which goods are delivered to customers.

Each instance of the data represents a single order from DigiCO store. The description of each data column is shown in Table 2.

Table 2. Description of the columns

order_id: A unique id for each order

customer_id: A unique id for each customer

date: The date the order was made, given in YYYY-MM-DD format

nearest_warehouse: A string denoting the name of the nearest warehouse to the customer

shopping_cart: A list of tuples representing the order items: the first element of the tuple is the item ordered, and the second element is the quantity ordered for that item

order_price: A float denoting the order price in AUD. The order price is the price of items before any discounts and/or delivery charges are applied.

customer_lat: Latitude of the customer's location

customer_long: Longitude of the customer's location

coupon_discount: An integer denoting the percentage discount to be applied to the order_price.

distance_to_nearest_warehouse: A float representing the arc distance, in kilometres, between the customer and the nearest warehouse to him/her (radius of earth: 6378 KM)

delivery_charges: A float representing the delivery charges of the order

Notes:

1.   The output csv files must have the exact same columns as the respective input files. Any misspelling or mismatch will lead to a malfunction of the auto-marker which will in turn lead to losing marks.

2.   In the file  Group<group_id>_dirty_data.csv, any  row can carry no more than one anomaly. (i.e. there can only be up to one issue in a single row.)

3.  All anomalies in dirty data have one and only one possible fix.

4.   There are no data anomalies in the file Group<group_id>_outlier_data.csv except for outliers. Similarly,   there  are  only  coverage  data  anomalies  (i.e.  no  other  data  anomalies)  in Group<group_id>_missing_data.csv.

5.   The  retail  store  has three different warehouses in Melbourne (see warehouse.csv for their locations)

6.   The retail store focuses only on 10 branded items and sells them at competitive prices.

7.   In order to get the item unit price, a useful python package to solve multivariable equations is numpy.linalg.
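As an illustrative sketch of the hint in notes 6-7 (not the assignment's actual data): if each pre-discount order_price equals the quantity-weighted sum of the unknown unit prices, then enough independent orders give a solvable linear system. The quantities and prices below are invented for a three-item toy case; with 10 items you would need 10 independent orders.

```python
import numpy as np

# Hypothetical example: three orders over three items.
# Each row holds the quantities of (item_A, item_B, item_C) in one order.
quantities = np.array([[2, 1, 0],
                       [0, 3, 1],
                       [1, 0, 2]])
# Pre-discount order_price for each of the three orders (invented values).
order_prices = np.array([500.0, 1100.0, 500.0])

# Solve quantities @ unit_prices == order_prices for the unit prices.
unit_prices = np.linalg.solve(quantities, order_prices)
```

Rows taken from rows of dirty data should be ones already verified as error-free, otherwise the solved unit prices will be wrong.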

8.   The distance is calculated as the Haversine distance (with radius of earth = 6378 KM), as shown here.
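A minimal sketch of the Haversine distance with the radius specified above (the function name and signature are our own):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6378  # radius of earth specified in the assignment


def haversine(lat1, lon1, lat2, lon2):
    """Arc distance in kilometres between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```

This can be used both to validate distance_to_nearest_warehouse and to check the nearest_warehouse label against the three warehouse locations in warehouse.csv.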

9.   The store has different business rules depending on the seasons to match the different demands of each season. For example, the delivery charge is calculated using a linear model which differs depending on the season. The model depends linearly (but in different ways for each season) on:

○    Distance between customer and nearest warehouse

○    Whether the customer wants an expedited delivery

○   Whether the customer was happy with his/her last purchase (if no previous purchase, it is assumed that the customer is happy)

10. It is recommended to use sklearn.linear_model.LinearRegression for solving the linear model as demonstrated in the tutorials.

11.  Using proper data for model training is crucial to having a good linear model (i.e. an R² score over 0.97 and very close to 1) to validate the delivery charges. The better your model is, the more accurate your result will be.
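The modelling idea in notes 9-11 can be sketched with synthetic data. Everything below is invented for illustration (feature names, coefficients, the noiseless target); the assignment's real charges must be learned from rows known to be clean, with one model fitted per season.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for one season's orders (coefficients are invented).
rng = np.random.default_rng(42)
n = 50
X = np.column_stack([
    rng.uniform(0, 20, n),    # distance_to_nearest_warehouse (km)
    rng.integers(0, 2, n),    # is_expedited_delivery (0/1)
    rng.integers(0, 2, n),    # is_happy_customer (0/1)
])
y = 5.0 + 0.9 * X[:, 0] + 12.0 * X[:, 1] - 3.0 * X[:, 2]  # noiseless linear charges

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)  # essentially 1.0 on this clean synthetic data
```

A fitted model with an R² close to 1 can then flag delivery_charges values that deviate far from the prediction.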

12. To check whether a customer is happy with their last order, the customer's latest review is classified using a sentiment analysis classifier. SentimentIntensityAnalyzer from nltk.sentiment.vader is used to obtain the polarity score. A sentiment is considered positive if it has a 'compound' polarity score of 0.05 or higher, and is considered negative otherwise. Refer to this link for more details on how to use this module.
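The thresholding rule above can be captured in a small helper (the function name is ours; in the actual pipeline the compound score would come from nltk's SentimentIntensityAnalyzer.polarity_scores, which requires the vader_lexicon download):

```python
def is_happy(compound_score: float) -> bool:
    """Positive sentiment iff the VADER 'compound' score is 0.05 or higher."""
    # In the real pipeline (requires nltk and the vader_lexicon resource):
    #   from nltk.sentiment.vader import SentimentIntensityAnalyzer
    #   compound_score = SentimentIntensityAnalyzer().polarity_scores(review)["compound"]
    return compound_score >= 0.05
```

Note that 0.05 itself counts as positive, and customers with no previous purchase are assumed happy per note 9.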

13. If the customer provided a coupon during purchase, the coupon discount percentage will be applied to the order price before adding the delivery charges (i.e. the delivery charges will never be discounted).
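The pricing rule above, as a small helper (name and signature are our own):

```python
def order_total(order_price: float, coupon_discount: int, delivery_charges: float) -> float:
    """Discount applies to order_price only; delivery charges are never discounted."""
    return order_price * (1 - coupon_discount / 100) + delivery_charges
```

For example, a $200 order with a 10% coupon and $15 delivery comes to 200 * 0.9 + 15 = $195.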

14. The below columns are error-free (i.e. don’t look for any errors in dirty data for them):

○    coupon_discount

○    delivery_charges

○    The ordered quantity values in the shopping_cart attribute

○    order_id

○    customer_id

○    latest_customer_review

15. For missing data imputation, you are recommended to try all possible methods to impute missing values and keep the most appropriate one that could provide the best performance.

16. As EDA is part of this assessment, no further information will be given publicly regarding the data. However, you can brainstorm with the teaching team during tutorials or on the Ed forum.

17. No libraries/packages restriction.

Methodology (10%)

The report <group_id>_ass2_task1.ipynb should demonstrate the methodology (including all steps) to achieve the correct results.

You need to demonstrate your solution using correct steps.

●    Your solution should be presented in a proper way including all required steps.

●    You need to select and use the appropriate Python functions for input, process and output.

●    Your solution should be an efficient one, without redundant operations or unnecessary reading and writing of the data.

Task 2: Data Reshaping (15%)

You need to complete task 2 with the suburb_info.xlsx file ONLY. With the given property and suburb related data, you need to study the effect of different normalisation/transformation methods (e.g. standardisation, min-max normalisation, log, power, box-cox transformation) on these columns: number_of_houses, number_of_units, population, aus_born_perc, median_income, median_house_price. You need to observe and explain their effect, assuming we want to develop a linear model to predict the "median_house_price" using the 5 attributes mentioned above.

When reshaping the data, we normally have two main criteria.

●    Second, we want our features to have as much linear relationship as possible with the target variable (i.e., median_house_price).

You need to first explore the data to see if any scaling or transformation is necessary (if yes, why? and if not, also why?) and then perform appropriate actions and document your results and observations. Please note that the aim of this task is to prepare the data for a linear regression model, not to build the linear model. That is, you need to record all your steps, from loading the raw data to completing all the required transformations, if any.
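As a minimal sketch of a few of the candidate transformations applied to a single column (the values are invented, and scipy.stats.boxcox or sklearn's scalers could equally be used):

```python
import numpy as np

# Hypothetical median_house_price values (AUD); figures are invented for illustration.
prices = np.array([650_000.0, 820_000.0, 1_200_000.0, 930_000.0, 2_400_000.0])

standardised = (prices - prices.mean()) / prices.std()             # zero mean, unit variance
min_max = (prices - prices.min()) / (prices.max() - prices.min())  # rescaled to [0, 1]
log_scaled = np.log(prices)                                        # compresses the long right tail
```

Comparing histograms and scatter plots against median_house_price before and after each transformation is one way to argue which (if any) improves linearity.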

Input files: suburb_info.xlsx

Submission: Group<group_id>_ass2_task2.ipynb

You could consider the scenario of task 2 to be an open exploratory project: Jackie and Kiara have got some funding to do an exploratory consulting project on the property market. We wish to understand any interesting insights from the relevant features in different suburbs of Melbourne. Before we step into the final linear regression modelling stage, we wish to hire you to prepare the data for us and tell us whether any transformation/normalisation is required. Will those data satisfy the assumptions of linear regression? How could we make our data more suitable for the later modelling stage?

As an exploratory task, you only need to document your journey of exploration properly in your .ipynb file; no other output file is to be submitted for task 2. We will mark based on the .ipynb content for task 2.

Table 3. Description of the suburb_info.xlsx file.

suburb: The suburb name, which serves as the index of the data

number_of_houses: The number of houses in the property suburb

number_of_units: The number of units in the property suburb

municipality: The municipality of the property suburb

aus_born_perc: The percentage of the Australian-born population in the property suburb

median_income: The median income of the population in the property suburb

median_house_price: The median 'house' price in the property suburb

population: The population in the property suburb


Task 3: Project Reflective Report (15%)

Input files: N/A

Submission: Group<group_id>_report.pdf

3.1 Feedback Session During Week 10 Applied Session

Tasks: Please attend the week 10 applied session and present your work in progress to your TA for feedback. You need to:

1.   Present your current progress

2.   Outline any future plans you wish to undertake

3.   Record the TA's suggestions

4.   Continue your work with tailored solutions/follow-ups based on the suggestions

Details:

●    Time/Date: Week 10, during your allocated Applied sessions

●    Duration: Approximately 5-8 minutes per group

●    Location: Normal location of allocated applied sessions in your Allocate+ records

●    Criterion: Please refer to the A2 marking rubrics

3.2 Group Reflection Presentation (Hurdle)

There will be a reflective presentation for your A2. The aim of the presentation is to check your understanding of your A2 project and make sure all submissions comply with the academic integrity requirements of Monash.

Details:

●    Time/Date: Week 12, during your allocated Applied sessions

●    Duration: Approximately 5-10 minutes per group

●    Location: Normal location of allocated applied sessions in your Allocate+ records

●    Arrangement: We will provide a time schedule for every group during their allocated session; please arrive at your allocated time slot. If you arrive earlier, please wait patiently outside the room.

●    Content: Please briefly describe your methodology/logic of A2 (at least 80% of A2; for detailed subtasks please refer to the A2 marking rubrics) and answer questions, if any

●    Criterion: Please refer to the A2 marking rubrics



