
FIT5202 - Data processing for Big Data (SSB 2025)

Assignment 1: Analysing Food Delivery Data

Due Date: 23:55 Friday 17/Jan/2025 (End of week 3)

Weight: 10% of the final marks

Background

Food delivery services have become an integral part of modern society, revolutionizing the way we consume meals and interact with the food industry. These platforms, accessible through websites and mobile apps, provide a convenient bridge between restaurants and consumers, allowing users to browse menus, place orders, and have food delivered directly to their doorstep with just a few taps. In today's fast-paced world, where time is a precious commodity, food delivery services offer an invaluable solution, catering to busy lifestyles, limited mobility, and the ever-present desire for convenience. They empower individuals to enjoy a diverse range of cuisines without leaving their homes or offices, support local restaurants by expanding their reach, and have even become a crucial lifeline during times of crisis, such as lockdowns and emergencies, ensuring access to essential sustenance and supporting the economy. As a result of its convenience and the increasing preference for on-demand services, food delivery has become a very important part of modern life, impacting everything from our daily routines to the broader economic landscape.

In the food delivery industry, accurate on-time delivery prediction is paramount. Big data processing allows companies to achieve this by analyzing vast datasets encompassing order details, driver performance, real-time traffic, and even weather.

Sophisticated algorithms leverage this data to build predictive models. These models learn from historical trends, for example, a restaurant's longer preparation times during peak hours or a driver's faster navigation in specific areas. Real-time data, like driver GPS location and live traffic, further refine these predictions, enabling dynamic adjustments to estimated delivery times.

The benefits are substantial. Firstly, customer satisfaction improves with reliable delivery estimates and transparent communication regarding delays. Secondly, operational efficiency increases through optimized driver scheduling and route planning, leading to reduced costs and faster deliveries. Furthermore, accurate predictions empower proactive measures to mitigate delays. The system can alert customers to potential issues, offer compensation, and trigger interventions like expediting order preparation. If an order is not delivered on time, quality after-service should follow, such as offering refunds, providing future discounts, or simply offering a sincere apology.

By mastering on-time delivery prediction through big data, food delivery companies gain a crucial competitive edge. They can meet and exceed customer expectations, foster loyalty, and drive sustainable growth in a demanding market. As the industry evolves, leveraging big data for accurate delivery forecasting will remain a key differentiator for success.

This series of assignments will immerse you in the world of big data analytics, specifically within the context of a modern, data-driven application: food delivery services. We will explore the entire lifecycle of data processing, from analyzing historical information to building and deploying real-time machine learning models. Each assignment builds upon the last, culminating in a comprehensive understanding of how big data technologies can be leveraged to optimize performance and enhance user experience.

In the first assignment (A1), we will delve into historical datasets, performing data analysis to uncover key trends and patterns related to delivery times, order volumes, and other crucial metrics. This foundational understanding will pave the way for assignment 2A, where we will harness the power of Apache Spark's MLlib to construct and train machine learning models, focusing on predicting delivery times with accuracy and efficiency. Finally, assignment 2B will elevate our analysis to the real-time domain, utilizing Apache Spark Structured Streaming to process live data streams and dynamically adjust predictions, providing a glimpse into the cutting-edge techniques driving modern, responsive applications. Through this hands-on journey, you will gain practical experience with industry-standard tools and develop a strong conceptual understanding of how big data powers the dynamic world of on-demand services.

In A1, we will perform historical data analysis using Apache Spark, with the RDD, DataFrame, and SQL APIs learnt in topics 1-4.

The Dataset

The dataset can be downloaded from Moodle.

You will find the following files after extracting the zip file:

1)  delivery_order.csv: Contains food order records.

2)  geolocation.csv: Contains geographical information about restaurants and delivery locations.

3)  delivery_person.csv: Contains basic driver information, their rating and vehicle information.

The metadata of the dataset can be found in the appendix at the end of this document. (Note: The dataset is a mixture of real-life and synthetic data; therefore, some anomalies may exist in the dataset. Data cleansing is not mandatory in this assignment.)

Assignment Information

The assignment consists of three parts: Working with RDDs, Working with DataFrames, and Comparison of three forms of Spark abstractions. In this assignment, you are required to implement various solutions based on RDDs and DataFrames in PySpark for the given queries related to food delivery data analysis.

Getting Started

●   Download your dataset from Moodle.

●   Download a template file for submission purposes:

●   A1_template.ipynb: the Jupyter notebook file in which to write your solution. Rename it into the format A1_xxx0000.ipynb, where xxx0000 is your authcate ID. This file contains your code solution.

●   For this assignment, you will use Python 3+ and PySpark 3.5.0. (The environment is provided as a Docker image, the same one you use in labs.)

Part 1: Working with RDDs (30%)

In this section, you need to create RDDs from the given datasets, perform partitioning on these RDDs, and use various RDD operations to answer the queries.

1.1 Data Preparation and Loading (5%)

1.  Write the code to create a SparkContext object using SparkSession. To create a SparkSession, you first need to build a SparkConf object that contains information about your application. Use Melbourne time as the session timezone. Give your application an appropriate name and run Spark locally with 4 cores on your machine.

2.  Load the CSV files into multiple RDDs.

3.  For each RDD, remove the header rows and display the total count and first 10 records.

4.  Drop records with invalid information (NaN or Null) in any column.

1.2 Data Partitioning in RDD (15%)

1.  For each RDD, using Spark's default partitioning, print out the total number of partitions and the number of records in each partition. (5%)

2.  Answer the following questions:

a.  How many partitions do the above RDDs have?

b.  How is the data in these RDDs partitioned by default, when we do not explicitly specify any partitioning strategy? Can you explain why it is partitioned into this number of partitions?

c.  Assuming we are querying the dataset based on order timestamp, can you think of a better strategy for partitioning the data based on your available hardware resources?

Write your explanation in Markdown cells. (5%)

3.  Create a user-defined function (UDF) to transform a timestamp to ISO format (YYYY-MM-DD HH:mm:ss), then call the UDF to transform the two timestamps (order_ts and ready_ts) into order_datetime and ready_datetime. (5%)
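A sketch of the UDF in step 3, assuming the raw timestamps are Unix epoch seconds (check the metadata appendix and adjust the parsing if the raw format differs). The fixed +11:00 offset is an assumption standing in for Melbourne daylight-saving time; on an RDD, the "UDF" is simply a Python function applied with map.

```python
from datetime import datetime, timezone, timedelta

# Assumption: fixed AEDT offset used for illustration; a production version
# would use a proper tz database zone such as Australia/Melbourne.
MELB = timezone(timedelta(hours=11))

def to_iso(ts):
    """Transform a raw epoch-second timestamp into ISO format YYYY-MM-DD HH:mm:ss."""
    return datetime.fromtimestamp(int(ts), tz=MELB).strftime("%Y-%m-%d %H:%M:%S")

# Applying it to an orders RDD (ORDER_TS / READY_TS are hypothetical column
# indices for order_ts and ready_ts):
#   orders.map(lambda r: r + [to_iso(r[ORDER_TS]), to_iso(r[READY_TS])])
print(to_iso("1700000000"))
```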



1.3 Query/Analysis (10%)

For this part, write relevant RDD operations to answer the following questions.

1.  Extract weekday (Monday-Sunday) information from the orders and print the total number of orders on each weekday. (5%)

2.  Show a list of type_of_order and the average preparation time in minutes (ready_ts - order_ts). (5%)
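One possible shape for these two queries, assuming epoch-second timestamps; the helper functions below are runnable, while the RDD pipelines are shown as comments with hypothetical column indices (ORDER_TS, READY_TS, TYPE).

```python
from datetime import datetime, timezone

def weekday_name(ts):
    """Monday-Sunday weekday name for an epoch-second timestamp (UTC for illustration)."""
    return datetime.fromtimestamp(int(ts), tz=timezone.utc).strftime("%A")

def prep_minutes(order_ts, ready_ts):
    """Preparation time in minutes = (ready_ts - order_ts) / 60."""
    return (int(ready_ts) - int(order_ts)) / 60

# Query 1 -- orders per weekday, as a classic key/value count:
#   orders.map(lambda r: (weekday_name(r[ORDER_TS]), 1)) \
#         .reduceByKey(lambda a, b: a + b)
#
# Query 2 -- average preparation minutes per type_of_order (sum, count) pairs:
#   orders.map(lambda r: (r[TYPE], (prep_minutes(r[ORDER_TS], r[READY_TS]), 1))) \
#         .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])) \
#         .mapValues(lambda s: s[0] / s[1])
print(weekday_name(0), prep_minutes(1700000000, 1700000900))  # Thursday 15.0
```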

Part 2. Working with DataFrames (45%)

In this section, you need to load the given datasets into PySpark DataFrames and use DataFrame functions to answer the queries.

2.1 Data Preparation and Loading (5%)

1.  Load the CSV files into separate dataframes. When you create your dataframes, please refer to the metadata file and think about the appropriate data type for each column.

2.  Display the schema of the dataframes.

When the dataset is large, do you need all columns? How can you optimize memory usage? Do you need a customized data partitioning strategy? (Note: think about these questions, but you don't need to answer them.)

2.2 Query/Analysis (40%)

Implement the following queries using dataframes. You need to be able to perform operations like transforming, filtering, sorting, joining and grouping by using the functions provided by the DataFrame API.

1.  Write a function to encode/transform weather conditions to integers and drop the original string column. You can decide your own encoding scheme (e.g. Sunny = 0, Cloudy = 1, Fog = 2, etc.). (5%)

2.  Calculate the number of orders for each hour. Show the results in a table and plot a bar chart. (5%)

3.  Join the delivery_order data frame with the geolocation data frame, calculate the distance between a restaurant and the delivery location, and store the distance in a new column named delivery_distance. (hint: You may need to install an additional library like GeoPandas to calculate the distance between two points.) (5%)

4.  Using the data from 3, find the top 10 drivers travelling the longest distance. (5%)

5.  For each type of order, plot a histogram of meal preparation time. The plot can be done with multiple legends or sub-plots. (note: you can decide your bin size). (10%)


6.  (Open Question) Explore the dataset and use a delivery person's rating as a performance indicator. Is a lower rating usually correlated with a longer delivery time? What might be the contributing factors to drivers' low ratings? Please include one plot and a discussion based on your observations (no word limit, but please keep it concise). (10%)
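For the distance calculation in query 3, the hint suggests GeoPandas; as a dependency-free alternative, a haversine great-circle distance can be wrapped in a Spark UDF. The helper below is runnable; the DataFrame wiring is shown as comments, and the column names (rest_lat, rest_lon, dest_lat, dest_lon) are assumptions to be replaced with the ones in geolocation.csv.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))   # 6371 km = mean Earth radius

# Illustrative DataFrame usage (column names are assumptions):
#   from pyspark.sql import functions as F
#   dist_udf = F.udf(haversine_km, "double")
#   joined = orders.join(geo, "order_id")
#   joined = joined.withColumn("delivery_distance",
#                              dist_udf("rest_lat", "rest_lon",
#                                       "dest_lat", "dest_lon"))
# Query 4 then becomes a groupBy on the driver id, summing delivery_distance
# and taking the top 10.
print(haversine_km(-37.8136, 144.9631, -37.8183, 144.9671))  # roughly 0.6 km
```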

Part 3: RDDs vs DataFrames vs Spark SQL (25%)

Implement the following query using RDDs, DataFrames, and Spark SQL separately. Log the time taken for each query in each approach using the "%%time" built-in magic command in Jupyter Notebook and discuss the performance differences between these 3 approaches.

(Complex Query) Calculate the time taken on the road (defined as the total time taken minus the restaurant's order preparation time, i.e., total time - (ready_ts - order_ts)). For each road_condition, using a 10-minute bucket size for time on the road (e.g. 0-10, 10-20, 20-30, etc.), show the percentage of orders in each bucket.

(note: You can reuse the loaded data/variables from parts 1 & 2.)

(hint: You may create intermediate RDD/dataframes for this query.)

1)  Implement the above query using RDDs, DataFrames and SQL separately and print the results. (Note: The three different approaches should produce the same results.) (15%)

2)  Which one is the easiest to implement, in your opinion? Log the time taken for each query and observe the query execution times. Among RDD, DataFrame, and Spark SQL, which is the fastest, and why? Please include proper references. (Maximum 500 words.) (10%)
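The bucketing logic is identical in all three approaches; only the API differs. A sketch, assuming epoch-second timestamps and a hypothetical delivered_ts column marking the end of delivery (substitute the actual column from the metadata appendix); the helpers are runnable and the DataFrame version is commented.

```python
def road_minutes(order_ts, ready_ts, delivered_ts):
    """Time on the road = total time - preparation time, in minutes.
    Algebraically this reduces to (delivered_ts - ready_ts) / 60."""
    total = (int(delivered_ts) - int(order_ts)) / 60
    prep = (int(ready_ts) - int(order_ts)) / 60
    return total - prep

def bucket_label(minutes, size=10):
    """Map minutes into a bucket label, e.g. 0-10, 10-20, 20-30."""
    lo = int(minutes // size) * size
    return f"{lo}-{lo + size}"

# DataFrame version (illustrative column names):
#   from pyspark.sql import functions as F
#   df = df.withColumn("road_min", (F.col("delivered_ts") - F.col("ready_ts")) / 60)
#   df = df.withColumn("bucket", F.floor(F.col("road_min") / 10) * 10)
#   counts = df.groupBy("road_condition", "bucket").count()
#   totals = df.groupBy("road_condition").agg(F.count("*").alias("total"))
#   pct = counts.join(totals, "road_condition") \
#               .withColumn("percentage", 100 * F.col("count") / F.col("total"))
# The RDD version pairs ((road_condition, bucket_label), 1) and reduces by key;
# the SQL version expresses the same grouping over a temp view.
print(bucket_label(road_minutes(0, 600, 1500)))  # 15 road minutes -> 10-20
```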

Submission

You should submit your final version of the assignment solution online via Moodle. You must submit the following files:

-    Your jupyter notebook file (e.g., A1_authcate.ipynb).

-    A pdf file saved from the Jupyter notebook with all outputs, following the naming format A1_authcate.pdf.


