
P8 (4% of grade): Google Cloud Services

Overview

In this project, you'll be using four Google cloud services to analyze a dataset describing all the schools in Wisconsin. You'll write your code in Jupyter, which will run on a cloud VM, and you'll upload your data to a GCS bucket. You'll use Dataform to create a pipeline that brings this data (in combination with public geographic data) into a BigQuery dataset. Finally, you'll use BigQuery to answer questions about the data.

Learning objectives:

create a VM on a public cloud and connect to it via SSH

create a GCS bucket and upload data to it

write a Dataform pipeline to bring data from GCS to BigQuery storage

write BigQuery queries to manipulate geographic data

Before starting, please review the general project directions.

Clarifications/Correction

April 28: added autobadger info for p8 (version 0.1.21)

April 30: updated autobadger (version 0.1.22); minor fix: it now accepts both single and double quotes (e.g., ${ref('wi_counties')} and ${ref("wi_counties")}) in wi_county_schools.sqlx

Part 1: Virtual Machine Setup

Educational Credits

We'll use Google Cloud Platform (GCP) for our virtual machines. They've generously given each 544 student $50 in credits. You can obtain the credits here: https://canvas.wisc.edu/courses/450562/discussion_topics/2122215

Launch VM

You likely have some experience creating VMs from a prior course (320 or 400), so these directions will be brief. You can decide details like which geographic region to create your VM in (I picked Iowa since it's nearby), but here are some highlights:

you can launch and see VMs from here: https://console.cloud.google.com/compute/instances

Be sure to choose e2-medium for machine type.

Update your boot disk settings to use the Ubuntu 24.04 LTS public image -- select the x86/64 version (not Arm64) -- with a boot disk size of 25 GB. Once you select these settings, the estimated cost should be about $25/month for the VM (if it's not, you probably selected something wrong, and might run out of free credits before the end of the project).

you may have modified firewall settings for other courses, but that's not necessary for 544

you'll need to set up an SSH key so you can connect from your laptop: https://console.cloud.google.com/compute/metadata?tab=sshkeys (the browser-based SSH client won't work for what we need to do in this class)

There's a bit more explanation about SSH keys here.

(Optional) If you're worried about exhausting the free tier and your educational credits, you might also want to set up a quota here.

GitLab

SSH authentication to GitLab is only enabled from machines on the same network (like your CSL VM). That means you cannot use SSH keys to clone/pull/push your repo while working on your Google VM. Instead, set a strong password here:

https://git.doit.wisc.edu/-/user_settings/password/edit

Then, when you clone your repo to your VM, use the HTTPS address instead of the SSH address.

Of course, if you're running into trouble, a short-term solution is to have your repo on a different machine (your laptop, or CSL VM) and scp files around as necessary.

Jupyter Setup

Set up a Python venv for this project and install some packages:

sudo apt install python3-venv

python3 -m venv venv

source venv/bin/activate

pip3 install jupyterlab google-cloud-bigquery google-cloud-bigquery-storage bigquery-magics pyarrow tqdm ipywidgets pandas matplotlib db-dtypes pandas-gbq google-cloud-dataform google-cloud-storage

With your venv still active, run:

python3 -m jupyterlab --no-browser

1. Set up an SSH tunnel and connect (as done in previous projects)

2. You'll be prompted to copy/paste a token from the terminal to the browser.

3. Finally, create a notebook named p8.ipynb inside of src/ in your project repo.

You'll answer 10 questions in this notebook.

Authentication

You'll also need to give your VM permission to access BigQuery. You can do so by pasting the following into the terminal on your VM and following the directions. Please read the following cautions before running this command.

gcloud auth application-default login --scopes=openid,https://www.googleapis.com/auth/cloud-platform

Caution

1. While running the command, it will give you a link to open in your browser. If you have multiple Google accounts in your browser and do not want this to select the default one, then do the following:

paste the link into an incognito window

log in to the Google account of your choice

2. Be careful, because if a malicious party were to gain access to your VM, they would have free access to all your cloud services. For example, if your Jupyter is listening publicly (i.e., 0.0.0.0 instead of localhost) and you have a weak password (or no password), someone could gain access to your VM, and then these other resources.

3. You may need to run this command again if you run into an error like "Reauthentication is needed".

When you're not actively working, you may want to revoke (take away) the permissions your VM has to minimize risk:

gcloud auth application-default revoke

Questions

The following questions will help you check that you created your VM correctly.

Q1: What is the OS release?

Write some Python code in a cell starting with #q1 that reads /etc/os-release into a string that is used for the cell output (last line of the cell -- don't use print).
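For example, a minimal cell might look like this (the final expression, not print, supplies the cell output):

#q1
with open("/etc/os-release") as f:
    os_release = f.read()
os_release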

Q2: What CPUs does your VM have?

Use the subprocess module that comes with Python to programmatically run the lscpu program and capture its output in a string that you'll use as the cell output.
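A minimal sketch using subprocess.run:

#q2
import subprocess

# run lscpu and capture its stdout as a string
result = subprocess.run(["lscpu"], capture_output=True, text=True, check=True)
result.stdout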

Part 2: GCS Bucket

Navigate to the Google Cloud Storage WebUI and manually create a private GCS bucket. Bucket names must be globally unique, so you'll need to think of a name that nobody else in the world has already used. For location, choose Multi-region, us (multiple regions in the United States).

Use pyarrow (https://arrow.apache.org/docs/python/generated/pyarrow.fs.GcsFileSystem.html) in your Jupyter notebook to connect to your bucket. Read the bytes from the local wi-schools-raw.parquet file and use pyarrow to write them to a new GCS file in your bucket.
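A minimal sketch of the upload; the bucket name below is a placeholder, and it assumes wi-schools-raw.parquet sits in your notebook's working directory:

import pyarrow.fs

BUCKET = "your-unique-bucket-name"  # placeholder: use your own bucket name
gcs = pyarrow.fs.GcsFileSystem()

# read the local parquet file as raw bytes
with open("wi-schools-raw.parquet", "rb") as f:
    data = f.read()

# write those bytes to a new object in the bucket
with gcs.open_output_stream(f"{BUCKET}/wi-schools-raw.parquet") as out:
    out.write(data)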

Q3: What are the paths of all the files/directories in the top-level of your bucket?

Answer with a Python list.
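One way to list the top level, continuing with the same placeholder bucket name:

import pyarrow.fs

BUCKET = "your-unique-bucket-name"  # placeholder
gcs = pyarrow.fs.GcsFileSystem()

# a FileSelector over the bucket root lists the top-level entries
infos = gcs.get_file_info(pyarrow.fs.FileSelector(BUCKET))
[info.path for info in infos]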

Q4: What is the modification time of wi-schools-raw.parquet, in nanoseconds?
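pyarrow's FileInfo exposes the modification time directly; a sketch with the same placeholder bucket:

import pyarrow.fs

BUCKET = "your-unique-bucket-name"  # placeholder
gcs = pyarrow.fs.GcsFileSystem()
info = gcs.get_file_info(f"{BUCKET}/wi-schools-raw.parquet")
info.mtime_ns  # modification time in nanoseconds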

Part 3: Dataform

You'll use Dataform to create a pipeline that brings data from your GCS bucket into a BigQuery dataset. Create a private BigQuery dataset named p8 within your project. You can do this manually via the cloud console or programmatically using a BigQuery client in your notebook. The dataset should have "US" as the data location.
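If you go the programmatic route, a minimal sketch looks like this:

from google.cloud import bigquery

bq = bigquery.Client()
ds = bigquery.Dataset(f"{bq.project}.p8")
ds.location = "US"  # the dataset must live in the US multi-region
bq.create_dataset(ds, exists_ok=True)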

Dataform Setup

Now, let's create a Dataform workspace:

1. Navigate to BigQuery

2. Click Dataform in the menu on the left

3. Click Create Repository

4. Choose any "Repository ID" you like, and select "us-central1" for the "Region".

5. Click "Grant all" so that Dataform. can run BigQuery queries

6. Navigate to the newly created repo. Click Create Development Workspace. Give it an ID and navigate to it.

7. Click Initialize Workspace

8. Navigate to the Workspace and edit the "workflow_settings.yaml" file. defaultLocation should be "US", and defaultDataset should be "p8".

Your Dataform code will be run by a default service account; you can think of a service account as a non-human user with its own permissions. Your service account will need some additional permissions:

1. go to IAM (https://console.cloud.google.com/iam-admin/iam) to update permissions

2. click "Include Google-provided role grants

3. click the edit icon by the "Dataform. Service Account"

4. it may already have some roles, but add roles as necessary to make sure it has these four: BigQuery Admin , BigQuery Job User , Dataform. Service Agent , and Storage Object Viewer

Pipeline Code

Now, create 3 .sqlx files on your VM defining Dataform actions: wi_counties.sqlx, schools.sqlx, and wi_county_schools.sqlx.

Your notebook should contain code that uses a dataform.DataformClient to call .write_file(...) to upload each of these to the workspace that you created manually. Your code should also call create_compilation_result to compile your sqlx files.
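A minimal sketch of the upload-and-compile step; the project/repository/workspace IDs and the local file paths below are placeholders to substitute with your own:

from google.cloud import dataform_v1beta1 as dataform

df_client = dataform.DataformClient()

# placeholder resource names -- substitute your own IDs
repo = "projects/YOUR_PROJECT/locations/us-central1/repositories/YOUR_REPO"
workspace = f"{repo}/workspaces/YOUR_WORKSPACE"

# upload each local .sqlx file into the workspace's definitions/ directory
for fname in ["wi_counties.sqlx", "schools.sqlx", "wi_county_schools.sqlx"]:
    with open(f"src/definitions/{fname}", "rb") as f:
        df_client.write_file(request={
            "workspace": workspace,
            "path": f"definitions/{fname}",
            "contents": f.read(),
        })

# compile the workspace; keep the name of the result for Q7
result = df_client.create_compilation_result(request={
    "parent": repo,
    "compilation_result": {"workspace": workspace},
})
compilation_result_name = result.name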

wi_counties.sqlx: select from bigquery-public-data.geo_us_boundaries.counties to materialize a table of the counties within Wisconsin (you can hardcode the FIPS code 55 for Wisconsin).

schools.sqlx: use a LOAD DATA OVERWRITE... query to create a table .p8.schools from the wi-schools-raw.parquet file in your GCS bucket. The config should specify type: "operations" and hasOutput: true; because the SQL doesn't return results, Dataform cannot infer these details.

wi_county_schools.sqlx: this should geographically join schools to counties to produce a table with (a) all the columns of schools, (b) a location column which is an ST_GEOGPOINT based on the school's coordinates, and (c) a county_name column indicating the county in which the school is located. This should use Dataform references to the other tables (wi_counties and schools), like ${ref('INPUT_TABLE')}.

If you did the above correctly, the "Compiled graph" (which you can view for Dataform in the cloud console) should show wi_counties and schools both feeding into wi_county_schools.

Questions

The following questions will help you check that you created your pipeline correctly.

Q5: How many counties are in Wisconsin?

Query your new wi_counties table to calculate the answer. Do your query in your notebook, using either Jupyter magic (%%bigquery) or pure Python (bq.query(...), where bq is a bigquery.Client).
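For example, with the pure-Python approach (a sketch; qualify the table with your project ID if needed):

from google.cloud import bigquery

bq = bigquery.Client()
df = bq.query("SELECT COUNT(*) AS cnt FROM p8.wi_counties").to_dataframe()
int(df["cnt"].iloc[0])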

Q6: How many public schools are there in Wisconsin?

Use your wi_county_schools table. Find schools where agency_type is "Public school".
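The same pattern as Q5 works here, with a WHERE filter (sketch):

from google.cloud import bigquery

bq = bigquery.Client()
df = bq.query("""
    SELECT COUNT(*) AS cnt
    FROM p8.wi_county_schools
    WHERE agency_type = 'Public school'
""").to_dataframe()
int(df["cnt"].iloc[0])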

Q7: What are the dependencies between all the Dataform tables?

Answer with a dictionary of lists, where each key is an action, and the value is a list of dependencies for that action. It should look like the following (possibly with additional entries if you have created other actions for practice):

{'schools': [],
 'wi_counties': [],
 'wi_county_schools': ['schools', 'wi_counties'],
 ...
}

Hint: your previous create_compilation_result call should have returned a response object from which you can extract a compilation result name (.name). You can use this to look up details about the pipeline in the compilation results:

response = df_client.query_compilation_result_actions(
    request={"name": compilation_result_name}
)

You can loop over response.compilation_result_actions . Each action will have an action.target.name and action.relation.dependency_targets .
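Building on that, one way to assemble the dictionary (a sketch; actions with no dependency_targets naturally come out with empty lists):

deps = {}
for action in response.compilation_result_actions:
    # map each action's name to the names of the tables it depends on
    deps[action.target.name] = sorted(t.name for t in action.relation.dependency_targets)
deps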

Part 4: BigQuery

Q8: How many schools are there in each county?

Manipulate your query results to produce a dictionary of counts, like this:

{'Milwaukee': 420, 'Dane': 217, 'Waukesha': 177, 'Brown': 123,
 'Outagamie': 96, 'Rock': 72, 'Marathon': 72, 'Racine': 68,
 'Winnebago': 65, 'Sheboygan': 63, 'La Crosse': 59, 'Kenosha': 56,
 'Washington': 56, 'Walworth': 56, 'Fond du Lac': 55, 'Eau Claire': 54,
 'Jefferson': 52, 'Dodge': 51, 'Wood': 47, 'Manitowoc': 44,
 'Sauk': 42, 'Columbia': 41, 'Barron': 40, 'Grant': 39, 'Portage': 38,
 'Ozaukee': 38, 'Clark': 38, 'Waupaca': 36, 'St. Croix': 35,
 'Chippewa': 35, 'Calumet': 30, 'Shawano': 29, 'Pierce': 29,
 'Marinette': 29, 'Vernon': 29, 'Polk': 28, 'Monroe': 28,
 'Trempealeau': 27, 'Juneau': 26, 'Green': 23, 'Dunn': 22, 'Oconto': 20,
 'Lafayette': 18, 'Door': 18, 'Douglas': 18, 'Crawford': 17,
 'Oneida': 17, 'Iowa': 16, 'Lincoln': 16, 'Washburn': 16,
 'Green Lake': 15, 'Kewaunee': 15, 'Richland': 14, 'Langlade': 13,
 'Taylor': 13, 'Vilas': 13, 'Sawyer': 13, 'Waushara': 13, 'Ashland': 12,
 'Bayfield': 12, 'Buffalo': 11, 'Rusk': 11, 'Marquette': 10,
 'Jackson': 10, 'Price': 9, 'Burnett': 9, 'Pepin': 7, 'Forest': 7,
 'Menominee': 4, 'Adams': 3, 'Florence': 3, 'Iron': 3}
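A sketch of one way to produce such a dictionary, assuming a bigquery.Client as in Q5:

from google.cloud import bigquery

bq = bigquery.Client()
df = bq.query("""
    SELECT county_name, COUNT(*) AS cnt
    FROM p8.wi_county_schools
    GROUP BY county_name
    ORDER BY cnt DESC
""").to_dataframe()
{county: int(cnt) for county, cnt in zip(df["county_name"], df["cnt"])}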

Q9: How many times can you do a query like Q8 before exhausting the BigQuery free tier?

Round down to produce an int.
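A sketch of one approach: re-run the Q8 query, read the job's total_bytes_processed, and divide it into the free-tier allowance (the 1 TiB/month figure is an assumption -- check the current BigQuery pricing page):

from google.cloud import bigquery

bq = bigquery.Client()
job = bq.query("""
    SELECT county_name, COUNT(*) AS cnt
    FROM p8.wi_county_schools
    GROUP BY county_name
""")
job.result()  # wait for completion so total_bytes_processed is populated

FREE_TIER_BYTES = 2**40  # assumption: 1 TiB of free query processing per month
FREE_TIER_BYTES // job.total_bytes_processed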

Q10: For each public middle school in Dane County, what is the geographically closest public high school in Dane County?

Answer with a dictionary, like this:

{'Badger Ridge Middle': 'Verona Area High',
 'Badger Rock Middle': 'West High',
 'Belleville Middle': 'Belleville High',
 'Black Hawk Middle': 'Shabazz-City High',
 'Central Heights Middle': 'Prairie Phoenix Academy',
 'Cherokee Heights Middle': 'Capital High',
 'De Forest Middle': 'De Forest High',
 'Deerfield Middle': 'Deerfield High',
 'Ezekiel Gillespie Middle School': 'Vel Phillips Memorial High School',
 'Glacial Drumlin School': 'LaFollette High',
 'Glacier Creek Middle': 'Middleton High',
 'Hamilton Middle': 'Capital High',
 'Indian Mound Middle': 'McFarland High',
 'Innovative and Alternative Middle': 'Innovative High',
 'James Wright Middle': 'West High',
 'Kromrey Middle': 'Middleton High',
 'Marshall Middle': 'Marshall High',
 'Mount Horeb Middle': 'Mount Horeb High',
 'Nikolay Middle': 'Koshkonong Trails School',
 "O'Keeffe Middle": 'Innovative High',
 'Oregon Middle': 'Oregon High',
 'Patrick Marsh Middle': 'Prairie Phoenix Academy',
 'Prairie View Middle': 'Sun Prairie West High',
 'River Bluff Middle': 'Stoughton High',
 'Savanna Oaks Middle': 'Capital High',
 'Sennett Middle': 'LaFollette High',
 'Sherman Middle': 'Shabazz-City High',
 'Spring Harbor Middle': 'Capital High',
 'Toki Middle': 'Vel Phillips Memorial High School',
 'Waunakee Middle': 'Waunakee High',
 'Whitehorse Middle': 'Monona Grove High',
 'Wisconsin Heights Middle': 'Wisconsin Heights High'}

Since you need the high school with the minimum distance, you may want to use the MIN_BY aggregate (https://cloud.google.com/bigquery/docs/reference/standard-sql/aggregate_functions#min_by) instead of MIN.
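A sketch of the shape of such a query; the school-name column (school below) and the predicates separating middle schools from high schools are assumptions about the parquet schema, so adjust them to the real column names:

from google.cloud import bigquery

bq = bigquery.Client()
q = """
SELECT m.school AS middle_school,
       MIN_BY(h.school, ST_DISTANCE(m.location, h.location)) AS nearest_high
FROM p8.wi_county_schools AS m
CROSS JOIN p8.wi_county_schools AS h
WHERE m.county_name = 'Dane' AND h.county_name = 'Dane'
  AND m.agency_type = 'Public school' AND h.agency_type = 'Public school'
  -- plus predicates restricting m to middle schools and h to high schools
GROUP BY middle_school
"""
-- note in passing: ST_DISTANCE returns meters; divide by 1000 for the
-- kilometer distances shown below (the comment syntax above is SQL's,
-- since it lives inside the query string)
df = bq.query(q).to_dataframe()
dict(zip(df["middle_school"], df["nearest_high"]))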

You are not required to report the actual distances to the nearest schools, but they may help with debugging (the distances below are in kilometers):

{'Badger Ridge Middle': 1.8392052150991804,
 'Badger Rock Middle': 5.29072039608733,
 'Belleville Middle': 0.0,
 'Black Hawk Middle': 3.2208653727303473,
 'Central Heights Middle': 0.0,
 'Cherokee Heights Middle': 3.0891335611917463,
 'De Forest Middle': 0.7104369448866054,
 'Deerfield Middle': 0.0,
 'Ezekiel Gillespie Middle School': 0.20996167824816614,
 'Glacial Drumlin School': 8.422858770109299,
 'Glacier Creek Middle': 10.380886949942708,
 'Hamilton Middle': 1.6888876951751441,
 'Indian Mound Middle': 0.3181930065207676,
 'Innovative and Alternative Middle': 0.0,
 'James Wright Middle': 3.099031509318725,
 'Kromrey Middle': 0.633645628693037,
 'Marshall Middle': 0.3692934702053087,
 'Mount Horeb Middle': 0.2247088214705937,
 'Nikolay Middle': 2.7825509153862193,
 "O'Keeffe Middle": 1.1801078611747235,
 'Oregon Middle': 2.199089113737385,
 'Patrick Marsh Middle': 2.995356862757154,
 'Prairie View Middle': 1.4469952774362715,
 'River Bluff Middle': 1.491373618930982,
 'Savanna Oaks Middle': 7.439620069162148,
 'Sennett Middle': 0.1614363991779063,
 'Sherman Middle': 0.0,
 'Spring Harbor Middle': 3.0043175181011414,
 'Toki Middle': 3.9513342414992176,
 'Waunakee Middle': 0.23741587309479018,
 'Whitehorse Middle': 2.252781418278708,
 'Wisconsin Heights Middle': 0.0}

Submission

When you are all done, commit and push to GitLab.

Verify that your submission repository has the following structure with at least these files committed:

└── src
    ├── definitions
    │   ├── wi_counties.sqlx
    │   ├── schools.sqlx
    │   └── wi_county_schools.sqlx
    └── p8.ipynb

Do not forget to revoke your VM's permission to access your cloud account:

gcloud auth application-default revoke



