Assignment 6
Due Wednesday by 11:59pm
Points 80
Submitting a file upload
Available Oct 28 at 12pm - Dec 31 at 11:59pm
Assignment 6 (80 Points)
Due November 6 at 11:59 PM
In this assignment, you will create distributed solutions (i.e., with multiple processes) for the two
programs you developed in Assignment 2 (Curve Area Calculation and Heat Transfer) using Message
Passing Interface (MPI). MPI is different from the shared memory model we have used so far, and
requires using APIs to communicate between processes.
Before starting this assignment, you should have completed the Slurm Tutorial
(https://canvas.sfu.ca/courses/84236/pages/slurm-tutorial) , which walks you through how to use our
servers for your code development. Additionally, you should also have completed the MPI tutorial
(https://canvas.sfu.ca/courses/84236/pages/slurm-mpi-tutorial) , which gives an overview of MPI and how
to correctly run MPI programs using slurm.
General Instructions:
1. You are given the serial implementations here (https://canvas.sfu.ca/courses/84236/files/24448350?
wrap=1) (https://canvas.sfu.ca/courses/84236/files/24448350/download?download_frd=1) .
2. MPI permits various communication strategies to pass data between processes. This assignment
uses the point-to-point communication strategy.
3. For simplicity, we only use one thread per process in this assignment. Make sure you
use MPI_Finalize (https://www.open-mpi.org/doc/current/man3/MPI_Finalize.3.php) before exiting
the main() function.
4. MPI uses the distributed model where each process is completely independent and has its own
separate memory space. Remember to set the --mem option appropriately in your script.
5. While testing your solutions, make sure that --cpus-per-task is set to 1 in your slurm job
script, and that --ntasks and --nodes are set based on the number of MPI processes and nodes
you want.
#!/bin/bash
#
#SBATCH --cpus-per-task=1
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --partition=slow
#SBATCH --mem=10G
srun ./curve_area_parallel
6. You will be asked to print the time spent by different processes on specific code regions. The time
spent by any code region can be computed as follows (a sketch of one possible timer implementation
is given after this list):
timer t1;
t1.start();
/* ---- Code region whose time is to be measured --- */
double time_taken = t1.stop();
If you need to time a sub-section inside a loop, you can do that as follows:
double time_taken = 0.0;
timer t1;
while (true) {
/* ---- Code region whose time should not be measured --- */
t1.start();
/* ---- Code region whose time is to be measured --- */
time_taken += t1.stop();
/* ---- Code region whose time should not be measured --- */
}
std::cout << "Time spent on required code region : " << time_taken << "\n";
7. The output of the two programs can be tested by comparing the serial output to the parallel program
output. You can also modify the scripts provided with assignment 2 to test your code.
8. Since each MPI process is independent, use these rules to print your outputs:
Use printf() to avoid garbled logs. You can also concatenate the information as a string and
use std::cout to print a single line of output. To add a new line, use "\n" as part of the
concatenated string instead of std::endl .
You can check the rank of the process before printing as shown below:
if (world_rank == 0)
printf("Time taken (in seconds): %g\n", time_taken);
The root process should print most of the output logs.
Non-root processes should only print the process statistics in a single line.
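The timer type used in the examples above is provided in the core/ folder. For reference only, below is a minimal sketch of what such a timer might look like, assuming a std::chrono-based implementation; the actual class shipped in core/ may differ, so use that one in your solution.

#include <chrono>

// Illustrative timer only; the real implementation lives in core/.
class timer {
  std::chrono::steady_clock::time_point t0;
public:
  void start() { t0 = std::chrono::steady_clock::now(); }
  // Returns the number of seconds elapsed since the last start() call.
  double stop() {
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(t1 - t0).count();
  }
};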
1. Monte Carlo Curve Area Estimation using MPI
Similar to Assignment 2 (https://canvas.sfu.ca/courses/84236/assignments/1006849) , you will develop a
parallel solution for curve area estimation using MPI. Here, the work is distributed among P processes.
The total number of points should be divided evenly among processes. Use the following pseudocode
to determine the subset of points handled by each process:
// Dividing up n points across P processes.
// Total number of processes is world_size. This process's rank is world_rank
min_points_per_process = n / world_size
excess_points = n % world_size
if (world_rank < excess_points)
points_to_be_generated = min_points_per_process + 1
else
points_to_be_generated = min_points_per_process
// Each process will work on points_to_be_generated and estimate curve_points.
Each process will compute the number of curve points from the total points allocated to it. Process 0
(henceforth referred to as the root process) aggregates (i.e., sums up) the local counts from the other
processes (henceforth referred to as non-root processes) and computes the final curve area.
The pseudocode for question 1 is given below:
for each process P in parallel {
local_curve_count = 0
for each point allocated to P {
x_coord = (2.0 * get_random_coordinate(&random_seed)) - 1.0;
y_coord = (2.0 * get_random_coordinate(&random_seed)) - 1.0;
if ((a * (x_coord^2)) + (b * (y_coord^4)) <= 1.0)
local_curve_count++;
}
}
// --- synchronization phase start ---
if(P is root process){
global_curve_count = Sum of the local counts of all processes
}
else {
// Use appropriate API to send the local_curve_count to the root process
} // --- synchronization phase end -----
if(P is root process){
area = 4.0 * (double)global_curve_count / (double)n;
// print process statistics and other results
}
else{
// print process statistics
}
}
You should use point-to-point communication, i.e., MPI_Send() (https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php) and MPI_Recv() (https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php) , and do the following:
Non-root processes will send their local_curve_count to the root process.
Root process receives the information from other processes and aggregates them to get the final
area.
Note that the MPI function calls we use in this assignment are synchronous calls. So make sure that
the MPI_Send() (https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php) and MPI_Recv()
(https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php) are called in the correct order in every
process.
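For illustration only, here is a minimal skeleton of this send/receive pattern, assuming the local count is kept in an unsigned long named local_curve_count; the variable names, the placeholder value of n, and the print statement are illustrative and do not match the required output format.

#include <mpi.h>
#include <cstdio>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    unsigned long n = 1000000;           // total number of points (read nPoints in your program)
    unsigned long local_curve_count = 0; // ... generate this process's points and count curve hits ...

    if (world_rank == 0) {
        // Root process: receive and aggregate the counts of all non-root processes.
        unsigned long global_curve_count = local_curve_count;
        for (int src = 1; src < world_size; src++) {
            unsigned long remote_count = 0;
            // The tag (0 here) only has to match the tag used by the sender.
            MPI_Recv(&remote_count, 1, MPI_UNSIGNED_LONG, src, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            global_curve_count += remote_count;
        }
        double area = 4.0 * (double)global_curve_count / (double)n;
        printf("Area: %g\n", area); // illustrative; follow the required output format instead
    } else {
        // Non-root process: send the local count to the root process.
        MPI_Send(&local_curve_count, 1, MPI_UNSIGNED_LONG, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize(); // required before exiting main()
    return 0;
}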
Output Format for Question 1:
1. Your solution should be named curve_area_parallel.cpp and your Makefile should produce
curve_area_parallel binary. Command line parameters to be supported:
nPoints: Total number of points used for estimating the area. This number should be divided
equally among processes (with the remainder r=nPoints % world_size going to processes 0,...,r-1
)
coeffA: Value of coefficient a.
coeffB: Value of coefficient b.
rSeed: Seed of the random number generator that you use in the program.
2. Your parallel solution must output the following information:
World size (i.e., number of processes) (only root process).
For each process: the number of random points generated, the number of points inside the curve,
and the time taken to generate and process these points (processes are numbered 0 to
world_size-1).
The total number of points generated.
The total number of points within the curve.
The total time taken for the entire execution. This should include the communication time and
decomposition time (only root process).
Please note that the output format should strictly match the expected format (including "spaces" and
"commas"). You can test your code using the provided test script. Run the python script only through
slurm, and remember to invoke the script itself without srun. A sample output file is provided
under sample_outputs/curve_area_parallel.txt .
2. Heat Transfer using MPI
You will implement Heat Transfer (from Assignment 2) with MPI. Here, the work is distributed
among P processes. For simplicity, every process will create the whole grid in its local memory, but will
only compute for a vertical slice of the grid, similar to Assignment 2. The following pseudocode can be
used to compute the start and end column for each process:
min_columns = size / world_size;
excess_columns = size % world_size;
if (world_rank < excess_columns) {
    startx = world_rank * (min_columns + 1);
    endx = startx + min_columns;
}
else {
    startx = (excess_columns * (min_columns + 1)) + ((world_rank - excess_columns) * min_columns);
    endx = startx + min_columns - 1;
}
The heat transfer pseudocode is given below:
for each process P in parallel {
for(local_stepcount = 1; local_stepcount <= tSteps; local_stepcount++) {
        Compute the Temperature Array values Curr[][] in the slice allocated to this process from Prev[][]
// --- synchronization: Send and Receive boundary columns from neighbors
// Even processes communicate with the right process first
// Odd processes communicate with the left process first
if (world_rank % 2 == 0) { // even rank
if (world_rank < world_size - 1) { // not last process
Send my column "end" to the right process world_rank+1
Receive column "end+1" from the right process world_rank+1, populate local Curr array
}
if (world_rank > 0) { // not first process
Receive column "start-1" from the left process world_rank-1, populate local Curr array
Send my column "start" to the left process world_rank-1
}
} // even rank
else { // odd rank
if (world_rank > 0) { // not first process
Receive column "start-1" from the left process world_rank-1, populate local Curr array
Send my column "start" to the left process world_rank-1
}
if (world_rank < world_size - 1) { // not last process
Send my column "end" to the right process world_rank+1
Receive column "end+1" from the right process world_rank+1, populate local Curr array
}
} // odd rank
// --- synchronization end -----
} // end for local_stepcount
if(P is root process){
// print process statistics and other results
}
else{
// print process statistics and relevant point temperatures
}
}
Key things to note:
1. A key difference between Heat Transfer and Curve Area is that you will need to continuously
communicate boundary values in each loop iteration (instead of communicating only the local
curve count once at the end). This continuous synchronization/communication makes it a
much harder problem to reason about and debug.
2. You should use point-to-point communication, i.e., MPI_Send() (https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php) and MPI_Recv() (https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php) , to communicate between processes (see the sketch after this list for one possible boundary exchange).
Note that the MPI function calls we use in this assignment are synchronous calls. So make sure that
the MPI_Send() (https://www.open-mpi.org/doc/current/man3/MPI_Send.3.php) and MPI_Recv()
(https://www.open-mpi.org/doc/current/man3/MPI_Recv.3.php) are called in the correct order in every
process.
Please observe the order in which processes send and receive data. This order has to be correct,
with every MPI_Send having a corresponding MPI_Recv in the matching order; otherwise the
program will deadlock (stop making progress).
3. The above pseudocode has opposite and complementary orders for even and odd processes. Please
examine the order of send/receive operations carefully and reason about why this order is correct.
Please note that this order is not the only correct order. If you wish, you can use a different
communication order provided that it produces correct results.
4. Please read the syntax for MPI_Send and MPI_Recv in the MPI tutorial. You need to figure out the
parameter values and how to match each send to its corresponding receive and vice-versa. You need
to use the correct size and data type for the message on both the sender and the receiver.
5. A key parameter for both MPI_Send and MPI_Recv is the message tag. The tags of the sender and
receiver have to match. If two messages with the same tag are sent, a send/receive mismatch could
occur. A possible strategy is to use the column number being sent or received as the message tag,
so both sender and receiver sync up on the same tag.
6. Only the time spent on synchronization phase 1 is used for calculating communication time.
7. Printing the program output has to be synchronized as well. The exact strategy is up to you. For
example, process 0 can print its output first and then send a message to process 1, which is blocked on
MPI_Recv; process 1 then prints its output and sends a message to process 2, and so on.
8. While it is better to have the order of output statements match the sample output exactly, your
grade will not be affected if the output lines are printed correctly but out of order.
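For illustration, below is a minimal sketch of the boundary exchange for an even-ranked process, using point-to-point calls and the column index as the message tag. It assumes the current grid is a gSize x gSize array of doubles stored row-major in a std::vector, and that startx/endx were computed as in the pseudocode above; these names and the storage layout are assumptions, so adapt the packing/unpacking to your own grid representation.

#include <mpi.h>
#include <vector>

// Exchange boundary columns with the neighbours of an even-ranked process.
// Odd-ranked processes perform the same two blocks in the opposite order.
void exchange_boundaries_even(std::vector<double> &curr, int gSize,
                              int startx, int endx,
                              int world_rank, int world_size) {
    std::vector<double> send_buf(gSize), recv_buf(gSize);

    if (world_rank < world_size - 1) { // not the last process: talk to the right neighbour first
        for (int row = 0; row < gSize; row++)
            send_buf[row] = curr[row * gSize + endx];          // pack column "end"
        // The column index is used as the tag so sender and receiver agree on it.
        MPI_Send(send_buf.data(), gSize, MPI_DOUBLE, world_rank + 1, endx, MPI_COMM_WORLD);
        MPI_Recv(recv_buf.data(), gSize, MPI_DOUBLE, world_rank + 1, endx + 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int row = 0; row < gSize; row++)
            curr[row * gSize + (endx + 1)] = recv_buf[row];    // unpack column "end+1"
    }
    if (world_rank > 0) { // not the first process: then talk to the left neighbour
        MPI_Recv(recv_buf.data(), gSize, MPI_DOUBLE, world_rank - 1, startx - 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int row = 0; row < gSize; row++)
            curr[row * gSize + (startx - 1)] = recv_buf[row];  // unpack column "start-1"
        for (int row = 0; row < gSize; row++)
            send_buf[row] = curr[row * gSize + startx];        // pack column "start"
        MPI_Send(send_buf.data(), gSize, MPI_DOUBLE, world_rank - 1, startx, MPI_COMM_WORLD);
    }
}

With this tagging scheme, the column "end" sent to the right neighbour arrives there as that process's column "start-1" under the same tag value, so every MPI_Send pairs with exactly one MPI_Recv.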
Output Format for Question 2:
1. Your solution should be named heat_transfer_parallel.cpp and your Makefile should
produce heat_transfer_parallel binary. Command line parameters to be supported:
gSize: Grid size. The size of the temperature array is gSize x gSize.
mTemp: Temperature values in the middle of the array, from [gSize/3 , gSize/3] to [2*gSize/3 ,
2*gSize/3].
iCX: Coefficient of horizontal heat transfer.
iCY: Coefficient of vertical heat transfer.
tSteps: Time steps of the simulation
2. Your parallel solution must output the following information:
World size (i.e., number of processes) (only root process).
Grid size.
Values of iCX, iCY, mTemp and tSteps
For each process: process id, start column, end column, time taken.
Temperatures at end of simulation for points at [0,0], [gSize/6, gSize/6], [gSize/3, gSize/3],
[gSize/2, gSize/2], [2*gSize/3, 2*gSize/3], [5*gSize/6, 5*gSize/6].
Temperatures at the right boundary of all processes: [endx[0], endx[0]], [endx[1], endx[1]], ...,
[endx[world_size-1], endx[world_size-1]].
The total time taken for the entire execution. This should include the communication time (only
root process).
Please note that the output format should strictly match the expected format (including "spaces" and
"commas"). The sample console output can be found in sample_outputs/heat_transfer_parallel.txt .
3. Assignment Report
In addition to your parallel code, you need to submit a report (in pdf format) that answers the following
questions:
Q1. Run your curve_area_parallel program from part 1 with 1, 2, 4, and 8 processes and the following
parameters: coeffA=1.2, coeffB=0.8, rSeed=129, nPoints=4,000,000,000 (i.e., 4 billion). Each of your
parallel programs should run 3 times. [Total number of runs is 4 (different process counts) x 3 (number of
runs for each process count) = 12 runs.]
Plot a graph with average execution time on the y-axis, process count on the x-axis.
Q2. From the plot in Q1, what is the parallel speedup for 2, 4, and 8 processes (compared to 1 process)?
Is this problem embarrassingly parallel?
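For Q2 and Q4, the usual definition of speedup is assumed here (stated only for reference, not an additional requirement):

S(p) = T_1 / T_p

where T_1 is the average execution time with 1 process and T_p is the average execution time with p processes.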
Q3. Run your heat_transfer_parallel program from part 2 with 1, 2, 4, and 8 processes and the following
parameters: gSize=4000, mTemp=600, iCX=0.15, iCY=0.1, tSteps = 1000 (if you already used 2000, that
is also acceptable). Each of your parallel programs should run 3 times. [Total number of runs is 4
(different process counts) x 3 (number of runs for each process count) = 12 runs.]
Plot a graph with average execution time on the y-axis, process count on the x-axis.
Q4. From the plot in Q3, what is the parallel speedup for 2, 4, and 8 processes (compared to 1 process)?
Is this problem embarrassingly parallel?
Submission Guidelines
Make sure that your solutions folder has the following files and sub-folders. Let's say your solutions
folder is called my_assignment6_solutions . It should contain:
core/ -- The folder containing all core files. It is already available in the assignment 6 package.
Do not modify it or remove any files.
Makefile -- Makefile for the project. This is the same Makefile provided in the serial package. Do
not modify it.
curve_area_parallel.cpp
heat_transfer_parallel.cpp
report.pdf -- A pdf file that includes answers to the questions in the previous section.
To create the submission file, follow the steps below:
1. Enter your solutions folder and remove all object/temporary files.
$ cd my_assignment6_solutions/
$ make clean
2. Create the tar.gz file.
$ tar cvzf assignment6.tar.gz *
which creates a compressed tar ball that contains the contents of the folder.
3. Validate the tar ball using the submission_validator.py script.
$ python scripts/submission_validator.py --tarPath=/assignment6.tar.gz
Submit via Canvas by the deadline.
