
CSCI-UA.0480-003

Parallel Computing

Exam 2 (Final)

Spring 2020

[Total: 100 points]

1. Suppose that MPI_COMM_WORLD consists of the four processes 0, 1, 2, and 3, and suppose the following code snippet is executed (my_rank contains the rank of the executing process in MPI_COMM_WORLD). [Note: The code shown here is just part of a bigger program; it is not a full-fledged program.]

int x, y, z;
MPI_Comm COMM;
int new_rank;

MPI_Comm_split(MPI_COMM_WORLD, my_rank % 2, my_rank, &COMM);
MPI_Comm_rank(COMM, &new_rank);

switch (new_rank) {
case 0:
    x = 11; y = 12; z = 10;
    MPI_Reduce(&x, &y, 1, MPI_INT, MPI_SUM, 0, COMM);
    MPI_Allreduce(&y, &z, 1, MPI_INT, MPI_SUM, COMM);
    break;
case 1:
    x = 3; y = 8; z = 5;
    MPI_Reduce(&x, &z, 1, MPI_INT, MPI_SUM, 0, COMM);
    MPI_Allreduce(&y, &x, 1, MPI_INT, MPI_SUM, COMM);
    break;
default:
    x = 8; y = 9; z = 11;
    MPI_Reduce(&z, &y, 1, MPI_INT, MPI_SUM, 0, COMM);
    MPI_Allreduce(&x, &y, 1, MPI_INT, MPI_SUM, COMM);
    MPI_Bcast(&y, 1, MPI_INT, 1, COMM);
    break;
}
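[Note: As a reminder of the MPI_Comm_split semantics used above (a minimal standalone sketch, assuming it is launched with four processes, e.g. mpiexec -n 4): processes that pass the same color (my_rank % 2) are grouped into the same new communicator and re-ranked from 0 according to the key (my_rank).]

/* Minimal sketch, not part of the exam code: prints each process's
   rank in MPI_COMM_WORLD and its new rank in the split communicator. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int my_rank, new_rank;
    MPI_Comm COMM;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    /* color = my_rank % 2 puts even world ranks in one communicator
       and odd world ranks in another; key = my_rank orders them. */
    MPI_Comm_split(MPI_COMM_WORLD, my_rank % 2, my_rank, &COMM);
    MPI_Comm_rank(COMM, &new_rank);
    printf("world rank %d -> split rank %d\n", my_rank, new_rank);
    MPI_Comm_free(&COMM);
    MPI_Finalize();
    return 0;
}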

a. [5 points] Is there a possibility of deadlock in the above code? If yes, describe the scenario that leads to the deadlock. If not, prove that none of the collective calls made by the processes will block forever.

b. [12 points] For each one of the four processes (IDs 0, 1, 2, and 3 in the original MPI_COMM_WORLD), what will be the values of x, y, and z after the execution of the above code?

c. [5 points] How many processes will execute the “default” branch of the switch statement? What are their process IDs in the original MPI_COMM_WORLD? Justify your answer.

d. [4 points] Can a process have more than one ID (i.e., multiple ranks)? If yes, describe a scenario. If not, explain why not.

e. [4 points] After the execution of the above code, how many communicators exist? What are they?

2. [5 points] Can two processes share the same cache memory? Justify your answer.

3. [10 points] If we run two processes on a single core, we expect the sequential version of the program to be faster than the parallel version in which two MPI processes run on that single core. Describe two scenarios where two MPI processes running on a single core give better performance than the sequential program.

4. [10 points] Suppose we have two MPI processes and we run them on a processor with two cores. Describe two scenarios where we can get better performance if we run these two processes on a processor with four cores instead of two.

5. [10 points] Suppose we have the following OpenMP code snippet:

#pragma omp parallel for
for (i = 0; i < 600; i++)
    compute(i);

The following figure shows the amount of computation done in compute(i) for each value of i. The code as written above uses the default schedule. Modify the code to use a better schedule (or schedules) given the information about the computations shown in the figure. Add 1-2 lines explaining your logic for picking the schedule(s) you used.

[Figure: amount of computation performed by compute(i) for each value of i]
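[Note: For reference, the general form of the OpenMP schedule clause is shown below. This is illustrative syntax only; the kind "dynamic" and the chunk size 16 are placeholders, not the intended answer, which depends on the figure.]

/* Illustrative syntax only: schedule(static, c), schedule(dynamic, c),
   and schedule(guided, c) distribute the 600 iterations among threads
   in different ways; c is an optional chunk size. */
#pragma omp parallel for schedule(dynamic, 16)
for (i = 0; i < 600; i++)
    compute(i);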

6. [5 points] Assume we are multiplying an 8000x8000 matrix A with vector y. Each element of the matrix and vector is a double-precision float (8 bytes). We are parallelizing this multiplication operation using four threads in OpenMP, such that thread 0 is responsible for the first 2000 elements of the solution vector, thread 1 for the following 2000 elements, and so on. Also assume that each thread executes on a separate core, and each core has its own private L1 cache. If the cache line size is 64 bytes, is it possible for false sharing to occur at any time between threads 0 and 2? Explain.
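[Note: The sketch below is only a reminder of what one cache line holds, assuming 8-byte doubles; it is not the answer to the question.]

/* Sketch: how many consecutive doubles of the solution vector fit in
   one 64-byte cache line (assumes sizeof(double) == 8). */
#include <stdio.h>

int main(void) {
    int line_bytes = 64;                           /* given cache line size */
    int elems = line_bytes / (int)sizeof(double);  /* 64 / 8 = 8 elements */
    printf("%d doubles per cache line\n", elems);
    return 0;
}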

7. [5 points] We have seen that threads in a warp in CUDA execute instructions in lockstep. Despite that, there may be scenarios where some threads belonging to a warp finish before other threads belonging to that same warp. Describe one such scenario.

8. For the following vector multiplication kernel and the corresponding kernel launch code, answer each of the questions below. Assume Ad, Bd, and Cd have been declared earlier. For each question below, show the steps you used to reach your answer.

1  __global__ void vecMultKernel(float* Ad, float* Bd, float* Cd, int n)
2  {
3      int i = threadIdx.x + (blockDim.x * blockIdx.x * 2);
4
5      if (i < n) { Cd[i] = Ad[i] * Bd[i]; }
6      i += blockDim.x;
7      if (i < n) { Cd[i] = Ad[i] * Bd[i]; }
8  }
9
10 int vectMult(float* A, float* B, float* C, int n)
11 {
12     /* n is the length of arrays A, B, and C. */
13     int size = n * sizeof(float);
14     cudaMalloc((void**)&Ad, size);
15     cudaMalloc((void**)&Bd, size);
16     cudaMalloc((void**)&Cd, size);
17     cudaMemcpy(Ad, A, size, cudaMemcpyHostToDevice);
18     cudaMemcpy(Bd, B, size, cudaMemcpyHostToDevice);
19
20     vecMultKernel<<<ceil(n / 2048), 1024>>>(Ad, Bd, Cd, n);
21     cudaMemcpy(C, Cd, size, cudaMemcpyDeviceToHost);
22 }
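[Note: A general CUDA fact that may help with parts (a) and (b), stated as a reminder rather than as part of the exam code: a launch <<<g, b>>> creates g blocks of b threads each, and each block is partitioned into warps of 32 consecutive threads.]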

a. [3 points] If the number of elements n of the A, B, and C arrays is 10,000 each, how many warps are there in each block? Show your calculations to get full credit.

b. [3 points] If the number of elements n of the A, B, and C arrays becomes 100,000 each, how many warps are there in each block? Show your calculations to get full credit.

c. [3 points] What is the CGMA (compute-to-global-memory-access ratio) of line 5? Show how you calculate it. You may disregard the computations involved in evaluating the condition “if (i < n)”.

d. [4 points] Explain the effect of using shared memory in the above code to reduce global memory access.

e. [4 points] If n = 2048, how many warps will suffer from branch divergence in lines 5 and 7? Explain.

f. [3 points] If another kernel is launched in that same program, will that kernel be able to access the data in Cd? Justify your answer.

9. [5 points] We know that CUDA does not allow synchronization among threads in different blocks. Suppose CUDA allowed it. State one potential problem that could arise and explain it clearly.



