### HPC Project 7: A Simple Problem

#### Overview

In this exercise, we want to compare MPI and OpenMP. The problem we'll be solving is a simple one: sum the values in an array. The basic idea is to parallelize this summation by distributing it across N different PEs/cores. To make the problem more interesting, we'll be using some rather large arrays.

#### Exercise

The file arraySum.c contains a sequential program to sum the values in an array. The files 10k.txt, 100k.txt, 1m.txt, and 10m.txt (in the directory /home/cs/374/exercises/07) can be used to test this program, either by using their absolute pathnames, or by creating symbolic links to them. Please access the files directly from there -- don't copy them, to avoid wasting space. Compile arraySum.c, and execute it using each of these data files, recording the sums you get for each.

#### Homework

This week's assignment is to write two parallel versions of this program -- a distributed-memory version in MPI and a shared-memory version in OpenMP. Both versions should solve the problem using a combination of the Master-Worker, Parallel For Loop, and Reduction patterns.

1. Your first task is to write a parallel version of arraySum.c using MPI -- mpiArraySum.c -- in which the master process reads in the array and uses the Scatter pattern to distribute a piece of the array to every process. (Alternatively, you can have each process open the file and use MPI parallel I/O to have each process read its piece directly from the file, in parallel with the others.)

Once each process has its piece of the array, each process should sum its piece; then your program should use the Reduction pattern to sum these local sums.

In addition to the result, your program should compute and display: the total time taken by the program; the time spent in I/O; the time to scatter the values; and the time to sum the array. Use MPI's timing mechanism for this.

2. Part II is to copy your program to a new name -- ompArraySum.c -- and revise it to use OpenMP and shared-memory parallelism. As before, the master should read in the array, or you can have each thread read a different piece in parallel.

Once each thread has its piece, the threads should sum the values in their pieces, using the Parallel Loop and Reduction patterns. (If the array is in shared memory, no Scatter is needed.) At the end, the master thread should compute and display: the total time, the time spent in I/O, and the time spent summing the array. Use OpenMP's timing mechanism for this.

In each version, the actual work of summing the array should be done by a parallel version of the sumArray() function that uses the parallel patterns we have discussed.

Test your programs in the ulab using small arrays. When your programs seem to be working properly, scp them to the cluster and test them on the data files there, recording your execution times. On Dahl, the data files are in the directory /home/cs/374/exercises/07. Access the files directly from there -- please don't copy them, to avoid wasting space.

This is a one week project.

#### Hand In

• Your source code for the different versions of the program.
• Four spreadsheet charts -- one for each input file: 10k.txt, 100k.txt, 1m.txt, and 10m.txt -- comparing the execution times of your programs on Dahl using 1, 2, 4, 8, 16, and 32 PEs. For each PE value, your chart should have two columns, one for the MPI program and one for the OpenMP program. The columns on your charts should be "stacked" columns, in which the entire length of each column is the total time for your program, the top segment of each column is the I/O time, the middle segment of each column is the scatter time, and the bottom segment of each column is the sum+reduce time.
• A 1-2 page analysis of your results. Explain the timing measurements you've observed. Compare and contrast your MPI and OpenMP versions of the program. Where are your computations "losing" time? How is this computation different from those we have done in previous assignments? How do Amdahl's and Gustafson's laws relate to your observations?