HPC OpenMP Exercise: Hands-On Lab


Introduction to OpenMP

Open Multi-Processing (OpenMP) is an open standard for multithreading in C, C++, and Fortran. Most modern compilers, including GNU's gcc, Intel's icc, and Microsoft's Visual C++, include built-in support for OpenMP, so it provides a means of writing portable multithreaded programs.

Last week, we saw that pthreads supports explicit multithreading, meaning the programmer must explicitly manage the threads. By contrast, OpenMP supports implicit multithreading, meaning the OpenMP library handles most of the thread management. Because of this, OpenMP is significantly easier to use than pthreads, and it has become the de facto standard for C multithreading.

Part I: The Fork-Join Pattern in OpenMP

As in pthreads, the Fork-Join pattern plays a central role in OpenMP. To let you explore it, download and run forkJoin and forkJoin2. Read the comment at the beginning of each source file to see what you should do with it. Things to look for include where the fork occurs, where the join occurs, and how many threads are in the team.

Each example includes a Makefile to simplify building it, so create a separate folder for each program and its Makefile.
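
Although the forkJoin programs themselves are not reproduced here, a minimal sketch of the pattern (not the lab's actual code, and assuming a gcc-style compiler with the -fopenmp flag) looks like this:

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      printf("Before: only the initial thread is running\n");

      #pragma omp parallel    // fork: a team of threads is created here
      {
          printf("Hello from thread %d\n", omp_get_thread_num());
      }                       // join: the initial thread waits here for the
                              //  others to finish, then continues alone
      printf("After: only the initial thread again\n");
      return 0;
  }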

Once you have explored these examples and are confident you understand how OpenMP's use of the Fork-Join pattern works, you may continue to the next part of today's exercise.

Part II: The SPMD Pattern in OpenMP

Like the other platforms we have used, OpenMP uses the Single Program Multiple Data (SPMD) pattern. To explore it, download and run spmd and spmd2. As before, read the comment at the beginning of each source file to see what you should do with it.
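
The essence of SPMD is that every thread executes the same block of code, using its identity to operate on different data. Here is a minimal, hypothetical sketch (not the lab's spmd code):

  #include <stdio.h>
  #include <omp.h>

  #define SIZE 1000

  int main(void) {
      double a[SIZE];   // hypothetical shared data

      #pragma omp parallel    // the "single program": all threads run this block
      {
          int id = omp_get_thread_num();
          int numThreads = omp_get_num_threads();
          // the "multiple data": each thread uses its id
          //  to work on different iterations
          for (int i = id; i < SIZE; i += numThreads) {
              a[i] = i * i;
          }
      }
      printf("a[%d] = %f\n", SIZE-1, a[SIZE-1]);
      return 0;
  }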

When you are confident you understand how OpenMP uses the SPMD pattern, continue on to the next part.

Part III: The Barrier Pattern in OpenMP

Unlike pthreads, OpenMP provides built-in support for the Barrier pattern. To explore it, download and run barrier. As before, read the comment at the beginning of the source file to see what you should do with it.
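
A minimal sketch of the directive (not the lab's barrier code):

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      #pragma omp parallel
      {
          int id = omp_get_thread_num();
          printf("thread %d: before the barrier\n", id);

          #pragma omp barrier   // no thread proceeds past this point
                                //  until every thread has reached it

          printf("thread %d: after the barrier\n", id);
      }
      return 0;
  }

When you run something like this, all of the "before" messages appear (in some order) ahead of any "after" message.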

When you are confident you understand how to use OpenMP's version of the Barrier pattern, continue on to the next part.

Part IV: The Master-Worker Pattern in OpenMP

OpenMP provides functions, omp_get_num_threads() and omp_get_thread_num(), by which a thread can discover how many threads there are and what its own identity is. These (combined with the Fork-Join pattern) are sufficient to implement the Master-Worker pattern in OpenMP. To explore this pattern, download and run masterWorker. As before, read the comment at the beginning of the source file to see what you should do with it.
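
A minimal sketch of the idea (not the lab's masterWorker code):

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      #pragma omp parallel
      {
          int id = omp_get_thread_num();
          int numThreads = omp_get_num_threads();

          if (id == 0) {    // thread 0 plays the master's role...
              printf("Master: coordinating %d threads\n", numThreads);
          } else {          // ...while the other threads act as workers
              printf("Worker %d of %d: working...\n", id, numThreads - 1);
          }
      }
      return 0;
  }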

When you are confident you understand how to implement the Master-Worker pattern in OpenMP, continue on to the next part.

Part V: The Parallel For Loop and Reduction Patterns in OpenMP

As we have seen before, there are two versions of the Parallel For Loop pattern:

  1. the blocks version, in which each thread is given one contiguous block of the loop's iterations; and
  2. the stripes version, in which the loop's iterations are dealt out to the threads one at a time, in round-robin fashion.

You may explore these versions by downloading and running parallelForLoop-blocks and parallelForLoop-stripes. As before, read the comment at the beginning of each source file to see what you should do with it.
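
In OpenMP, the difference between the two versions is a matter of scheduling. A hypothetical sketch (not the lab's code):

  #include <stdio.h>
  #include <omp.h>

  #define SIZE 16

  int main(void) {
      int a[SIZE];

      // blocks: a plain static schedule typically gives each thread
      //  one contiguous block of SIZE/numThreads iterations
      #pragma omp parallel for
      for (int i = 0; i < SIZE; i++) {
          a[i] = i * i;
      }

      // stripes: a chunk size of 1 deals the iterations to the
      //  threads one at a time, in round-robin fashion
      #pragma omp parallel for schedule(static, 1)
      for (int i = 0; i < SIZE; i++) {
          a[i] += i;
      }

      printf("a[%d] = %d\n", SIZE-1, a[SIZE-1]);
      return 0;
  }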

In OpenMP, the Reduction pattern takes the form of an optional reduction clause on the parallel for loop. To see how it works, download and run reduction, and follow the directions at the beginning of the source file.
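
A minimal sketch of the clause (not the lab's reduction code):

  #include <stdio.h>
  #include <omp.h>

  #define SIZE 1000000

  int main(void) {
      double sum = 0.0;

      // each thread accumulates into its own private copy of sum;
      //  OpenMP combines the copies with + when the loop finishes
      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < SIZE; i++) {
          sum += i;
      }

      printf("sum = %f\n", sum);
      return 0;
  }

Without the reduction clause, the threads would race on sum; with it, each thread gets a private accumulator and the partial sums are combined safely at the end.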

OpenMP's parallel for loop supports many other features, including dynamic scheduling (in which threads that finish their iterations early grab additional chunks of work) and the ability to control the size of the blocks in the blocks version of the pattern. We will not explore them here, but you should feel free to investigate them on your own.
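
For reference, both of those features are expressed through the schedule clause. A hypothetical sketch (not a lab program; with gcc, compile with -fopenmp -lm):

  #include <stdio.h>
  #include <math.h>
  #include <omp.h>

  #define SIZE 100000

  int main(void) {
      double total = 0.0;

      // schedule(dynamic, 4): whenever a thread becomes idle, it grabs
      //  the next chunk of 4 iterations, balancing uneven workloads
      #pragma omp parallel for schedule(dynamic, 4) reduction(+:total)
      for (int i = 0; i < SIZE; i++) {
          total += sqrt((double) i);
      }

      printf("total = %f\n", total);
      return 0;
  }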

Part VI: Race Conditions and the Mutual Exclusion Pattern in OpenMP

When threads start accessing shared variables, it is all too easy to create race conditions. As a first example, download and run private, and follow the directions at the beginning of the source file. As you work through them, find the answers to these questions (a sketch of the idea follows the list):

  1. What shared variable is the source of the conflict?
  2. Is it a write-write conflict, a read-write conflict, or both?
  3. How does OpenMP's private clause resolve the conflict?
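
The following sketch illustrates the idea; the variable names are hypothetical, not those in the lab's private program:

  #include <stdio.h>
  #include <omp.h>

  #define SIZE 8

  int main(void) {
      int a[SIZE];
      int square;   // declared outside the loop, so shared by default

      // without private(square), the threads race to write square and
      //  to read it back; private(square) gives each thread its own
      //  copy of the variable, eliminating the conflict
      #pragma omp parallel for private(square)
      for (int i = 0; i < SIZE; i++) {
          square = i * i;
          a[i] = square;
      }

      for (int i = 0; i < SIZE; i++) {
          printf("%d ", a[i]);
      }
      printf("\n");
      return 0;
  }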

For race conditions in which the conflicting statements form a critical section (code that at most one thread should execute at a time), the Mutual Exclusion pattern can be used. OpenMP provides two different mechanisms for this pattern: the atomic mechanism and the critical mechanism.

You can explore the atomic mechanism by downloading and running atomic. As usual, follow the directions at the beginning of the file, and map the behavior you observe to the source code producing it.
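
A minimal sketch of the atomic mechanism (not the lab's atomic code):

  #include <stdio.h>
  #include <omp.h>

  #define SIZE 1000000

  int main(void) {
      int count = 0;

      #pragma omp parallel for
      for (int i = 0; i < SIZE; i++) {
          #pragma omp atomic    // the increment becomes a single indivisible
          count++;              //  update, so no two threads can interleave it
      }

      printf("count = %d (expected %d)\n", count, SIZE);
      return 0;
  }

The atomic directive protects only simple updates (such as ++, --, or +=) to a single variable.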

You can explore the critical mechanism by downloading and running critical and critical2. By now, you should know the drill: follow the directions at the beginning of each file, and figure out how the source code produces the behavior you observe.
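
A minimal sketch of the critical mechanism (not the lab's code), using it to protect a multi-statement critical section that atomic cannot handle:

  #include <stdio.h>
  #include <omp.h>

  #define SIZE 100000

  int main(void) {
      double max = 0.0;

      #pragma omp parallel for
      for (int i = 0; i < SIZE; i++) {
          double value = (double) ((i * 7919) % 104729);  // arbitrary values

          #pragma omp critical    // at most one thread at a time may
          {                       //  execute this block
              if (value > max) {
                  max = value;
              }
          }
      }

      printf("max = %f\n", max);
      return 0;
  }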

If you compare the Mutual Exclusion pattern in OpenMP against the same pattern in pthreads, it should be evident that OpenMP's mechanisms are much simpler and easier to use than those of pthreads. This simplicity and comparative ease of use are what have made OpenMP so popular.

Part VII: The Parallel Tasks Pattern in OpenMP

Our final pattern is the Parallel Tasks pattern, which OpenMP implements using its sections directive. To see how it works, download and run sections. Note that each section can perform a different function, so if a problem can be decomposed into tasks that can run in parallel, the sections directive provides a way to solve that problem.
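
A minimal sketch of the directive (not the lab's sections code):

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      #pragma omp parallel sections   // each section below is a separate task
      {
          #pragma omp section
          {
              printf("task A, run by thread %d\n", omp_get_thread_num());
          }

          #pragma omp section
          {
              printf("task B, run by thread %d\n", omp_get_thread_num());
          }
      }   // implicit join: both sections finish before execution continues
      return 0;
  }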

When you are finished, you may continue to this week's project.



