OpenMP Tutorials at SC23

As in previous years, I am involved in two OpenMP-related tutorials at SC23 in Denver. This year, we produced two short videos outlining the content of these tutorials.

Our Advanced OpenMP tutorial (see the SC23 program page), subtitled Performance and 5.2 Features, focuses on explaining how to achieve performance on modern HPC architectures and on presenting the latest features of OpenMP 5.x. This half-day tutorial will be given on Monday afternoon and has the following content:

SC23 Tutorial Overview: Advanced OpenMP

Our Mastering Tasking tutorial (see the SC23 program page) teaches all aspects of task parallelism in OpenMP, with many code examples. This half-day tutorial will be given on Monday morning and has the following content:

SC23 Tutorial Overview: Mastering Tasking

Given our backgrounds in OpenMP development, we have used the breaks and discussion time in past instances of these tutorials to answer all the questions attendees ever had about OpenMP. Really, you are invited to ask us anything :-).

OpenMP Tutorials at SC22

As in previous years, several OpenMP tutorial proposals have been accepted for SC22. I am really looking forward to being in the USA again and, among other things, to teaching OpenMP to real people instead of black tiles. In this summary, I would like to highlight the two tutorials in which I am involved.

And by the way: in addition to the content itself, I believe these tutorials provide the extra value of direct access to members of the OpenMP Language Committee. That means we are approachable beyond the tutorial outline to discuss any topic or issue you have with OpenMP.

Mastering Tasking with OpenMP

Since version 3.0, released in 2008, OpenMP has offered tasking to support the creation of composable parallel software blocks and the parallelization of irregular algorithms. Mastering the tasking concept of OpenMP requires a change in the way developers reason about the structure of their code and how to expose its parallelism. Our tutorial addresses this critical aspect by examining the tasking concept in detail and presenting patterns as solutions to many common problems.

Presenters: Christian Terboven, Michael Klemm, Xavier Teruel and Bronis R. de Supinski

Content summary:

  • OpenMP Overview (high-level summary, synchronization, memory model)
  • OpenMP Tasking Model (overview, data sharing, taskloop)
  • Improving Tasking Performance (if + final + mergeable clauses, cut-off strategies, task dependencies, task affinity; see the cut-off sketch after this list)
  • Cancellation Construct
  • Future OpenMP directions
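
To give a flavor of the cut-off strategies mentioned above, here is a minimal sketch of my own (not taken from the tutorial material; the threshold of 20 is an arbitrary choice): recursive Fibonacci where the final clause stops task creation below the threshold, so small subproblems run immediately instead of paying the task-creation overhead.

#include <cstdio>

// Cut-off via the final clause: below the threshold, new tasks become
// "included" tasks that run immediately, avoiding task-creation overhead.
long fib(long n) {
  if (n < 2) return n;
  long x, y;
  #pragma omp task shared(x) final(n < 20)
  x = fib(n - 1);
  #pragma omp task shared(y) final(n < 20)
  y = fib(n - 2);
  #pragma omp taskwait   // wait for the two child tasks
  return x + y;
}

int main() {
  long result;
  #pragma omp parallel
  #pragma omp single     // one thread creates the initial tasks
  result = fib(30);
  std::printf("fib(30) = %ld\n", result);
  return 0;
}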

Advanced OpenMP: Host Performance and 5.2 Features

Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This stems not from shortcomings of OpenMP, but rather from the lack of depth with which it is employed. Our “Advanced OpenMP Programming” tutorial addresses this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance.

Presenters: Christian Terboven, Michael Klemm, Ruud van der Pas, and Bronis R. de Supinski

Content summary:

  • OpenMP Overview (high-level summary, synchronization, memory model)
  • Techniques to obtain High Performance with OpenMP: memory access (memory placement, binding, NUMA) and vectorization (understanding SIMD, vectorization in OpenMP)
  • Advanced Language Features (doacross loops, user-defined reductions, atomics; see the reduction sketch after this list)
  • Future OpenMP directions
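
As a small taste of the user-defined reductions mentioned above, here is a minimal sketch of my own (not tutorial material): a declare reduction directive that lets a parallel loop collect results into a std::vector by merging the threads' private copies.

#include <cstdio>
#include <vector>

// A user-defined reduction that merges per-thread vectors; omp_out and
// omp_in are the special variable names defined by the OpenMP spec.
#pragma omp declare reduction(merge : std::vector<int> : \
    omp_out.insert(omp_out.end(), omp_in.begin(), omp_in.end()))

int main() {
  std::vector<int> hits;
  #pragma omp parallel for reduction(merge : hits)
  for (int i = 0; i < 1000; ++i)
    if (i % 7 == 0)                  // some per-element predicate
      hits.push_back(i);
  std::printf("found %zu multiples of 7\n", hits.size());
  return 0;
}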

For a complete list of SC22 activities around OpenMP and associated with the OpenMP organization, please see this page listing tutorials, the BoF, and booth talks.

OpenMP in Small Bites (online tutorial with quizzes)

As a member of the hpc.nrw regional network, I have recorded 10 video sessions for an online OpenMP tutorial. Each part consists of a short video on one selected aspect of OpenMP, followed by a couple of quiz questions for self-assessment. The tutorial is designed to be platform-independent and to work on any operating system for which an OpenMP-capable compiler is available. However, my examples are limited to C/C++.

All material is provided under a Creative Commons license. The topics that are currently available are:

Overview

This part provides a brief history of OpenMP and then introduces the concept of the parallel region: find it here.
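
As a taste of what the video covers, a minimal parallel region looks like this (my own sketch, not necessarily the example used in the video):

#include <cstdio>
#include <omp.h>

int main() {
  // The parallel construct forks a team of threads; every thread in
  // the team executes the code inside the region.
  #pragma omp parallel
  {
    std::printf("Hello from thread %d of %d\n",
                omp_get_thread_num(), omp_get_num_threads());
  } // implicit barrier: the threads join here
  return 0;
}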

Worksharing

This part introduces the concept of OpenMP worksharing, loop scheduling, and the first synchronization mechanisms: find it here.
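
A minimal worksharing example of my own (not necessarily the one from the video): the iterations of the loop are divided among the threads, and the schedule clause controls how.

#include <cstdio>
#include <vector>

int main() {
  const int n = 1000000;
  std::vector<double> a(n), b(n, 1.0);
  // The for construct divides the loop iterations among the threads of
  // the enclosing parallel region; schedule(static) assigns fixed
  // contiguous chunks, schedule(dynamic) hands out chunks on demand.
  #pragma omp parallel for schedule(static)
  for (int i = 0; i < n; ++i)
    a[i] = 2.0 * b[i];
  std::printf("a[0] = %.1f\n", a[0]);
  return 0;
}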

Data Scoping

This part provides an overview of one of the most challenging parts of OpenMP (well, probably only at first sight): data scoping. It discusses the differences between private, firstprivate, lastprivate, and shared variables and also explains the reduction operation: find it here.
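
A small sketch of my own illustrating these clauses (not necessarily the video's example):

#include <cstdio>

int main() {
  int offset = 10;   // shared by default, but made firstprivate below
  int last = 0;
  double sum = 0.0;
  #pragma omp parallel for firstprivate(offset) lastprivate(last) \
                           reduction(+ : sum)
  for (int i = 0; i < 100; ++i) {
    // offset: each thread starts with its own copy, initialized to 10
    // last:   the value of the sequentially last iteration survives
    // sum:    per-thread partial sums are combined at the end
    last = i + offset;
    sum += i;
  }
  std::printf("last = %d, sum = %.0f\n", last, sum); // last = 109, sum = 4950
  return 0;
}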

False Sharing

This part explains the concept of caches in parallel computer architectures, discusses the problem of false sharing, and shows how to avoid it: find it here.
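
A minimal sketch of my own showing the usual remedy, padding per-thread data to cache-line size (the 64-byte line size and the limit of 64 threads are assumptions):

#include <cstdio>
#include <omp.h>

// Each thread increments its own counter. Without padding, neighboring
// counters would share a cache line, and every increment would bounce
// that line between the cores. Padding each counter to a full cache
// line (64 bytes is assumed here) avoids the false sharing.
struct alignas(64) PaddedCounter { long value = 0; };

int main() {
  PaddedCounter counter[64];            // assumes at most 64 threads
  #pragma omp parallel
  {
    int tid = omp_get_thread_num();
    for (int i = 0; i < 10000000; ++i)
      counter[tid].value++;
  }
  std::printf("counter[0] = %ld\n", counter[0].value);
  return 0;
}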

Tasking

This part introduces task parallelism in OpenMP. This concept enables the programmer to parallelize code regions with non-canonical loops, or regions that do not use loops at all (including recursive algorithms): find it here.
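
A classic example of such a non-canonical pattern (my own sketch, not necessarily the video's): one thread walks a linked list while the team executes a task per node.

#include <cstdio>

struct Node { int value; Node* next; };

void process(const Node* n) { std::printf("%d\n", n->value); }

int main() {
  Node n0{0, nullptr}, n1{1, &n0}, n2{2, &n1};   // the list 2 -> 1 -> 0
  #pragma omp parallel
  #pragma omp single                 // one thread walks the list ...
  for (Node* p = &n2; p != nullptr; p = p->next) {
    #pragma omp task firstprivate(p) // ... and creates a task per node
    process(p);
  }                                  // the team executes the tasks
  return 0;
}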

Tasking and Data Scoping

This part deepens the knowledge of OpenMP task parallelism and data scoping by using an artificial example: find it here.
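
Without reproducing that example, here is a small sketch of my own showing the rule that usually surprises people: variables that are not shared in the enclosing context become firstprivate on a task by default.

#include <cstdio>

int main() {
  int a = 1;
  #pragma omp parallel shared(a)
  #pragma omp single
  {
    int b = 2;
    // In the task, a stays shared (it is shared in the enclosing
    // context), while b becomes firstprivate by default: the task
    // receives a copy captured when the task is created.
    #pragma omp task
    {
      a = 10;   // visible after the task completes
      b = 20;   // changes only the task-local copy
    }
    #pragma omp taskwait
    std::printf("a = %d, b = %d\n", a, b);   // prints a = 10, b = 2
  }
  return 0;
}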

Tasking and Synchronization

This session discusses different synchronization mechanisms for OpenMP task parallelism: find it here.
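
A minimal sketch of my own (not necessarily the video's example) combining the two most common mechanisms, taskwait and task dependencies:

#include <cstdio>

int main() {
  int x = 0, y = 0;
  #pragma omp parallel
  #pragma omp single
  {
    // Dependencies order sibling tasks: the second task may only start
    // once the first has completed its write to x.
    #pragma omp task depend(out : x)
    x = 42;                        // producer
    #pragma omp task depend(in : x) depend(out : y)
    y = x + 1;                     // consumer, runs after the producer
    #pragma omp taskwait           // wait for the child tasks created so far
    std::printf("x = %d, y = %d\n", x, y);   // x = 42, y = 43
  }
  return 0;
}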

Loops and Tasks

This part presents the taskloop construct in OpenMP: find it here.
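
A minimal taskloop sketch of my own (not necessarily the video's example); the grainsize clause bounds how many iterations each generated task receives:

#include <cstdio>
#include <vector>

int main() {
  const int n = 1000000;
  std::vector<double> a(n);
  #pragma omp parallel
  #pragma omp single
  // taskloop turns the iteration space into tasks instead of using a
  // worksharing loop; grainsize bounds the iterations per task.
  #pragma omp taskloop grainsize(10000)
  for (int i = 0; i < n; ++i)
    a[i] = 1.0;
  std::printf("a[0] = %.1f\n", a[0]);
  return 0;
}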

Task Scheduling

This part explains how task scheduling works in OpenMP: find it here.

Non-Uniform Memory Access

This part explains how a non-uniform memory access (NUMA) architecture may influence the performance of OpenMP programs. It illustrates how to distribute data and bind threads across NUMA domains and how to avoid uncontrolled data or thread migration: find it here.
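
A minimal sketch of my own illustrating the first-touch technique (the suggested environment settings are one common choice, not the only one):

#include <cstdio>

int main() {
  const long n = 50000000;
  double* a = new double[n];   // allocation does not place the pages yet
  // First touch: physical pages are typically placed in the NUMA domain
  // of the thread that first writes them. Initializing with the same
  // static schedule as the compute loop keeps accesses mostly local.
  // Combine this with thread binding, e.g.
  //   OMP_PLACES=cores OMP_PROC_BIND=spread
  #pragma omp parallel for schedule(static)
  for (long i = 0; i < n; ++i)
    a[i] = 0.0;                // the first touch happens here
  #pragma omp parallel for schedule(static)
  for (long i = 0; i < n; ++i)
    a[i] += 1.0;               // same schedule, mostly local accesses
  std::printf("a[0] = %.1f\n", a[0]);
  delete[] a;
  return 0;
}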

What is missing? Please let me know which aspects of OpenMP you would like to see covered in one of the next small bites. Just so you know, some parts on GPU programming with OpenMP are already in preparation and will hopefully be released during the next lecture-free period.

Excellent price-performance of SC20 tutorials

You are probably aware that SC20 will be a virtual (= online) event. It will start in about two weeks with the Tutorials (November 9 to 11), followed by the Workshops (November 11 to 13), the Keynotes, Awards, and Top500 (and more, November 16), and finally the Technical Program and Invited Talks (and more, November 17 to 19).

However, the switch to an online format brings a great advantage to the SC20 tutorials that I only became aware of very recently: tutorials will be recorded and available online, on demand, for six months. This gives you the unique chance to attend every tutorial you might possibly be interested in!

If you are interested in OpenMP, there are three tutorials to choose from. The OpenMP web presence has a nice overview. As usual, I am part of the Advanced OpenMP: Host Performance and 5.0 Features tutorial. Our focus is on performance aspects, e.g., data/thread locality, false sharing, and exploitation of vector units. All topics are accompanied by case studies, and we will discuss the corresponding OpenMP language features in depth. Please note that we will solely cover performance programming for multi-core architectures (not accelerators):

Our title slide: Advanced OpenMP tutorial at SC20

Using C++ with OpenMP in jupyter notebooks

Many people know Jupyter notebooks as an interactive, web-based environment for Python programming, often in the context of data analysis. The so-called kernel is the interpreter used to evaluate the code cells; if the cling kernel, an interactive C++ interpreter, is used, it becomes possible to work interactively with C/C++ code. That can look like this:

C++ in jupyter notebook

In the IkapP project, funded by the Stifterverband and the state of North Rhine-Westphalia, one goal is to remove the entry barriers students face when using HPC systems in lectures. One step toward this goal is the creation of a virtual lab environment for parallel programming that can also be used for interactive experiments with different parallelization approaches. Users can, for example, interactively experience the performance effects of code changes on real HPC systems. Many parallel programming models are in use and relevant for our lectures, but we wanted to start with OpenMP and MPI. However, cling does not support OpenMP out of the box.

At the time of this writing, the current version of xeus-cling is 0.8.1, which is not based on a recent version of clang. In principle, OpenMP version 3.1 should therefore be supported, which means tasking is available but offloading is not. OpenMP “in action” in a Jupyter notebook can look like this:

C++ with OpenMP in jupyter notebook
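
If you want to try this yourself, a cell along the following lines should work once -fopenmp is enabled (a hypothetical example, not necessarily the code shown in the screenshot):

#include <omp.h>
#include <cstdio>

void hello() {
  #pragma omp parallel
  std::printf("Hello from thread %d of %d\n",
              omp_get_thread_num(), omp_get_num_threads());
}

hello();   // in cling, a statement like this can follow in the same or a later cell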

In order for such notebooks to work correctly, we had to fix a few things in the xeus-cling code, in particular to ensure correct output from multiple threads. The corresponding patches were created and submitted by Jonas Hahnfeld, a student worker in the IkapP project at RWTH. They have been accepted to mainline (#314, #315, #320, #332, #319, #316, #324, #325 (also submitted to xeus-python), and #329), but there has been no new release since our submission.

Compiling and Installing xeus-cling on CentOS 7.7

The production environment on RWTH’s HPC systems is CentOS 7.7. The build instructions were compiled by Jonas. In order to build xeus-cling for a Jupyter environment, do the following for each of these projects (in this order):

  • https://github.com/jarro2783/cxxopts
  • https://github.com/nlohmann/json
  • https://github.com/zeux/pugixml
  • https://github.com/xtensor-stack/xtl
  • https://github.com/zeromq/libzmq
  • https://github.com/zeromq/cppzmq
  • https://github.com/jupyter-xeus/xeus
  • https://github.com/jupyter-xeus/xeus-cling

$ git clone https://github.com/org/repo src
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX=/path/to/install/xeus-cling/ \
    -DCMAKE_C_COMPILER=/path/to/install/xeus-cling/bin/clang \
    -DCMAKE_CXX_COMPILER=/path/to/install/xeus-cling/bin/clang++ \
    ../src
$ make -j32
$ make install

After that, activate the kernels via

for k in xcpp11 xcpp14 xcpp17; do
cp -r ../../../../xeus-cling/share/jupyter/kernels/$k /path/to/install/jupyter/share/jupyter/kernels/;
done

and add -fopenmp to each kernel.json to enable OpenMP. Finally, let cling find the OpenMP runtime library by adding the following to jupyterhub_config.py:

c.Spawner.environment = {
  'LD_LIBRARY_PATH': '/path/to/install/xeus-cling/lib/',
}

The Ongoing Evolution of OpenMP

Usually, I do not use this blog to talk directly about my work. I want to make one exception to point to the following article, titled The Ongoing Evolution of OpenMP. It appeared online at IEEE Xplore and is accessible here: https://ieeexplore.ieee.org/document/8434208/.

From the abstract:
This paper presents an overview of the past, present and future of the OpenMP application programming interface (API). While the API originally specified a small set of directives that guided shared memory fork-join parallelization of loops and program sections, OpenMP now provides a richer set of directives that capture a wide range of parallelization strategies that are not strictly limited to shared memory. As we look toward the future of OpenMP, we immediately see further evolution of the support for that range of parallelization strategies and the addition of direct support for debugging and performance analysis tools. Looking beyond the next major release of the specification of the OpenMP API, we expect the specification eventually to include support for more parallelization strategies and to embrace closer integration into its Fortran, C and, in particular, C++ base languages, which will likely require the API to adopt additional programming abstractions.

Webinar: Using OpenMP Tasking

With the increasing prevalence of multi-core processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model. Since version 3.0, released in 2008, OpenMP has offered tasking to support the creation of composable parallel software blocks and the parallelization of irregular algorithms. However, the tasking concept requires a change in the way developers reason about the structure of their code and hence how to expose its parallelism. In this webinar, we give an overview of the OpenMP tasking language features and performance aspects, such as introducing cut-off mechanisms and exploiting task dependencies.

The recording from the webinar is now available here: https://youtu.be/C8ekL2x4hZk.

Book: Using OpenMP – The Next Step

If everything goes according to plan, the book Using OpenMP – The Next Step will appear in time for SC17 (November 2017). The book is already available for pre-order on Amazon: https://www.amazon.de/Using-Openmp-Next-Step-Accelerators/dp/0262534789/ref=sr_1_1?ie=UTF8&qid=1504249007&sr=8-1&keywords=using+openmp.

Book Cover: Using OpenMP – The Next Step

From the book’s blurb:

This book offers an up-to-date, practical tutorial on advanced features in the widely used OpenMP parallel programming model. Building on the previous volume, Using OpenMP: Portable Shared Memory Parallel Programming (MIT Press), this book goes beyond the fundamentals to focus on what has been changed and added to OpenMP since the 2.5 specifications. It emphasizes four major and advanced areas: thread affinity (keeping threads close to their data), accelerators (special hardware to speed up certain operations), tasking (to parallelize algorithms with a less regular execution flow), and SIMD (hardware assisted operations on vectors).

As in the earlier volume, the focus is on practical usage, with major new features primarily introduced by example. Examples are restricted to C and C++, but are straightforward enough to be understood by Fortran programmers. After a brief recap of OpenMP 2.5, the book reviews enhancements introduced since 2.5. It then discusses in detail tasking, a major functionality enhancement; Non-Uniform Memory Access (NUMA) architectures, supported by OpenMP; SIMD, or Single Instruction Multiple Data; heterogeneous systems, a new parallel programming model to offload computation to accelerators; and the expected further development of OpenMP.

Webinar: Getting Performance from OpenMP Programs on NUMA Architectures

Most contemporary shared-memory systems expose a non-uniform memory access (NUMA) architecture, with implications for application performance. However, the OpenMP programming model does not provide explicit support for it. This 30-minute live webinar discusses approaches to getting the best performance from OpenMP applications on NUMA architectures.

The recording from the webinar is now available here: https://pop-coe.eu/blog/2nd-pop-webinar-getting-performance-from-openmp-programs-on-numa-architectures.