
The Design of OpenMP Thread Affinity

Exascale machines will employ significantly more threads than today, but even on current architectures controlling thread affinity is crucial to fuel all the cores and to maintain data affinity; yet both MPI and OpenMP lack a solution to this problem. This is the first sentence of our IWOMP 2012 paper with the same title as this blog post. The need for thread affinity in OpenMP has been demonstrated on several occasions. Inside the OpenMP Language Committee we formed the Affinity Subcommittee, and we have been working on this topic for several years now. Meanwhile almost all vendors have introduced their own extensions to support thread affinity, but they are all different and thus offer a clearly suboptimal user experience. Furthermore, they do not support nested OpenMP and in general they are static, meaning that only one affinity setting can be used for the whole program. For OpenMP 4.0, which is expected to be released as a draft in November 2012, we have a good thread affinity proposal on the table that not only standardizes existing vendor extensions, but also adds additional capabilities. This blog post presents this proposal along with some information on why things are the way they are. I welcome any comments or questions via email.

When we started thinking about affinity in general, we first tried to define a machine model, or rather a machine abstraction, and intended to use that to bind threads to cores as well as to possibly define a data layout. Over time I became convinced that this is not the right approach. Whatever method we used to describe the machine topology, we could always envision systems that would be very complicated to describe. Furthermore, describing the system could end up being a task performed by the user, which I think is too complicated for most of them. We also do not want to force users to think about an explicit mapping of threads to cores, because for 95 % of OpenMP programmers we think this is too low level. And last but not least, if a new machine came along that could not be comfortably described by our method, OpenMP develops too slowly to be extended to support it in time.

To overcome this problem, the current proposal, as developed by Alexandre E. Eichenberger, myself and the members of the OpenMP Language Committee Affinity Subcommittee, introduces the concepts of a place and a place-list. A place is defined as a set of execution units capable of executing OpenMP threads. For now you may think of a place as a set of cores. A place-list is an ordered list of places; the ordering is important. It can be defined either by using abstract names or by explicitly constructing the places, enumerating the cores. The place-list will be used together with an affinity policy to bind the OpenMP threads in a team of a parallel region to the places in the list. It can be specified via the new environment variable OMP_PLACES (the name might still change). Let's illustrate that with an example: the figure below depicts a very standard system (node 0) with two sockets (socket 0 and socket 1), every socket having four cores (core 0 to core 3 on socket 0), and finally every core has two hardware-threads (t0 and t1), i.e. every core can execute two threads simultaneously.

System Topology Example: 2 sockets, 4 cores, 2 hw-threads

Let's construct a place-list consisting of eight places, with every place being a physical core consisting of two hardware-threads (I often call those logical threads). All of the following methods are equivalent, but we expect almost all users to use the first option:

OMP_PLACES=cores
OMP_PLACES="(0,1),(2,3),(4,5),(6,7),(8,9),(10,11),(12,13),(14,15)"
OMP_PLACES="(0:2):2:8"

For now we define three abstract names to describe the place-list: hwthreads, cores and sockets. It is up to the implementation to define what is meant to be a "core", for instance, but of course we will provide some hints. The wording on that is not yet completed, but it will be something along the lines of: hwthreads := smallest unit of execution capable of executing an OpenMP thread; cores := set of execution units in which more than one hardware-thread shares some resources such as caches; sockets := physical package of multiple cores. On the example system above, hwthreads would thus yield 16 places, cores 8 places, and sockets 2 places.

Of course defining a place-list alone does not lead to any thread affinity. As I said above, the place-list is just used to define the places the threads of a parallel region can be bound to. In our proposal, the user does not have to define an explicit mapping of threads to places (or execution units in a place); instead, the user can specify a so-called affinity policy via the new affinity clause, which can be put on a parallel region. Our proposal currently consists of three affinity policies that allow the place-list to be exploited in several possible ways (the names might still change); a short code sketch follows the list below:

  • SPREAD: spread OpenMP threads as evenly as possible among the places. The place-list will be partitioned, so that subsequent threads (i.e. nested OpenMP) will only be allocated within the partition. Given the place-list outlined above, this policy would provide the most dedicated hardware resources to the OpenMP program.
  • CLOSE: pack OpenMP threads close to the master thread. There is no partitioning. Given the place-list from above, this policy would be used if sharing of resources among threads is desirable.
  • MASTER: collocate OpenMP threads with the master thread (in the same place). This ensures maximum locality with respect to the master thread.
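
To make this more concrete, here is a minimal C sketch that requests each of the three policies. It uses the affinity clause and the policy names as described in this proposal, which may still be renamed before the final specification; it is meant purely as an illustration of the intended usage, not as code for any shipping compiler. Assume the program is started with OMP_PLACES=cores on the example machine, so the place-list consists of eight places, one per core.

#include <stdio.h>
#include <omp.h>

/* Illustration only: the affinity clause and the policy names follow the
 * proposal described in this post and may be spelled differently in the
 * final specification. Assume OMP_PLACES=cores on the example machine,
 * i.e. a place-list of 8 places (one per physical core). */
int main(void)
{
    /* SPREAD: 4 threads are spread evenly over the 8 places, so every
     * thread receives its own partition of 2 places. */
    #pragma omp parallel num_threads(4) affinity(spread)
    {
        printf("spread: thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* CLOSE: 4 threads are packed onto the places closest to the master
     * thread; there is no partitioning. */
    #pragma omp parallel num_threads(4) affinity(close)
    {
        printf("close: thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    /* MASTER: all 4 threads are collocated in the place of the master
     * thread. */
    #pragma omp parallel num_threads(4) affinity(master)
    {
        printf("master: thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }

    return 0;
}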

It is important to understand that these affinity policies influence the allocation of threads to places in the place-list, not directly to the system topology. In my example the (ordered!) place-list was designed so that two threads far apart from each other in the list also end up on physical cores far apart in the system. Although we expect this to be the standard use case, it does not necessarily have to be this way.

Let's take a closer look at what the affinity policies do by looking at some examples. The figure below shows what SPREAD will do. The green box denotes the place-list, and for every number of threads >= 2 the place-list will be partitioned when a parallel region with this affinity clause is encountered. This supports nested OpenMP, as we will see later on. Every thread receives its own sub-place-list. If there are more threads than places, more than one thread has to be allocated per place. This is done so that threads with consecutive OpenMP thread ids i and i+1 end up in the same place (in this example with 16 threads, the threads with OpenMP thread ids 0 and 1 are on place 0).

Affinity Example: SPREAD

Let's also take a brief look at the two other affinity policies we are proposing, namely CLOSE and MASTER. Both are illustrated in the figure below. For CLOSE, threads i and i+1 are meant to reside on places j and j+1, unless more than one thread has to be allocated per place. For MASTER, all threads will be put into the same place the master thread is running on, unless this cannot be fulfilled by the implementation for some reason.

Affinity Example: CLOSE and MASTER

When discussing the proprietary support offered by OpenMP implementers, I said that their solutions are static for the whole program lifetime. In our proposal the initial place-list is fixed, but the affinity policy can of course be set dynamically. Furthermore, the figure below shows how nested OpenMP is supported; a code sketch follows the figure. The outer parallel region uses the SPREAD affinity policy to create partitions and to maximize resource usage. The inner parallel region uses CLOSE to stay within the respective partition.

Affinity Example with Nested OpenMP: SPREAD + CLOSE
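
A minimal sketch of this nested pattern, again with the proposal's clause and policy names (which may still change), could look as follows:

#include <stdio.h>
#include <omp.h>

/* Illustration only: the affinity clause follows the proposal described in
 * this post. Assume OMP_PLACES=cores on the example machine (8 places)
 * and nested parallelism enabled. */
int main(void)
{
    omp_set_nested(1);  /* enable nested parallel regions */

    /* Outer region: SPREAD partitions the 8 places into 2 partitions of
     * 4 places each, one partition per outer thread. */
    #pragma omp parallel num_threads(2) affinity(spread)
    {
        int outer = omp_get_thread_num();

        /* Inner region: CLOSE keeps the 4 inner threads within the
         * partition created for their outer thread. */
        #pragma omp parallel num_threads(4) affinity(close)
        {
            printf("outer thread %d, inner thread %d\n",
                   outer, omp_get_thread_num());
        }
    }

    return 0;
}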

Whenever a new feature is intended to go into the OpenMP specification, we require the existence of at least one reference implementation, not only to prove implementability, but also to get an estimate of the effort it takes to implement. The reference implementation for this proposal was done by Alexandre E. Eichenberger in an experimental OpenMP runtime for the IBM BlueGene/Q system. Our proposal does not affect performance-critical parts of the implementation, "just" the thread selection and allocation parts. According to Alexandre's findings the total overhead was less than 1 %, which is on the order of system noise.

Finally, let me summarize a few important properties and implications that I have not discussed in detail so far:

  • If the place-list is constructed by enumerating the cores, it will be done with the same naming scheme as used by the operating system. This approach is also used by all vendor-proprietary extensions and removes the need to define an explicit naming scheme, which might confuse users if it differed from the operating system and might also become inappropriate for future system topologies that we cannot foresee today.
  • Every implementation will provide a default place-list to an OpenMP program and has to document what that default place-list is. I guess that implementations will provide something like cores or hwthreads as a default. This corresponds to the behavior for the number of threads: if not specified by the user, it is also implementation defined (some implementations use just 1 thread, others as many as there are cores in the system).
  • When one (or more) threads are allocated to a place, they are allowed to migrate within this place if it contains more than one execution unit (i.e. physical core). This allows for both an explicit thread-to-core binding as well as a more flexible binding, such as threads to a socket, depending on how the place-list is constructed and which affinity policy is used.
  • The binding of the initial thread may occur as early as the runtime deems appropriate, but no later than when the first parallel region is encountered.

Thanks for reading all the way down here. More details can be found in the paper, which is published by Springer in the IWOMP 2012 proceedings. Again, I welcome any comments or questions via email.

Upcoming Events in June 2009

Let me point you to some HPC events in June 2009.

5th International Workshop on OpenMP (IWOMP 2009) in Dresden, Germany. The IWOMP workshop series focuses on the development and usage of OpenMP. This year's conference is titled Evolving OpenMP in an Age of Extreme Parallelism. I find this phrase a bit funny, but nevertheless one can clearly observe a trend towards Shared-Memory parallelization on the nodes of even the extremely parallel machines. Attached to the conference is a two-day meeting of the OpenMP language committee. The language committee is currently discussing a long list of possible items for a future OpenMP 3.1 or 4.0 specification, including but not limited to my favorites, Composability (especially for C++) and Performance on cc-NUMA systems. Bronis de Supinski, the recently appointed Chair of the OpenMP Language Committee, will give a talk on the current activities of the LC and what the future of OpenMP might look like; I hope the slides will be made public soon after the talk. Right before the conference there will also be a one-day tutorial for all people interested in learning OpenMP (mainly given by Ruud van der Pas, strongly recommended).

High Performance Computing Symposium 2009 (HPCS) in Kingston, Canada. HPCS is a multidisciplinary conference that focuses on research involving High Performance Computing, and this year it takes place in Kingston. I've never been to that conference series, so I am pretty curious what it will be like. Attached to the conference are a couple of workshops, including Using OpenMP 3.0 for Parallel Programming on Multicore Systems, run again by Ruud van der Pas and us, and Parallel Programming in Visual Studio 2008 on Windows HPC Server 2008, organized by us as well. Here in Aachen, the interest in our Windows-HPC compute service is still growing nicely, and we usually have around 50 new participants in our bi-yearly training events. The HPCVL people explicitly asked us to cover parallel programming on Windows in the OpenMP workshop, so we separated this aspect out into its own workshop to serve it well. The workshop program can be found here.

International Supercomputing Conference (ISC 2009) in Hamburg, Germany. ISC bills itself as Europe's premier HPC event; while this is probably true, it is of course smaller than the SC events in the US, but usually better organized. Without question you will find numerous interesting exhibits and can listen to several talks (mostly by invited speakers), so please excuse the self-marketing when I point to the Jülich Aachen Research Alliance (JARA) booth in the research space, where we will show an interactive visualization of a large-scale numerical simulation (damage of blood cells by a ventricular device, pretty cool) as well as give an overview of our research activities focused on Shared-Memory parallelization (we will distribute OpenMP syntax references again). If you are interested in HPC software development on Windows, feel invited to stop by our demo station at the Microsoft booth, where we will have many demos regarding HPC Application Development on Windows (Visual Studio, Allinea DDTlite and Vampir are confirmed, maybe more …). And if you are closely monitoring the HPC market, you have probably heard about ScaleMP already, the company aggregating multiple x86 systems into a single (virtual) system over InfiniBand, which is obviously very interesting for Shared-Memory parallelization. If you are interested, you can hear about our experiences with this architecture for HPC.

If you want to meet up during any of these events just drop me an email.