
PPCES Video Lectures on OpenMP, MPI and Xeon Phi released

Since 2001, the IT Center (formerly: Center for Computing and Communication) of RWTH Aachen University has offered a one-week HPC workshop on parallel programming every spring. This course is not restricted to scientists and engineers from our university; in fact, about 30% of the attendees are external each time. This year we were very happy about a record attendance of up to 85 people at the OpenMP lectures on Wednesday. As usual, we publish all course materials online, but this year we also created screencasts of all presentations: you see the slides and the live demos, and you hear the presenter talk. This blog post contains links to both the screencasts and the other course material, sorted by topic.


OpenMP

We have three talks as an introduction to OpenMP from Wednesday and two talks on selected topics, namely vectorization and tools, from Thursday.

- Introduction to OpenMP Programming (part 1), by Christian Terboven
- Getting OpenMP up to Speed, by Ruud van der Pas
- Introduction to OpenMP Programming (part 2), by Christian Terboven
- Vectorization with OpenMP, by Dirk Schmidl
- Tools for OpenMP Programming, by Dirk Schmidl

MPI

We have two talks as an introduction to MPI and one on using the Vampir toolchain, all from Tuesday.

- Introduction to MPI Programming (part 1), by Hristo Iliev
- Introduction to MPI Programming (part 2), by Hristo Iliev
- Introduction to VampirTrace and Vampir, by Hristo Iliev

Intel Xeon Phi

We put a special focus on presenting this architecture: there is one overview talk and one talk on using OpenMP 4.0 constructs for it.

- Programming the Intel Xeon Phi Coprocessor Overview, by Tim Cramer
- OpenMP 4.0 for Accelerators, by Christian Terboven

Other talks

Some more talks, for instance on using our cluster or on the basics of parallel computer architectures, can be found on our YouTube channel.

Gaussian Elimination Squad wins Intel Parallel Universe Computing Challenge at SC13

The German team with the glorious name Gaussian Elimination Squad took first place in the Intel Parallel Universe Computing Challenge! Each round of the challenge consisted of two parts: the first was a trivia challenge with 20 questions about computing, computer history, programming languages, and the SC conference series; the second was a coding challenge, which gave each team ten minutes to speed up a piece of code they had never seen before as much as possible. On top of it all, the audience could watch what the teams were doing on two giant screens. Georg Hager, our team captain, has a blog post with all the details.

The competition was really a lot of fun and a nice distraction from an otherwise pretty busy SC13. There is a short video capturing the atmosphere during the final competition and also a brief article on insideHPC.

The Gaussian Elimination Squad represented the German HPC community, with members from RWTH Aachen (Christian Terboven and Joachim Protze), Jülich Supercomputing Center (Damian Alvarez), ZIH Dresden (Michael Kluge and Guido Juckeland), TU Darmstadt (Christian Iwainsky), Leibniz Supercomputing Center (Michael Ott), and Erlangen Regional Computing Center (Gerhard Wellein and Georg Hager). As only four team members were allowed per match, I was lucky to play together with Gerhard and Georg in all rounds, but the others helped us by shouting advice and answers they thought were correct.

Upcoming Events in March 2011

Let me point you to some HPC events in March 2011.

3rd Parallel Programming in Computational Engineering and Science (PPCES) Workshop. This event continues the tradition of the annual week-long workshops held in Aachen every spring since 2001; this year it takes place from March 21st to March 25th. As always, the agenda differs a little from the previous year's. The week begins on Monday afternoon with a series of overview presentations, in which we are very happy to introduce the upcoming RWTH Compute Cluster to be delivered by Bull. Throughout the week, we will cover serial and parallel programming using OpenMP and MPI in Fortran and C / C++, as well as performance tuning, addressing both Linux and Windows platforms. Following the positive experience of last year, we are happy to have a renowned speaker give an introduction to GPGPU architectures and programming on Friday: Michael Wolfe from PGI. All further information can be found on the event website.

4th Meeting of the German Windows-HPC User Group. The fourth meeting of the German Windows HPC User Group will take place in Karlsruhe on March 31st and April 1st, kindly hosted by the KIT. As in previous years, we will learn about and discuss Microsoft's current and future products, and users will present their (good and not so good) experiences with doing HPC on Windows. This year, we will have an expert discussion panel, for which the audience is invited to ask (tough) questions to fire up the discussion.

RWTH Aachen gets a new 300 Teraflops HPC system from Bull

While I usually do not repeat press releases in my blog, this time I will, since we are all a little proud of the achievement: RWTH Aachen University orders Bull supercomputer to support its scientific, industrial and environmental research. Getting this system was a lot of work, and preparing for it still is. The compute power of the machine totals 300 Teraflops. The focus of our center is not just on running this machine, but on providing HPC-specific support and ensuring efficient operation. We are confident that in Bull we have found a competent partner to investigate these and other topics in close collaboration.

1.5 hours for an Introduction to Parallel Programming

Whenever Prof. Christian Bischof, the head of our institute, is on duty to give the Introduction to Programming (de) lecture for first-year Computer Science students, he is keen on giving the students a glimpse of parallel programming. As in 2006, I was the guest lecturer to whom this task was assigned. While coping with parallelism in its various aspects consumes most of my work day, these students have just started to learn Java as their first programming language. As in 2006, I worried about how to motivate the students, what level of detail would be reasonable, and which tools and techniques to present within a timeframe of just 1.5 hours. In the following paragraphs I briefly explain what I did, and why. The slides used in the lecture are available online: Introduction to Parallel Programming; and my student Christoph Rackwitz made a screencast of the lecture available here (although the slides are in English, I am speaking German).

Programming Language: As stated above, the target audience is first-year Computer Science students attending the Introduction to Programming course, and the programming language taught in that course is Java. In a previous lecture we once tried to present the examples and exercises in C, assuming that C is very similar to Java, but the students did not like that very much. Although they were able to follow the lecture and were mostly successful in the exercises, C just felt foreign to most of them. Furthermore, C is hardly used in later courses, except for System Programming. The problem with Java, however, is that it is not commonly used in technical computing, and the native approach to parallel programming in Java is oriented more towards building concurrent (business) applications than towards reasoning about parallel algorithms. Despite this issue we decided to use Java for the Introduction to Parallel Programming lecture in order to keep the students comfortable, and not to mess around with the example and exercise environment already provided for them. The overall goal of the lecture was to give the students an idea of the fundamental shift towards parallelism, to explain the basic concepts, and to motivate them to develop an interest in this topic. We considered this to be independent of the programming language.

Parallelization Model: There is Shared-Memory parallelization, and there is Message-Passing for clusters. It would be great to teach both, and of course we do that in advanced courses, but I do not think it is reasonable to cover both in an introductory session. The trend towards Multi-Core and Many-Core processors is the obvious foundation for motivating the growing need for parallel programming. Given that, and the requirement to allow the students to work on examples and exercises on their systems at home, we decided to discuss multicore architectures, to present one model for Shared-Memory parallel programming in detail, and to give just an overview of what a cluster is. Furthermore, we hoped that the ability to speed up the example programs on their very own desktops or laptops would add some motivation; experiencing parallelism this way feels more real than logging in to a remote system in our cluster. In addition, we expected that providing instructions to set up a Shared-Memory parallelization tool on a student's laptop would be simpler than for a Message-Passing environment (this turned out to be true).

Parallelization Paradigm: Given our choice to cover Shared-Memory parallelization, and the requirement to use Java and to provide a suitable environment for the examples and exercises, we basically had three choices: (i) Java threads, (ii) OpenMP for Java, and (iii) Parallel Java (PJ). Maybe we could have looked at some other, more obscure paradigms as well, but I do not think they would have contributed any new aspects. In essence, Java threads are similar to POSIX threads and Win32 threads: well-suited for building server-type programs, but not good for parallelizing algorithms or for introductory courses. Using this model, you first have to talk about setting up threads and implementing synchronization before you can start to think parallel ;-) (a sketch of this boilerplate follows after the list below). I like OpenMP a lot for this purpose, but there is no official standard of OpenMP for Java. We looked at two implementations:

  1. JOMP, by the Edinburgh Parallel Computing Centre (EPCC). To our knowledge, this was the first implementation of OpenMP for Java. It comes as a preprocessor and is easy to use, but its development stopped long ago and it does not work well with Java 1.4 and later.
  2. JaMP, by the University of Erlangen. This implementation is based on the Eclipse compiler and even extends Java for OpenMP with more constructs than the original standard offers, while still not providing full support for OpenMP 2.5. Anyhow, it worked fine with Java 1.6 and was easy to install and distribute among the students, so we used it in the lecture.

Parallel Java (short: PJ), by Alan Kaminsky at the Rochester Institute of Technology, also provides means for Shared-Memory parallelization, but it is primarily oriented towards Message-Passing. Since it provides a very nice and simplified MPI-style API, we would have used it had we included cluster programming; sticking to Shared-Memory parallelization, however, we went for JaMP.
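To make the Java-threads argument concrete, here is a minimal sketch of summing an array with raw threads; it is my own illustration rather than material from the lecture, and all class and variable names are made up. Note how much setup, chunking, and join logic surrounds the two lines that actually compute something:

    // Hypothetical sketch, not taken from the lecture; all names are mine.
    // Summing an array with raw Java threads, in Java 1.6-era style
    // (hence the anonymous Runnable instead of a lambda).
    public class ThreadedSum {
        public static void main(String[] args) throws InterruptedException {
            final double[] data = new double[1000000];
            java.util.Arrays.fill(data, 1.0);

            final int numThreads = Runtime.getRuntime().availableProcessors();
            final double[] partialSums = new double[numThreads];
            Thread[] threads = new Thread[numThreads];

            for (int t = 0; t < numThreads; t++) {
                final int id = t;
                threads[t] = new Thread(new Runnable() {
                    public void run() {
                        // Each thread sums its own contiguous chunk of the array.
                        int chunk = data.length / numThreads;
                        int lo = id * chunk;
                        int hi = (id == numThreads - 1) ? data.length : lo + chunk;
                        double sum = 0.0;
                        for (int i = lo; i < hi; i++) {
                            sum += data[i];
                        }
                        partialSums[id] = sum; // no race: each thread owns one slot
                    }
                });
                threads[t].start();
            }

            double total = 0.0;
            for (int t = 0; t < numThreads; t++) {
                threads[t].join();       // wait for each worker to finish
                total += partialSums[t]; // combine the partial results
            }
            System.out.println("Sum = " + total);
        }
    }

With an OpenMP-style directive, the same computation collapses back to the sequential loop plus a single annotation, which is exactly what makes the OpenMP approach attractive for an introductory lecture.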

Content: What should be covered in just 1.5 hours? Of course we needed a motivation at the beginning for why parallel programming will become more and more important in the future. We also explained why the industry is shifting towards multicore architectures, and what implications this will or may have. As explained above, the largest part of the lecture was spent on OpenMP for Java, along with some examples. We started with a brief introduction on how to use JaMP and what OpenMP programs look like, then covered Worksharing and Data Scoping with several examples. I think experiencing a Data Race is a very important thing every parallel programmer should have gone through :-), as is learning about reductions; a sketch of such an example follows below. That was about it for the OpenMP part. The last minutes of the lecture were spent on clusters and their principal ideas, followed by a summary.
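To give an impression of the Data Race and reduction part, here is a small, hypothetical example written with JaMP's comment-based directive syntax (//#omp ...); the loop and all names are my reconstruction, not the original exercise material:

    // Hypothetical sketch in the spirit of the lecture, not the original
    // exercise. The //#omp comments are JaMP directives; a plain Java
    // compiler simply ignores them and runs the loops sequentially.
    public class RaceAndReduction {
        public static void main(String[] args) {
            final int n = 10000000;
            double sum = 0.0;

            // BROKEN: 'sum' is shared among all threads, so the concurrent
            // updates race and the printed result varies from run to run.
            //#omp parallel for
            for (int i = 0; i < n; i++) {
                sum += 1.0 / n;
            }
            System.out.println("racy sum    = " + sum);

            // FIXED: the reduction clause scopes 'sum' as private to each
            // thread and safely combines the copies at the end of the loop.
            sum = 0.0;
            //#omp parallel for reduction(+:sum)
            for (int i = 0; i < n; i++) {
                sum += 1.0 / n;
            }
            System.out.println("reduced sum = " + sum);
        }
    }

A nice side effect of directives living in comments is that the very same file still compiles and runs sequentially with a plain Java compiler, which made distributing the exercises to the students painless.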

Given the constraints and our reasoning outlined above, we ended up using Java as the programming language and JaMP as the paradigm to teach Shared-Memory parallelization, with clusters only briefly mentioned. Although the official course evaluation is not finished yet, we got pretty positive feedback on the lecture itself, and the exercises were well-accepted. What unnerves me is the fact that there is no real OpenMP for Java. The Erlangen team provides a good implementation along with a compiler, which served our examples and exercises well, but it does not offer full OpenMP 2.5 support, not to speak of OpenMP 3.0. Having a full OpenMP for Java implementation at hand would be a very valuable tool for teaching parallel programming to first-year students, since Java is the language of choice not only at RWTH Aachen University.

Do you have other opinions, experiences, or ideas? I am always up for a discussion.