Tag Archives: Visual Studio

Event Announcement: Microsoft Azure Compute Tutorial

This time I am announcing an event unlike any we have held before: on November 5th, 2012, we will conduct a Microsoft Azure Compute Tutorial with speakers from the European Microsoft Innovation Center in Aachen. What we mean by “compute” is not quite what HPC people might think of as computing. The rationale is the following:

Cloud computing enables the use of computing resources provided as a service via a network (e.g. the internet). One cloud platform is Microsoft’s Windows Azure. It can be used to build, deploy and manage applications in the cloud, which in this case consists of Microsoft-managed data centers. This workshop will introduce the Microsoft Azure facilities with a focus on compute services. In the morning of the tutorial we will introduce you to Azure computing, storage and services. For interested participants there will be a hands-on session after lunch, in which an example application is created step by step. More details and the link for registration can be found on the event website.

An Update on Building and Using BOOST.MPI on Windows HPC Server 2008 R2

My 2008 blog post on Building and Using BOOST.MPI on Windows HPC Server 2008 still generates quite a bit of traffic. Since some things have changed since then, I thought an updated how-to could help those visitors. Again, this post focuses on building boost.mpi with the various versions of MS-MPI and does not cover all aspects of building boost on Windows (see Getting Started on Windows for that).

The problem that remains is that the MPI auto-configuration only looks for MS-MPI v1, which came with the Compute Cluster Pack and was typically installed to the directory C:\Program Files\Microsoft Compute Cluster Pack. MS-MPI v2, which comes with the Microsoft HPC Pack 2008 [R2], is typically installed to the directory C:\Program Files\Microsoft HPC Pack 2008 [R2] SDK, but the auto-configuration does not examine these directories. In the old post I explained where to change the path the auto-configuration looks at. Of course, this is not what one expects from an “auto”-configuration tool. Extending the mpi.jam file to search all standard directories where MS-MPI might be installed turned out to be pretty simple. You can download my modified mpi.jam for boost 1.46.1 supporting MS-MPI v1 and v2 and replace the mpi.jam file that comes with the boost package. As a summary, below are the basic steps to build boost with boost.mpi on Windows (HPC) Server 2008 using Visual Studio and MS-MPI.

  1. Download boost 1.46.1 (82 MB), which is the most current version at the time of this writing (May 13th, 2011).
  2. Extract the archive. For the rest of the instructions I will assume X:\src.boost_1_46_1 as the directory the archive has been extracted into.
  3. Open a Visual Studio command prompt from the Visual Studio Tools submenu. Depending on what you intend to build, you have to use the 32-bit or 64-bit compiler environment. Execute all commands listed in the rest of the instructions from within this command prompt.
  4. Run bootstrap.bat. This will build bjam.exe.
  5. Modify the mpi.jam file located in the tools\build\v2\tools subdirectory to search for MS-MPI in the right place, or use my modified mpi.jam for boost 1.46.1 supporting MS-MPI v1 and v2 instead.
  6. Edit the user-config.jam file located in the tools\build\v2 subdirectory to contain the following line: using mpi ;.
  7. Execute the following command to start the build and installation process: bjam.exe --build-dir=x:\src.boost_1_46_1\build\vs90-64 --prefix=x:\boost_1_46_1\vs90-64 install. Please note that I use different directories in the --build-dir and --prefix options, since I intend to remove the X:\src.boost_1_46_1 directory once boost is installed. A debug build in particular may use a significant amount of disk space. (The complete command sequence is condensed after this list.)
  8. Wait…
  9. There are several other options that you might want to explore, but in many cases the defaults do just fine. Using the command line from above, on Windows you will get static multi-threaded libraries in debug and release mode, using the shared runtime. On Windows the default toolset is msvc, which is the Visual Studio compiler. You can change that via the toolset=xxx option; for example, insert toolset=intel into the command line above, just before install, if you want to build using the Intel compilers.
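
For reference, and assuming the directory layout used above, the whole procedure condenses to the following commands in a 64-bit Visual Studio 2008 command prompt (the rem lines simply mark the manual editing steps):

    cd /d X:\src.boost_1_46_1
    bootstrap.bat
    rem edit tools\build\v2\tools\mpi.jam (or replace it with the modified version linked above)
    rem add the line "using mpi ;" to tools\build\v2\user-config.jam
    bjam.exe --build-dir=X:\src.boost_1_46_1\build\vs90-64 --prefix=X:\boost_1_46_1\vs90-64 install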

Since it is inconvenient to change mpi.jam whenever you build a new version of boost, I filed a bug report on this and proposed extending the search path to include the MS-MPI v2 locations as well.

In order to use this build of boost, you have to add X:\boost_1_46_1\vs90-64\include\boost-1_46_1 to the list of include directories of your projects, and X:\boost_1_46_1\vs90-64\lib to the list of library directories (all according to the directory scheme I used above). In your code you simply #include <boost/mpi.hpp>. The boost header files contain directives to link the correct boost libraries automatically, but of course you have to link against the MS-MPI library you used to build boost with.
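
As a quick smoke test of the installation, a minimal boost.mpi program might look like the following sketch (the file name hello_mpi.cpp and the printed text are just an illustration, not part of the boost distribution):

    // hello_mpi.cpp - minimal boost.mpi sketch, assuming the build described above
    #include <boost/mpi.hpp>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        boost::mpi::environment env(argc, argv);  // initializes MPI, here MS-MPI
        boost::mpi::communicator world;           // wraps MPI_COMM_WORLD

        std::cout << "Hello from rank " << world.rank()
                  << " of " << world.size() << std::endl;
        return 0;
    }

Built with the include and library directories set as above and linked against msmpi.lib, it should run with something like mpiexec -n 4 hello_mpi.exe.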

Parallel Programming with Visual Studio 2010: F5 to the Cluster

You have probably noticed it already: Visual Studio 2010 (VS2010) is about to be released. As of today, the Microsoft website states that VS2010 will be launched on April 12th. I have been playing with various builds for more than a year now and I am really looking forward to taking this new version into production, since it comes loaded with plenty of new features for parallel programmers. After the launch you will probably have to pay for it, so grabbing the release candidate (RC) and taking a look at it right now may be worth it!

The feature I am talking about right now is the improved MPI Cluster Debugger that lets you execute MPI debugging jobs either locally or on a cluster, with only a minor configuration task involved. A few days ago at the German Windows-HPC User Group event, Keith Yedlin from Microsoft Corp. talked about it while I demoed it live. Daniel Moth has a blog post providing an overview of the feature, and MSDN has a walk-through on how to set it up, so I am not going to repeat all that content but instead explain how I am using it (and what I am still missing).

Examining variable values across MPI processes. This is a core requirement for a parallel debugger, as I stated in previous posts already. Visual Studio 2008 already allowed for this, but Visual Studio 2010 improved the way in which you inspect variables, especially if you are switching between threads and / or processes. I am not sure about the official name of the feature, but let me just call it laminate: when you put the mouse pointer over a variable, the tooltip that appears does not only show you the variable value; in VS2010 it also contains a pin that you can click to keep this window persistently in front of the editor.

Screenshot: Visual Studio 2010 Debugging Session, Laminated Variable

In my debugging workflow I got used to laminating exactly those variables that have different values on the threads and / or processes involved in my program. Whenever I switch to a different thread and / or process that indeed has a different value for that particular variable, the view turns red. This turned out to be very handy!

Screenshot: Visual Studio 2010 Debugging Session, Laminated Variable after Thread Switch
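
If you want to try the laminating yourself, any small MPI program with rank-dependent values will do. The following sketch (plain MPI, with hypothetical variable names, not taken from the demo shown above) gives you a value to pin that differs on every process:

    #include <mpi.h>
    #include <iostream>

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // differs on every process
        MPI_Comm_size(MPI_COMM_WORLD, &size);  // identical on every process

        int localValue = (rank + 1) * 100;     // a value to laminate in the debugger

        // Set a breakpoint on the next line, laminate rank and localValue,
        // and switch processes in the Debug -> Processes window: values that
        // changed are highlighted in red.
        std::cout << "Rank " << rank << " of " << size
                  << " holds " << localValue << std::endl;

        MPI_Finalize();
        return 0;
    }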

Debugging MPI applications on a Cluster. This became really usable only with Visual Studio 2010 – before, it was possible, but involved many configuration steps. In Visual Studio 2008 I complained about the task of setting up the right paths to mpiexec and mpishim – gone in Visual Studio 2010, thanks a lot. If you intend to use Microsoft MPI v2 (either on a cluster or on a local workstation) there is no need to configure anything at all, just switch to the MPI Cluster Debugger in the Debugging pane of your project settings. It gets even better: the field Run Environment allows you to select between execution on your local machine and on a Windows HPC Server 2008 cluster:

Screenshot: Visual Studio 2010 Debugging Configuration

Debugging on a cluster is particularly useful if your program cannot be run with a small number of processes, or if you have a large dataset to debug with and not enough memory on the local machine. The debugging session is submitted to the cluster just like a regular compute job. Setting up your debugging project for the cluster is pretty simple: just select the head node of your cluster and then the necessary resources, that is all. Hint: after selecting the head node, immediately select the node group (if any) to reduce the number of compute nodes that status information is queried from, since this may take a while and leave the Node Selector dialog unresponsive for a few moments.

Screenshot: Visual Studio 2010 Debugging Configuration, Node Selector

After the configuration step you are all set to F5 to the cluster – once your job has started you will see no difference from a local debugging session. If you open the Debug -> Processes window from the VS2010 menu, you can take a look at the Transport Qualifier column to see which node the debug processes are running on:

Screenshot: Visual Studio 2010 Debugging Session, MPI Processes on a Cluster

If you are interested, you can start the HPC Job Manager program to examine your debugging session. Unless you explicitly object, Visual Studio does the whole job of deploying the runtime for your application and cleaning everything up again afterwards:

Screenshot: HPC Job Manager displaying a Visual Studio 2010 Debugging Job (1/2)

Screenshot: HPC Job Manager displaying a Visual Studio 2010 Debugging Job (2/2)

What I am still missing. The new features are all really nice, but there are still two very important things that I am missing in an MPI debugger: (i) better management of the MPI processes during debugging, and (ii) a better way to investigate variable values across multiple processes (this may include arrays). Microsoft is probably working on Dev11 already, so I hope these two points will make it into the next product version, maybe even more…

Upcoming Events in March 2010

Let me point you to some HPC events in March 2010.

Windows HPC Deep Dive Seminar. Right before the User Group Meeting (see next paragraph) there will be a Windows HPC Deep Dive Seminar at Schloss Birlinghoven in Sankt Augustin, hosted by Fraunhofer SCAI and jointly organized by Fraunhofer SCAI, RWTH Aachen and Microsoft. The first day (March 8th) will cover an introduction to Windows HPC Server 2008 and a hands-on installation and configuration, as well as advanced management and diagnostic tasks. The second day (March 9th) will focus on parallel programming on Windows, ranging from using the Microsoft and Intel compilers and performance tools to MPI performance analysis and system performance assessment. The third day (March 10th) is all about Visual Studio 2010 and how to use this great tool for parallel programming on Windows, as well as some aspects of Windows HPC Server 2008 R2. All talks will be given by speakers from Microsoft, Fraunhofer SCAI or RWTH Aachen. There are some attendance costs attached to this seminar, which include lunch and dinner.

3rd Meeting of the German Windows-HPC User Group. Following the Deep Dive Seminar (see previous paragraph), the third incarnation of the German Windows-HPC User Group Meeting will take place on March 11th and 12th at Schloss Birlinghoven in Sankt Augustin, kindly hosted by Fraunhofer SCAI. As in the previous years, we have experts from Microsoft presenting technical aspects of current and future products, as well as users presenting their (good and not so good) experiences in using Windows HPC Server 2008, and folks from industry explaining their solutions and products. From my point of view, the last two User Group Meetings were pretty successful and I really hope many people show up at this event. Attendance is free, of course.

2nd Parallel Programming in Computational Engineering and Science Workshop. Continuing the tradition of the previous annual SunHPC events and last year’s PPCES, this year’s PPCES workshop will take place at RWTH Aachen University from March 22nd to 26th and cover parallel programming using OpenMP and MPI in Fortran and C/C++ on Linux and Windows platforms. This year we are going to include a half-day session on GPGPU programming as well. In general, we cover the basics of the various programming paradigms, development environments and tools, performance analysis and tuning tools, as well as advanced parallel programming patterns and case studies, always providing enough time for the audience to try things themselves during the lab sessions. This course is intended for all audiences; the only prerequisite you should bring is basic programming skills in either Fortran, C or C++. You can register for each of the five days separately to pick your favourite topics, or of course stay for all five days. Besides speakers from our center we will also have external speakers. All talks will be in English, and attendance is free of charge.

If you have any questions regarding any of these events, do not hesitate to contact me. Looking forward to seeing you there!

HPCS 2009 Workshop material: OpenMP + Visual Studio

As already announced in a previous post, I was involved in two workshops attached to HPCS 2009, hosted by the HPCVL in Kingston, ON, Canada. Being back in the office now, I found some time to upload my slide sets. Obviously I can only make my own slides public.

Using OpenMP 3.0 for Parallel Programming on Multicore Systems [abstract]

Ruud van der Pas, Sun Microsystems; Dieter an Mey and Christian Terboven, RWTH Aachen University.

Parallel Programming in Visual Studio 2008 on Windows HPC Server 2008 [abstract]

Christian Terboven, RWTH Aachen University.

Upcoming Events in June 2009

Let me point you to some HPC events in June 2009.

5th International Workshop on OpenMP (IWOMP 2009) in Dresden, Germany. The IWOMP workshop series focuses on the development and usage of OpenMP. This year’s conference is titled Evolving OpenMP in an Age of Extreme Parallelism – I think this phrase is a bit funny, but nevertheless one can clearly observe a trend towards Shared-Memory parallelization on the nodes of even the extremely parallel machines. Attached to the conference is a two-day meeting of the OpenMP language committee. The language committee is currently discussing a long list of possible items for a future OpenMP 3.1 or 4.0 specification, including but not limited to my favorites, Composability (especially for C++) and Performance on cc-NUMA systems. Bronis de Supinski, the recently appointed Chair of the OpenMP Language Committee, will give a talk on the current activities of the LC and what the future of OpenMP might look like – I hope the slides will be made public soon after the talk. Right before the conference there will also be a one-day tutorial for all people interested in learning OpenMP (mainly given by Ruud van der Pas – strongly recommended).

High Performance Computing Symposium 2009 (HPCS) in Kingston, Canada. HPCS is a multidisciplinary conference that focuses on research involving High Performance Computing, and this year it takes place in Kingston. I’ve never been to that conference series, so I am pretty curious what it will be like. Attached to the conference are a couple of workshops, including Using OpenMP 3.0 for Parallel Programming on Multicore Systems – run again by Ruud van der Pas and us – and Parallel Programming in Visual Studio 2008 on Windows HPC Server 2008, organized by us as well. Here in Aachen, the interest in our Windows-HPC compute service is still growing nicely, and thus we usually have around 50 new participants in our twice-yearly training events. The HPCVL people explicitly asked to cover parallel programming on Windows in the OpenMP workshop, so we separated this aspect out into a workshop of its own to serve it well. The workshop program can be found here.

International Supercomputing Conference (ISC 2009) in Hamburg, Germany. ISC titles itself as Europe’s premier HPC event – while this is probably true, it is of course smaller than the SC events in the US, but usually better organized. Without question you will find numerous interesting exhibits and can listen to several talks (mostly by invited speakers), so please excuse my self-marketing in pointing to the Jülich Aachen Research Alliance (JARA) booth in the research space, where we will show an interactive visualization of a large-scale numerical simulation (damage of blood cells by a ventricular device – pretty cool) as well as give an overview of our research activities focused on Shared-Memory parallelization (we will distribute OpenMP syntax references again). If you are interested in HPC software development on Windows, feel invited to stop by at our demo station at the Microsoft booth, where we will have many demos regarding HPC Application Development on Windows (Visual Studio, Allinea DDTlite and Vampir are confirmed, maybe more …). And if you are closely monitoring the HPC market, you have probably heard about ScaleMP already, the company aggregating multiple x86 systems into a single (virtual) system over InfiniBand – obviously very interesting for Shared-Memory parallelization. If you are interested, you can hear about our experiences with this architecture for HPC.

If you want to meet up during any of these events just drop me an email.