Tag Archives: Windows-HPC UG

Event Announcement: Microsoft Azure Compute Tutorial

This time I am going to announce an event unlike any we have held before: on November 5th, 2012, we will conduct a Microsoft Azure Compute Tutorial with speakers from the European Microsoft Innovation Center in Aachen. What we mean by “compute” is not quite what HPC people might think of as computing. The rationale is the following:

Cloud computing enables the use of computing resources provided as a service via a network (e.g. the internet). One such cloud platform is Microsoft’s Windows Azure. It can be used to build, deploy and manage applications in the cloud, which in this case consists of Microsoft-managed data centers. This workshop will introduce the Microsoft Azure facilities with a focus on compute services. In the morning of the tutorial, we will introduce you to Azure computing, storage and services. For interested participants, there will be a hands-on session after lunch, in which an example application will be created step by step. More details and the link for registration can be found at the event website.

Several Event Announcements

These are just some announcements of upcoming events in which I am involved to varying degrees. The first two will take place at RWTH Aachen University and attendance is free of charge; the third is part of the SC12 conference in Salt Lake City, UT in the US.

Tuning for bigSMP HPC Workshop – aixcelerate (October 8th – 10th, 2012). The number of cores per processor chip is increasing. Today’s “fat” compute nodes are equipped with up to 16 eight-core Intel Xeon processors, resulting in 128 physical cores, with up to 2 TB of main memory. Furthermore, special solutions like a ScaleMP vSMP system may consist of 16 nodes with 4 eight-core Intel Xeon processors each and 4 TB of accumulated main memory, scaling the number of cores per machine even further, up to 1024. While message passing with MPI is the dominating paradigm for parallel programming in the domain of high performance computing (HPC), with the growing number of cores per cluster node the combination of MPI with shared memory programming is gaining importance. The efficient use of these systems also requires NUMA-aware data management. Exploiting these different levels of parallelism – shared memory programming within a node and message passing across the nodes – while obtaining good performance is becoming increasingly difficult (see the sketch below). This tuning workshop will cover in detail tools and methods to program big SMP systems. The first day will focus on OpenMP programming on big NUMA systems, the second day on Intel performance tools as well as the ScaleMP machine, and the third day on hybrid parallelization. Attendees are kindly requested to prepare and bring their own code, if applicable. If you do not have your own code but are interested in the presented topics, you may work on prepared exercises during the lab time (hands-on). Good knowledge of MPI and/or OpenMP is recommended. More details and the registration link can be found at the event website.
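For readers who have not combined the two models before, here is a minimal sketch of the hybrid MPI + OpenMP pattern the workshop addresses: MPI ranks distribute the work across nodes, while OpenMP threads process each node’s share in shared memory. This is my own illustrative example (the toy workload and all names are made up), not workshop material:

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nprocs;

    /* Request a thread-tolerant MPI library: MPI_THREAD_FUNNELED means
     * only the master thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double local_sum = 0.0, global_sum = 0.0;

    /* Shared-memory parallelism within the node. On a NUMA system,
     * initializing data with the same thread layout that later computes
     * on it ("first touch") is the key to good data locality. */
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = rank; i < 10000000; i += nprocs)
        local_sum += 1.0 / (1.0 + (double)i);

    /* Message passing across the nodes. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f (%d ranks x %d threads)\n",
               global_sum, nprocs, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

Built with something like `mpicc -fopenmp`, this would typically run with one MPI rank per node and one OpenMP thread per core within each node.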

OpenACC Tutorial Workshop (October 11th to 12th, 2012). OpenACC is a directive-based programming model for accelerators which delegates the responsibility for low-level (e.g. CUDA or OpenCL) programming tasks to the compiler. Using the OpenACC API, the programmer can easily offload compute-intensive loops to an attached accelerator (a flavor of which is sketched below). The open industry standard OpenACC was introduced in November 2011 and supports accelerating regions of code in standard C, C++ and Fortran. It provides portability across operating systems, host CPUs and accelerators. Up to now, OpenACC compilers are available from Cray, PGI and CAPS. During this workshop, you will work with PGI’s OpenACC implementation on Nvidia Quadro 6000 GPUs. This OpenACC workshop is divided into two parts (with separate registrations!). In the first part, we will give an introduction to the OpenACC API while focusing on GPUs; it is open to everyone who is interested in the topic. In contrast to the first part, the second part will not contain any presentations or hands-on sessions. For the second day, we invite all programmers who have their own code and want to try accelerating it on a GPU using OpenACC, with the help of our team members and Nvidia staff. More details and the registration link can be found at the event website.
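To give a flavor of the programming model: the sketch below offloads a simple vector update to the GPU with a single directive, leaving the CUDA-level details to the compiler. This is my own minimal example (the saxpy-style loop is illustrative), not tutorial material:

```c
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    float a = 2.0f;

    for (int i = 0; i < N; ++i) {
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    /* copyin: x is transferred to the accelerator; copy: y is
     * transferred in and copied back afterwards. The compiler turns
     * the loop into a GPU kernel. */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; ++i)
        y[i] = a * x[i] + y[i];

    printf("y[42] = %f\n", y[42]);
    return 0;
}
```

With PGI’s compiler this would be built with something like `pgcc -acc`; without the accelerator flag the same source still compiles as plain serial C, which is exactly the portability argument.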

Advanced OpenMP Tutorial at SC12 (November 12th, 2012). With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported and easy-to-use shared-memory model. Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but rather from the lack of depth with which it is employed. Our “Advanced OpenMP Programming” tutorial addresses this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance. While we quickly review the basics of OpenMP programming, we assume attendees understand basic parallelization concepts and will easily grasp those basics. We discuss how OpenMP features are implemented and then focus on performance aspects, such as data and thread locality on NUMA architectures, false sharing, and private versus shared data. We discuss language features in depth, with emphasis on features recently added to OpenMP such as tasking (see the sketch below). We close with debugging, compare various tools, and illustrate how to avoid correctness pitfalls. More details can be found on the event website.
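As an example of the kind of recently added feature the tutorial covers, here is a minimal tasking sketch (OpenMP 3.0) of my own making: a recursive Fibonacci in which each recursive call becomes a task.

```c
#include <omp.h>
#include <stdio.h>

long fib(int n)
{
    long a, b;
    if (n < 2) return n;   /* real codes also cut off to a serial
                              version below some threshold to keep
                              the task-creation overhead in check */

    #pragma omp task shared(a)   /* a and b must be shared so the */
    a = fib(n - 1);              /* child tasks can write to them */
    #pragma omp task shared(b)
    b = fib(n - 2);
    #pragma omp taskwait         /* wait for both child tasks */

    return a + b;
}

int main(void)
{
    long result;

    #pragma omp parallel
    {
        #pragma omp single       /* one thread spawns the initial
                                    tasks, the whole team executes
                                    them */
        result = fib(30);
    }
    printf("fib(30) = %ld\n", result);
    return 0;
}
```

The data-sharing clauses are exactly the kind of detail the tutorial digs into: without shared(a) and shared(b), the variables would be firstprivate in the child tasks and the results would be lost.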

On the future of HPC on Windows

Just a few weeks ago, during SC11, Microsoft released two new or updated HPC products, namely the Windows Azure HPC Scheduler and Windows HPC Server 2008 R2 SP3. However, what I saw and heard during the last few months as well as during SC11 did not give me the best feeling about the future of Microsoft’s HPC Server product. This post covers my impressions and thoughts not only on the product, but also on doing HPC on the Windows platform in general.

What disturbed me a little was the absence of any roadmap presentation. Well, over the last few years Windows HPC Server has clearly become mature enough that it lacks no significant feature necessary for deployment and use at a medium-sized HPC installation. However, Microsoft publicly outlining a product roadmap with several key features always felt right, and its absence at SC11 has been noted by the community. Furthermore, they quietly killed their Dryad project (including LINQ to HPC), which was prominently displayed at SC10, now betting on a yet-to-be-released distribution of Apache Hadoop for Windows HPC Server and Azure. Finally, there have been several business restructuring activities inside Microsoft. For example, here in Germany Microsoft apparently shut down the HPC group and moved (some of) the people under the umbrella of Azure. From what I heard, all these activities caused some confusion in the community about how Microsoft sees the future of the Windows HPC Server product and how much support and innovation may be expected from the company in this regard.

What Microsoft now talks a lot about is the Azure integration. If you followed the development of Windows HPC Server up to release R2 SP3, you could clearly see this coming. From a technology point of view, I am impressed. However, I am not convinced yet, for several reasons – the most important one being that the offering is much too expensive for our application needs. Of course we are following what is going on regarding Clouds and HPC, and in fact in one project we are extending an application to make use of both on-premise and off-premise compute power based on availability (and maybe even price). But for the time being, our local clusters, including the one running Windows, will clearly dominate (or, as we Germans say, set the tone).

Finally, I am missing a clear picture of HPC-related improvements in the Windows Server roadmap. Just recently we added a frontend system with 160 (logical) cores, that is, 8 sockets and 512 GB of memory. Windows just works on such a machine – but it could do better (see the sketch below). It could serve HPC applications better. And given that next-generation ordinary (HPC) systems will probably have a similar core count, Windows really has to serve applications better on such machines in order to stay competitive. Furthermore, smooth and stable integration of accelerators – be it GPGPUs, or something different but similar in spirit – will be at least as important.

Windows Task-Manager with 160 cores (8 sockets)
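Part of what makes such a machine special on Windows: logical processors beyond 64 are split into “processor groups”, and by default a process is assigned to just one group, so an application (or its runtime) must spread its threads across groups explicitly to use the whole machine. Here is a minimal sketch of the relevant Win32 calls (available since Windows 7 / Server 2008 R2); this is my own illustration, not code from any product:

```c
#define _WIN32_WINNT 0x0601   /* processor-group APIs require
                                 Windows 7 / Server 2008 R2 or later */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    WORD groups = GetActiveProcessorGroupCount();
    printf("processor groups: %u\n", (unsigned)groups);

    for (WORD g = 0; g < groups; ++g)
        printf("  group %u: %lu logical processors\n",
               (unsigned)g, GetActiveProcessorCount(g));

    /* To run on a group other than the default one, a thread must set
     * its group affinity explicitly. */
    GROUP_AFFINITY ga = {0};
    ga.Group = (WORD)(groups - 1);                   /* pick the last group */
    DWORD n  = GetActiveProcessorCount(ga.Group);
    ga.Mask  = (n >= 64) ? ~(KAFFINITY)0
                         : (((KAFFINITY)1 << n) - 1); /* all CPUs in group */

    if (!SetThreadGroupAffinity(GetCurrentThread(), &ga, NULL))
        fprintf(stderr, "SetThreadGroupAffinity failed: %lu\n",
                GetLastError());
    return 0;
}
```

An HPC application, or the MPI/OpenMP runtime on its behalf, has to do this kind of bookkeeping itself – which is part of why I say Windows could serve HPC applications on such machines better.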

I will stop here. Our user base is clearly showing a demand for Windows HPC Server-based clusters, and in fact the demand is growing. Combining my personal opinion with the feedback and opinions I got from the (German) community: Microsoft has to improve its communication regarding Windows HPC Server. It is time for a clear statement on the future of the product and the direction it will take.

Dan Reed on Technical (Cloud) Computing with Microsoft: Vision

During ISC 2011 in Hamburg I got the opportunity to talk to Microsoft’s Dan Reed, Corporate Vice President, Technology Policy and Extreme Computing Group. It was a very nice discussion that soon turned towards HPC in the Cloud, touching the topics of Microsoft’s Vision, Standards, and Education. Karsten Reineck from the Fraunhofer SCAI was also present; he has already put an excerpt of the interview on his blog (in German). The following is my recapitulation of the discussion, pointing out his most important statements – part 1 of 2.

Being the person I am, I started the talk with a nasty question on the pricing scheme of Azure (and similar commercial offerings), claiming that it is pretty expensive both per CPU hour and per byte of I/O. Just recently we did a full cost accounting to calculate the price per CPU hour of our HPC service, and we found ourselves to be cheaper by a notable factor.

Dan Reed: Academic sites of reasonable size, such as yours, can do HPC cheaper because they are utilizing the hardware on a 24×7 basis. Traditionally, they do not offer service-level agreements on how fast any job starts; they just queue the jobs. Azure is different, and it has to be: one can get the resources available in a guaranteed time frame. As of today, HPC in the Cloud is interesting for burst scenarios where the on-premise resources are not sufficient, or for people for whom traditional HPC is too complex (regardless of Windows vs. Linux – just maintaining an on-premise cluster versus buying HPC time when it is needed).

I am completely in line with that. I expressed my belief that we will need (and have!) academic HPC centers for the foreseeable future. Basically, we are just a (local) HPC cloud service provider for our users – whom, of course, we call customers internally. To conclude this topic, he said something very interesting:

Dan Reed: In industry, the cost is not the main constraint, the skill is.

Ok, since we are offering HPC services on Linux and Windows, and since there was quite some buzz around the future of the Windows HPC Server product during ISC, I asked where the product is heading in the future.

Dan Reed: The foremost goal is to better integrate and support cloud scenarios. For example, currently there are two schedulers, the Azure scheduler and the traditional Windows HPC Server scheduler. Basically, that is one scheduler too many. Regarding improvements in Azure, we will see support for high-speed interconnects soon.

Azure support for MPI programs has just been introduced with Windows HPC Server 2008 R2 SP2 (a long product name, hm?). By the way, he assumes that future x-Gigabit Ethernet will be favoured over InfiniBand.

For us it is clearly interesting to see where Azure and other similar offerings are heading, and we can learn something from that for our own HPC service. For example, we already offer service-level agreements for some customers under some circumstances. However, on-premise resources will play the dominating role in academic HPC for the foreseeable future. Thus I asked specifically about the future of the Windows HPC Server product.

Dan Reed: Microsoft, as a company, is strongly committed to a service-based business model. This has to be understood in order to realize what is driving some of the shifts we are seeing right now, both in the products and in the organization itself. The focus on Cloud Computing elevated the HPC Server team; the Technical Computing division is now part of the Azure organization. The emphasis of future product development is thus clearly shifting towards cloud computing, that is true, although the product will continue to be improved and features will be added for a few releases (already in planning).

Well, as an MVP for Windows HPC Server and a member of the Customer Advisory Board, I know something about the planning of upcoming product releases, so I believe Microsoft is still committed to the product (as opposed to some statements made by other people during ISC). However, I do not see Windows Server itself moving in the right direction for HPC. Obviously HPC is just a niche market for Microsoft, but better support for multi- and many-core processors and hierarchical memory architectures (NUMA!) would be desirable. Asking (again) about that, I got the following answer:

Dan Reed: Windows HPC Server is derived from Windows Server, which itself is derived from Windows. So, if you want to know where Windows HPC Server is going with regard to its base technologies, you have to see (and understand) where Windows itself is going.

Uhm, ok, so we had better take a close look at Windows 8 :-). Regarding Microsoft’s way towards Cloud Computing, I will write a second blog post later to cover more of our discussion on the topics of Standards and Education. Since this blog post is on the Vision, I just want to share a brief discussion we had when heading back to the ISC show floor. I asked him for his personal (!) opinion on the race towards Exascale. Will we get an Exascale system by (the end of) 2019?

Dan Reed: Given the political will and money, we will overcome the technical issues we are facing today.

Ok. Given that someone has that will and the money, would such a system be usable? Do you see any single application for such a system?

Dan Reed: Big question mark. I would rather see money being invested in solving the software issues. If we get such powerful systems, we have to be able to make use of them for more than just a single project.

Again, I am pretty much in line with that. By no means am I claiming to fully understand all challenges and opportunities of Exascale systems, but what I do see are the challenges of making use of today’s Petaflop systems with applications other than LINPACK, especially in the domain of Computational Engineering. Taking the opportunity, my last question was: Who would you guess will have the political will and the money to build an Exascale system first – the US, Europe, or rather Asia?

Dan Reed: Uhm. If I had to bet, I would bet on Asia. And if such a system comes from Asia, all critical system components will be designed and manufactured in Asia.

Interesting. And clearly a challenge.

Recap of the 4th Meeting of the German Windows-HPC User Group

The 4th Meeting of the German Windows-HPC User Group took place on March 31st and April 1st in Karlsruhe, hosted by the Karlsruhe Institute of Technology (KIT). The event was attended by over 70 participants from industry and academia, and was sponsored by Bull, COMSOL, EMCL @ KIT, Intel, Microsoft and NVIDIA.

After a brief welcome address by the organizers (Wolfgang Dreyer from Microsoft and myself), Rudolf Lohner (KIT) gave an overview of the Steinbuch Centre for Computing (SCC) at the KIT. He was followed by the keynote talk from Microsoft, given by Xavier Pillons (Microsoft Corporation) on Windows HPC Server 2008 R2 and Azure as well as Dryad/DryadLINQ. We specifically asked for these two topics, and it turned out that Cloud Computing as well as data-intensive computing was the subject of many discussions during this event. After that, Axel Köhler (now NVIDIA) gave a glimpse into the current HPC developments at NVIDIA, including what a purely accelerator-driven supercomputer might look like. He was followed by Dagmar Kremer (BCC), who presented their solution for real-time supercomputing on the desktop using Excel. This topic was also on the agenda by popular demand; apparently the combination of the two keywords “Excel” and “HPC” attracts a lot of interest. The first day was closed by Achim Streit (KIT), who presented his vision of HPC and the Cloud, outlining current projects around HPC as a Service (HPCaaS) for technical computing.

The evening event took place in the ZetKaeM restaurant, after touring the Media Museum, the world’s first and only museum for interactive art. We all experienced some funny exhibits :-). Such an evening event serves the role of a user group well – leading to discussions and an exchange of thoughts over a good glass of wine.

The second day started with a keynote address from Vincent Heuveline (KIT) on HPC and hardware-aware computing at the EMCL @ KIT. He was followed by Joachim Redmber (Bull), presenting the Bull way of supercomputing. Representing a Windows-HPC user, Shiqing Fan (HLRS) outlined their work on implementing and integrating Open MPI with Windows HPC environments; apparently they can outperform Microsoft MPI in some benchmarks. Horst Schwichtenberg (Fraunhofer SCAI) gave an example of Excel HPC integration via WCF. As another user contribution, Stefan Truthähn and Martin Steinert (both hhpberlin Ingenieure für Brandschutz GmbH) gave a vivid talk on how they came to use Windows-HPC and HPC in general (more by accident than by master plan ;-) ) and how they see the future of their CFD computations on on-premise as well as Cloud HPC offerings. They were followed by Michael Klemm (Intel), giving an overview of Intel technology for HPC on Windows. Henrik Nordborg (University of Applied Sciences in Rapperswil) from the Microsoft Technical Computing Innovation Center (MICTC) outlined where he sees an increasing demand for expertise in technical computing (and why) and gave a report on the first activities of the MICTC. The second and final day of the meeting was closed by a talk given by Michael Wirtz and myself on our experience and setup for Windows-HPC with 1000+ users.

All in all, I think this meeting was successful, and so far we have received positive feedback from the attendees. We plan to have the next meeting in March or April 2012 at a yet-to-be-decided location.

Upcoming Events in March 2011

Let me point you to some HPC events in March 2011.

3rd Parallel Programming in Computational Engineering and Science (PPCES) Workshop. This event will continue the tradition of previous annual week-long events taking place in Aachen every spring since 2001, this year from March 21st to March 25th. This year the agenda is – as always – a little different from the previous one. The week begins with a series of overview presentations on Monday afternoon, in which we are very happy to announce the upcoming RWTH Compute Cluster to be delivered by Bull. Throughout the week, we will cover serial and parallel programming using OpenMP and MPI in Fortran and C / C++ as well as performance tuning, addressing both Linux and Windows platforms. Due to the positive experience of last year, we are happy to present a renowned speaker to give an introduction to GPGPU architectures and programming on Friday: Michael Wolfe from PGI. All further information can be found at the event website: http://www.rz.rwth-aachen.de/ppces.

4th Meeting of the German Windows-HPC User Group. The fourth meeting of the German Windows-HPC User Group will take place in Karlsruhe on March 31st and April 1st, kindly hosted by the KIT. As in previous years, we will learn about and discuss Microsoft’s current and future products, and hear users present their (good and not-so-good) experiences in doing HPC on Windows. This year, we will have an Expert Discussion Panel, for which the audience is invited to ask (tough) questions to fire up the discussion.