Tag Archives: .NET

Event Announcement: Microsoft Azure Compute Tutorial

This time I am announcing an event for which we have never had a similar predecessor: on November 5th, 2012, we will conduct a Microsoft Azure Compute Tutorial with speakers from the European Microsoft Innovation Center in Aachen. What we mean by “compute” is not quite what HPC people might think of as computing. The rationale is the following:

Cloud computing enables the use of computing resources provided as a service via a network (e.g. the internet). One such cloud platform is Microsoft’s Windows Azure. It can be used to build, deploy and manage applications in the cloud, which in this case consists of Microsoft-managed data centers. This workshop will introduce Microsoft Azure facilities with a focus on compute services. In the morning of the tutorial we will introduce you to Azure computing, storage and services. For interested participants, there will be a hands-on session after lunch, in which an example application is created step by step. More details and the link for registration can be found at the event website.

Dan Reed on Technical (Cloud) Computing with Microsoft: Vision

During ISC 2011 in Hamburg I got the opportunity to talk to Microsoft’s Dan Reed, Corporate Vice President, Technology Policy and Extreme Computing Group. It was a very nice discussion that soon turned towards HPC in the Cloud, touching the topics of Microsoft’s Vision, Standards, and Education. Karsten Reineck from the Fraunhofer SCAI was also present; he has already put an excerpt of the interview on his blog (in German). The following is my recapitulation of the discussion, pointing out his most important statements – part 1 of 2.

Being the person I am, I started the talk with a nasty question on the pricing scheme of Azure (and similar commercial offerings), claiming that it is pretty expensive both per CPU hour and per byte of I/O. Just recently we did a full cost accounting to calculate the price per CPU hour of our HPC service, and we found ourselves to be cheaper by a notable factor.

Dan Reed: Academic sites of reasonable size, such as yours, can do HPC cheaper because they are utilizing the hardware on a 24×7 basis. Traditionally, they do not offer service-level agreements on how fast any job starts; they just queue the jobs. Azure is different, and it has to be: one can get the resources within a guaranteed time frame. As of today, HPC in the Cloud is interesting for burst scenarios where the on-premise resources are not sufficient, or for people for whom traditional HPC is too complex (regardless of Windows vs. Linux, just maintaining an on-premise cluster versus buying HPC time when it is needed).

I am completely in line with that. I expressed my belief that we will need (and have!) academic HPC centers for the foreseeable future. Basically, we are just a (local) HPC cloud service provider for our users – whom of course we call customers, internally. To conclude this topic, he said something very interesting:

Dan Reed: In industry, the cost is not the main constraint, the skill is.

Ok, since we are offering HPC services on both Linux and Windows, and since there was quite some buzz around the future of the Windows HPC Server product during ISC, I asked where the product is heading.

Dan Reed: The foremost goal is to better integrate and support cloud issues. For example, currently there are two schedulers, the Azure scheduler and the traditional Windows HPC Server scheduler. Basically, that is one scheduler too many. Regarding improvements in Azure, we will see support for high-speed interconnects soon.

Azure support for MPI programs has just been introduced with Windows HPC Server 2008 R2 SP2 (a long product name, hm?). By the way, he assumes that future x-GigaBit Ethernet will be favoured over InfiniBand.

For us it is clearly interesting to see where Azure, and other similar offerings, are heading, and we can learn something from that for our own HPC service. For example, we already offer service-level agreements for some customers under some circumstances. However, on-premise resources will play the dominating role in academic HPC for the foreseeable future. Thus I am interested in the future of the product and asked specifically about the future of Windows HPC Server.

Dan Reed: Microsoft, as a company, is strongly committed to a service-based business model. This has to be understood in order to realize what is driving some of the shifts we are seeing right now, both in the products and in the organization itself. The focus on Cloud Computing elevated the HPC Server team; the Technical Computing division is now part of the Azure organization. The emphasis of future product development thus is clearly shifting towards cloud computing, that is true, although the product will continue to be improved and features will be added for a few releases (already in planning).

Well, as an MVP for Windows HPC Server, and a member of the Customer Advisory Board, I know something about the planning of upcoming product releases, so I believe Microsoft is still committed to the product (as opposed to some statements made by other people during ISC). However, I do not see Windows Server itself moving in the right direction for HPC. Obviously HPC is just a niche market for Microsoft, but better support for multi- and many-core processors and hierarchical memory architectures (NUMA!) would be desirable. Asking (again) about that, I got the following answer:

Dan Reed: Windows HPC Server is derived from Windows Server, which itself is derived from Windows. So, if you want to know where Windows HPC Server is going with regard to its base technologies, you have to see (and understand) where Windows itself is going.

Uhm, ok, so we had better take a close look at Windows 8 :-). Regarding Microsoft’s way towards Cloud Computing, I will write a second blog post later to cover more of our discussion on the topics of Standards and Education. Since this blog post is on the Vision, I just want to share a brief discussion we had when heading back to the ISC show floor. I asked him for his personal (!) opinion on the race towards Exascale. Will we get an Exascale system by (the end of) 2019?

Dan Reed: Given the political will and money, we will overcome the technical issues we are facing today.

Ok. Given that someone has that will and the money, would such a system be usable? Do you see any single application for such a system?

Dan Reed: Big question mark. I would rather see money being invested in solving the software issues. If we get such powerful systems, we have to be able to make use of them for more than just a single project.

Again, I am pretty much in line with that. By no means am I claiming to fully understand all challenges and opportunities of Exascale systems, but what I do see are the challenges of making use of today’s Petaflop systems with applications other than LINPACK, especially from the domain of Computational Engineering. Taking the opportunity, my last question was: who do you guess would have the political will and the money to build an Exascale system first – the US, Europe, or rather Asia?

Dan Reed: Uhm. If I had to bet, I would bet on Asia. And if such a system comes from Asia, all critical system components will be designed and manufactured in Asia.

Interesting. And clearly a challenge.

Windows HPC Server 2008 R2 is ready

All members of the Microsoft Technology Adoption Program (TAP) for Windows HPC Server 2008 R2 just got mail that build number 2369 is ready for release. It is available via MS Connect already and will be made available via the usual channels in the coming days and weeks. We have been trying various builds throughout our participation in the TAP program – with varying success – and got a good overview of the new features in this product. As usual there are some features I really like and have been waiting for, and some features of questionable value.

The new product, both the HPC Pack 2008 R2 and the Windows Server 2008 R2 HPC Edition, will be available in two editions: Express (for traditional HPC usage including MS-MPI, the Job Scheduler and the Admin features you already know) and Enterprise (for SOA and Excel-based workloads, including everything from Express as well as the new Excel and Workstation Cycle Stealing functionality). The HPC Pack 2008 R2 will also be available as a Workstation-only edition (giving you just the Cycle Stealing functionality). I still have no clue into which version our licenses with Software Assurance will be converted; let’s hope for Enterprise :-).

What is new for traditional HPC users (such as our center)?

  • The MPI stack (MS-MPI) has been improved and, for example, has been equipped with several environment variables that allow for more fine-grained control of the inner workings, e.g. which protocol scheme to use depending on the message size (see the sketch after this list). Together with general performance improvements this offers some options for further performance tuning as well as analysis of the MPI behaviour.
  • The option to boot compute nodes via iSCSI from the network has been introduced. What you need is a suitable iSCSI provider (ask your storage vendor, MS will offer an iSCSI provider development kit) and a suitable volume, Windows HPC Server 2008 R2 is intended to do the management for you. This is the feature I (personally) was most interested in. It took us until the appearance of the release candidate to get it working well with our NetApp installation, so our experience with this is still limited but I am very keen on seeing how this behaves with heavy job loads.
  • Improved diagnostics have been made available. Especially on the network side, the options to (automatically) check the health of your cluster have been significantly improved, along with possibilities to test whether compute nodes are ok to run ISV codes. For the latter, we have written a lot of tests on our own, and it took us a lot of time to get them right in detecting the most prominent issues with ISV codes. Providing well-integrated and extensive diagnostics is a great opportunity for ISVs to save their users from a lot of pain!
  • In addition there are several other things, like new scheduling policies and an improved Admin Console. The new Windows HPC Server 2008 R2 supports 256 threads (I think they mean cores), instead of 64. It became significantly easier to run pre- and post-job scripts, enable email notifications when the job status changes, and things like that. Once the R2 cluster is in production I intend to share our experiences with this…
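
For illustration, this is the kind of tuning the first bullet refers to (a sketch only: I quote the variable names from my reading of the MS-MPI documentation, so please double-check them against the official list before relying on them). The switch between the eager and the rendezvous protocol, for instance, can be moved by setting the respective eager limits when submitting a job:

REM Sketch: move the eager/rendezvous threshold to 4 KB for shared memory and NetworkDirect
REM (variable names are my assumption; verify against the MS-MPI documentation)
job submit /numnodes:4 mpiexec -env MSMPI_SHM_EAGER_LIMIT 4096 -env MSMPI_ND_EAGER_LIMIT 4096 myapp.exe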

A special focus of this release lies on support for “emerging workloads” – this is how Microsoft names it – based on Enterprise SOA, Excel and Desktop Cycle Stealing. I did not look into the SOA improvements so far, therefore no comment on that. A better integration of Excel with the HPC Server is very welcome, although we do not (yet) have real users for this in our center. You will be able to run distributed instances of Excel 2010 on a cluster, where every instance computes an individual workbook (with a different dataset), or you can offload the computation of user-defined functions of Excel 2010 to the cluster. In the past I (and a few others) experimented with using Excel to steer computations, for example optimizing a kernel with various parameters, and I am curious whether there will be more use of that in the future by directly attaching a computation to Excel.

Well, and then there is Desktop Cycle Stealing. The idea (as far as I got it) is to use Windows 7-based workstations to run jobs, without the tight integration into a cluster that regular compute nodes have. Admittedly my view is shaped by what we do in our center, but I do not think using desktops makes a lot of sense for what most people call HPC. We design our clusters in a way that applications run efficiently on them, e.g. by using special interconnects. The network connection to a workstation, even if it is GE, is comparably weak. Compute nodes are centrally managed, equipped with efficient cooling, etc. – workstations are distributed and often not reliable. There may be some applications that can profit from getting some cycles here and there. But promising desktop cycle stealing to save some money for HPC-type ISV codes will not result in satisfied users, since these codes just do not run efficiently on a weakly coupled network of inhomogeneous machines. JM2C, as always I am happy to learn about counter examples.

Book Review: C# 2008 and 2005 Threaded Programming (Beginner’s Guide)

Just recently – in May 2009 – I gave two lectures on Multithreading with C# for Desktop Applications. I found there are quite a few books available that cover the .NET Thread class when talking about Windows programming in general, but the book C# 2008 and 2005 Threaded Programming: Beginner’s Guide is only about, well, Multithreading with C#. The subtitle Exploit the power of multiple processors for faster, more responsive software also states that both algorithmic parallelization and the separation of computation from a graphical user interface (GUI) are covered here, and this is exactly what I was looking for. The book is clearly marked as a Beginner’s Guide and is well-written in that regard, so if you already know about Multithreading and just want to learn how to do it with C#, you might find that the book proceeds too slowly. If you are uncertain or clearly new to this subject, then this book might do its job very well for you.

Chapters one and two start with a brief motivation of why the shift towards multicore processors has such an important influence on how software has to be designed and written nowadays, and also contain a brief description of the typical pitfalls you may run into when parallelizing software. Chapter three describes the BackgroundWorker component, which is the simplest facility to separate the computation from the user interface in order to keep it responsive. Chapters four and five cover the most important aspects of the Thread class as well as how to use Visual Studio to debug multithreaded programs. Chapters six to nine describe how to apply parallelization to a range of common problems and design cases, for example how object-oriented features of C# and the garbage collector of .NET play along with the Thread class, and what to watch out for when doing Input/Output and Data Access. Chapter ten explains in detail how GUIs and Threads work together (or not) and how to design your GUI and your application to, for example, report progress from threads to the GUI. When doing so there are some rules one has to obey, and I found the issues that I was not aware of before very well explained. Chapter eleven gives a brief overview of the .NET Parallel Extensions – which will be part of .NET 4.0 – such as the Parallel class and PLINQ. The final chapter twelve tries to put all things together into a single application.
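
To give an impression of what chapter three is about, here is a minimal sketch of the BackgroundWorker pattern (my own example, not taken from the book):

using System.ComponentModel;
using System.Windows.Forms;

public class MainForm : Form
{
   private readonly BackgroundWorker worker = new BackgroundWorker();

   public MainForm()
   {
      //DoWork runs on a thread-pool thread, so the UI stays responsive:
      worker.DoWork += (s, e) =>
      {
         double sum = 0;
         for (int i = 1; i < 100000000; ++i) sum += 1.0 / i; //some lengthy computation
         e.Result = sum;
      };
      //RunWorkerCompleted is raised back on the UI thread, so touching controls here is safe:
      worker.RunWorkerCompleted += (s, e) => Text = "Result: " + e.Result;
      worker.RunWorkerAsync();
   }
}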

Most aspects of Multithreading with C# are introduced by first stating a problem / motivation (with respect to the example code), then showing the solution in C# code and discussing its effects, and finally explaining the concept in some more detail, if needed. The two example codes, a text message encryption and decryption software and an image analysis tool, are consistently extended with the new features that have been introduced. I personally did not like that there is so much example code shown in the book, although people new to Multithreading might find studying the source code helpful. With its strong focus on explaining and discussing examples, the book is not well-suited as a reference, but it does not claim to be one. Actually I think that once you are familiar with certain aspects of Multithreading with C#, MSDN does a good job of serving as a reference.

The book is published by Packt Publishing and was released in January 2009. The price of about 30 Euro for about 420 pages at amazon.de in Germany is affordable for students, I think. Thanks go to Radha Iyer at Packt Publishing for making this book available to me in time.


A performance tuning tale: Optimizing SMXV (sparse Matrix-Vector-Multiplication) on Windows [part 1.5 of 2]

Although it is high time to deliver the second part of this blog post series, I decided to squeeze in one additional post, which I named part “1.5”, as it will cover some experiments with SMXV in C#. Since I am currently preparing a lecture named Multi-Threading for Desktop Systems (it will be held in German, though) in which C# plays an important role, we took a closer look into how parallelism has made its way into the .NET framework versions 3.5 and 4.0. The final post will then cover some more tools and performance experiments (especially regarding cc-NUMA architectures), with the focus back on native coding.

First, let us briefly recap how the SMXV was implemented and examine how this can look in C#. As explained in my previous post, the CRS format stores just the nonzero elements of the matrix in three vectors: the val-vector contains the values of all nonzero elements, the col-vector contains the column indices of each nonzero element, and the row-vector points to the first nonzero element index (in val and col) for each matrix row. Having one class to represent a CRS matrix and using an array of doubles to represent a vector, the SMXV operation, encapsulated by operator*, can be implemented like this, independent of whether you use managed or unmanaged arrays:

public static double[] operator *(matrix_crs lhs, double[] rhs)
{
   double[] result = new double[lhs.getNumRows()];
   for (long i = 0; i < lhs.getNumRows(); ++i)
   {
      double sum = 0;
      long rowbeg = lhs.row(i);
      long rowend = lhs.row(i + 1);
      for (long nz = rowbeg; nz < rowend; ++nz)
         sum += lhs.val(nz) * rhs[ lhs.col(nz) ];
      result[i] = sum;
   }
   return result;
}
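
To make the CRS layout concrete, here is a tiny example of my own (not part of the original experiment code):

// A small 3x3 example matrix (0-based indexing):
//     | 10  0  2 |
// A = |  0  5  0 |
//     |  1  0  7 |
double[] val = { 10, 2, 5, 1, 7 }; // values of the nonzeros, row by row
long[]   col = { 0, 2, 1, 0, 2 };  // column index of each nonzero
long[]   row = { 0, 2, 3, 5 };     // index of the first nonzero of each row
// Row i spans the index range [ row[i], row[i+1] ) in val and col,
// which is exactly what rowbeg and rowend compute in the loop above.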

We have several options to parallelize this code, which I will present and briefly discuss in the rest of this post.

Threading. In this approach, the programmer is responsible for managing the threads and distributing the work onto the threads. It is not too hard to implement a static work-distribution for any given number of threads, but implementing a dynamic or adaptive work-distribution is a lot of work and also error-prone. In order to implement the static approach, we need an array of threads, have to compute the iteration chunk for each thread, put the threads to work and finally wait for the threads to finish their computation.

//Allocate the result vector (as in the serial version above):
double[] result = new double[lhs.getNumRows()];
//Compute chunks of work:
Thread[] threads = new Thread[lhs.NumThreads];
long chunkSize = lhs.getNumRows() / lhs.NumThreads;
//Start threads with respective chunks:
for (int t = 0; t < threads.Length; ++t)
{
   threads[t] = new Thread(delegate(object o)
   {
      int thread = (int)o;
      long firstRow = thread * chunkSize;
      long lastRow = (thread + 1) * chunkSize;
      if (thread == lhs.NumThreads - 1) lastRow = lhs.getNumRows();
      for (long i = firstRow; i < lastRow; ++i)
      { /* ... SMXV ... */ }
   });
   //Start the thread and pass the ID:
   threads[t].Start(t);
}
//Wait for all threads to complete:
for(int t = 0; t < threads.Length; ++t) threads[t].Join();
return result;

Instead of managing the threads on our own, we could use the thread pool of the runtime system. From a usage point of view, this is equivalent to the version shown above, so I will not discuss this any further.
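
Just for completeness, here is a minimal sketch of how the thread pool variant could look (my own sketch, assuming the same static chunking as above; the counter + ManualResetEvent combination is one common way to wait for completion in .NET 3.5):

double[] result = new double[lhs.getNumRows()];
long chunkSize = lhs.getNumRows() / lhs.NumThreads;
int pending = lhs.NumThreads;
using (ManualResetEvent done = new ManualResetEvent(false))
{
   for (int t = 0; t < lhs.NumThreads; ++t)
   {
      int chunk = t; //capture a copy of the loop variable
      ThreadPool.QueueUserWorkItem(delegate
      {
         long firstRow = chunk * chunkSize;
         long lastRow = (chunk == lhs.NumThreads - 1) ? lhs.getNumRows() : (chunk + 1) * chunkSize;
         for (long i = firstRow; i < lastRow; ++i)
         { /* ... SMXV ... */ }
         //Signal completion when the last work item finishes:
         if (Interlocked.Decrement(ref pending) == 0) done.Set();
      });
   }
   done.WaitOne(); //wait for all work items to complete
}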

Tasks. The problem with the approach discussed above is the static work-distribution, which may lead to load imbalances, and implementing a dynamic work-distribution is error-prone and, depending on the code, may also be a lot of work. The goal should be to distribute the workload into smaller packages, but doing this with threads is not optimal: threads are quite costly in the sense that creating or destroying a thread takes quite a lot of time (in computer terms) since the OS is involved, and threads also need some amount of memory. A solution to this problem are Tasks. Well, tasks are quite “in” nowadays, with many people thinking about how to program multicore systems, and therefore there are many definitions of what a task really is. I have given mine in previous posts on OpenMP and repeat it here briefly: a task is a small package consisting of some code to execute and some private data (access to shared data is possible, of course) which the runtime schedules for execution by a team of threads. Actually it is pretty simple to parallelize the code from above using tasks: we have to manage a list of tasks, we have to decide how much work a task should do (in terms of matrix rows), and of course we have to create and start the tasks and finally wait for them to finish. See below:

//Allocate the result vector, manage a list of tasks and set the amount of work (rows) per task:
double[] result = new double[lhs.getNumRows()];
List<Task> taskList = new List<Task>();
int chunkSize = 1000;
//Create the tasks that calculate the parts of the result:
for (long i = 0; i < lhs.getNumRows(); i += chunkSize)
{
   taskList.Add(Task.Create(delegate(object o)
   {
      long chunkStart = (long)o;
      long chunkEnd = System.Math.Min(chunkStart + chunkSize, lhs.getNumRows());
      for (long index = chunkStart; index < chunkEnd; index++)
      { /* ... SMXV ... */ }
   }, i));
}
//Wait for all tasks to finish:
Task.WaitAll(taskList.ToArray());
return result;

Using the TPL. The downside of the approaches discussed so far is that we (= the programmer) have to distribute the work manually. In OpenMP, this is done by the compiler + runtime – at least when Worksharing constructs can be employed; in the case of for-loops, one would use Worksharing. With the upcoming .NET Framework version 4.0 there will be something similar (but not as powerful) available for C#: the Parallel class allows for the parallelization of for-loops when certain conditions are fulfilled (always think about possible Data Races!). Using it is pretty simple thanks to the support for delegates / lambda expressions in C#, as you can see below:

Parallel.For(0, (int)lhs.getNumRows(), delegate(int i)
{
   /* ... SMXV ... */
});
return result;

Nice? I certainly like this! It is very similar to Worksharing in the sense that you instrument your code with further knowledge to (incrementally) add parallelization, while it is also nicely integrated into the core language (which OpenMP isn’t). But you have to note that this Worksharing-like functionality differs from OpenMP in certain important aspects:

  • Tasks are used implicitly. There is a significant difference between using tasks underneath to implement this parallel for-loop and Worksharing in OpenMP: Worksharing uses explicit threads that can be bound to cores / NUMA nodes, while tasks are scheduled onto threads by the runtime system. Performance will be discussed in my next blog post, but tasks can easily be moved between NUMA nodes, and that can really spoil your performance. OpenMP has no built-in support for affinity, but the tricks for dealing with Worksharing on cc-NUMA architectures are well-known.
  • The runtime system has full control. To my current knowledge, there is no reliable way of influencing how many threads will be used to execute the implicit tasks. Even more: I think this is by design. While it is probably nice for many users and applications when the runtime figures out how many threads should be used, this is bad for the well-educated programmer, as he often has better knowledge of the application than the compiler + runtime could ever figure out (about data access patterns, for instance). If you want to fine-tune this parallelization, you have hardly any options (note: this is still beta and the options may change until .NET 4.0 is released; see the sketch after this list for the one knob I am aware of). In OpenMP, you can influence the work-distribution in many aspects.
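
The one knob I did find is sketched below: as far as I can tell from the current .NET 4.0 bits, a ParallelOptions instance with MaxDegreeOfParallelism set can be passed to Parallel.For. Note that this is an upper bound only – it neither pins threads to cores nor guarantees that this many threads are actually used:

//Cap (but do not pin) the number of threads used; an upper bound only,
//no affinity to cores or NUMA nodes is implied:
ParallelOptions opts = new ParallelOptions { MaxDegreeOfParallelism = 4 };
Parallel.For(0, (int)lhs.getNumRows(), opts, delegate(int i)
{
   /* ... SMXV ... */
});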

PLINQ. LINQ stands for language-integrated query and allows for declarative data access. When I first heard about this technology, it was demonstrated in the context of data access and I found it interesting, but not closely related to the parallelism I am interested in. Well, it turns out that PLINQ (P for parallel) can be used to parallelize an SMXV code as well (the matrix_crs class has to implement the IEnumerable / IParallelEnumerable interface):

public static double[] operator *(matrix_crs_plinq lhs, double[] rhs)
{
   var res = from rowIndices in lhs.AsParallel().AsOrdered()
             select RowSum(rowIndices, lhs, rhs);
   double[] result = res.ToArray();
   return result;
}
public static double RowSum(long[] rowIndices, matrix_crs_plinq lhs, double[] rhs)
{
   double rowSum = 0;
   for (long i = rowIndices[0]; i < rowIndices[1]; i++)
   {
      rowSum += lhs.val(i) * rhs[lhs.col(i)];
   }
   return rowSum;
}

Did you notice the AsParallel() in there? That is all you have to do, once the required interfaces have been implemented. Would I recommend using PLINQ for this type of code? No, it is meant to parallelize queries on object collections and more general data sources (think of databases). But (for me at least) it is certainly interesting to see this paradigm applied to a code snippet from the scientific-technical world. As PLINQ uses the TPL internally, you will probably have the same issues regarding locality, although I have not looked into this too closely yet.
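
For reference, here is a minimal sketch of how the enumerable side of matrix_crs_plinq could look (my own guess at an implementation, not the actual code from our framework; the idea is simply to yield the [begin, end) index pair of each row, which is what RowSum above consumes):

using System.Collections.Generic;

public partial class matrix_crs_plinq : IEnumerable<long[]>
{
   public IEnumerator<long[]> GetEnumerator()
   {
      for (long i = 0; i < getNumRows(); ++i)
         yield return new long[] { row(i), row(i + 1) }; //[begin, end) of row i
   }
   System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
   {
      return GetEnumerator();
   }
}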

Let me give credit to Ashwani Mehlem, who is one of the student workers in our group. He did some of the implementation work (especially the PLINQ version) and code maintenance of the experiment framework.