
A performance tuning tale: Optimizing SMXV (sparse Matrix-Vector-Multiplication) on Windows [part 1.5 of 2]

Although it is high time to deliver the second part of this blog post series, I decided to squeeze in one additional post which I named part “1.5”, as it will cover some experiments with SMXV in C#. Since I am currently preparing a lecture named Multi-Threading for Desktop Systems (it will be held in German, though) in which C# plays an important role, we took a closer look at how parallelism has made its way into .NET Framework versions 3.5 and 4.0. The final post will then cover some more tools and performance experiments (especially regarding cc-NUMA architectures) with the focus back on native coding.

First, let us briefly recap how the SMXV was implemented and examine what this can look like in C#. As explained in my previous post, the CRS format stores just the nonzero elements of the matrix in three vectors: the val-vector contains the values of all nonzero elements, the col-vector contains the column index of each nonzero element, and the row-vector points to the first nonzero element index (in val and col) for each matrix row. Having one class to represent a CRS matrix and using an array of doubles to represent a vector, the SMXV operation encapsulated by operator* can be implemented like this, independent of whether you use managed or unmanaged arrays:

public static double[] operator *(matrix_crs lhs, double[] rhs)
{
   double[] result = new double[lhs.getNumRows()];
   for (long i = 0; i < lhs.getNumRows(); ++i)
   {
      double sum = 0;
      long rowbeg = lhs.row(i);
      long rowend = lhs.row(i + 1);
      for (long nz = rowbeg; nz < rowend; ++nz)
         sum += lhs.val(nz) * rhs[ lhs.col(nz) ];
      result[i] = sum;
   }
   return result;
}
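
For reference, the operator only relies on a handful of accessors of the CRS container. A minimal sketch of such a class could look like the following; the accessor names are taken from the calls above, while the field names and the choice of plain arrays as backing storage are my assumptions (constructor and further members, such as NumThreads used later on, are omitted):

//Minimal sketch of a CRS container matching the accessors used above
//(field names and storage choices are assumptions, constructor omitted):
public class matrix_crs
{
   private double[] _val;  //values of the nonzero elements
   private long[] _col;    //column index of each nonzero element
   private long[] _row;    //index of the first nonzero of each row, length numRows + 1

   public long getNumRows() { return _row.Length - 1; }
   public double val(long nz) { return _val[nz]; }
   public long col(long nz) { return _col[nz]; }
   public long row(long i) { return _row[i]; }
}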

We have several options to parallelize this code, which I will present and briefly discuss in the rest of this post.

Threading. In this approach, the programmer is responsible for managing the threads and distributing the work onto the threads. It is not too hard to implement a static work-distribution for any given number of threads, but implementing a dynamic or adaptive work-distribution is a lot of work and also error-prone. In order to implement the static approach, we need an array of threads, have to compute the iteration chunk for each thread, put the threads to work and finally wait for the threads to finish their computation.

//Compute chunks of work:
Thread[] threads = new Thread[lhs.NumThreads];
long chunkSize = lhs.getNumRows() / lhs.NumThreads;
//Start threads with respective chunks:
for (int t = 0; t < threads.Length; ++t)
{
   threads[t] = new Thread(delegate(object o)
   {
      int thread = (int)o;
      long firstRow = thread * chunkSize;
      long lastRow = (thread + 1) * chunkSize;
      if (thread == lhs.NumThreads - 1) lastRow = lhs.getNumRows();
      for (long i = firstRow; i < lastRow; ++i)
      { /* ... SMXV ... */ }
   });
   //Start the thread and pass the ID:
   threads[t].Start(t);
}
//Wait for all threads to complete:
for (int t = 0; t < threads.Length; ++t) threads[t].Join();
return result;

Instead of managing the threads on our own, we could use the thread pool of the runtime system. From a usage point of view, this is equivalent to the version shown above, so I will not discuss this any further.
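
Just to illustrate the idea nevertheless, here is a minimal sketch (mine, not from the measured code) of the same static chunking queued onto the thread pool; it assumes the same matrix_crs members and usings as the snippets above and uses an interlocked counter plus a ManualResetEvent to wait for completion:

//Sketch only: static chunking as above, but executed via the runtime's thread pool.
int numChunks = lhs.NumThreads;  //reuse the configured thread count as chunk count
long chunkSize = lhs.getNumRows() / numChunks;
int pending = numChunks;
using (ManualResetEvent done = new ManualResetEvent(false))
{
   for (int t = 0; t < numChunks; ++t)
   {
      ThreadPool.QueueUserWorkItem(delegate(object o)
      {
         int chunk = (int)o;
         long firstRow = chunk * chunkSize;
         long lastRow = (chunk == numChunks - 1) ? lhs.getNumRows() : (chunk + 1) * chunkSize;
         for (long i = firstRow; i < lastRow; ++i)
         { /* ... SMXV ... */ }
         //Signal once the last chunk has finished:
         if (Interlocked.Decrement(ref pending) == 0) done.Set();
      }, t);
   }
   //Wait for all queued work items to complete:
   done.WaitOne();
}
return result;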

Tasks. The problem with the approach discussed above is the static work-distribution, which may lead to load imbalances; implementing a dynamic work-distribution is error-prone and, depending on the code, may also be a lot of work. The goal should be to split the workload into smaller packages, but doing this with threads is not optimal: threads are quite costly in the sense that creating or destroying a thread takes quite a lot of time (in computer terms), since the OS is involved, and threads also need some amount of memory. Tasks are a solution to this problem. Well, tasks are quite “in” nowadays, with many people thinking about how to program multicore systems, and therefore there are many definitions of what a task really is. I have given mine in previous posts on OpenMP and repeat it here briefly: a task is a small package consisting of some code to execute and some private data (access to shared data is possible, of course) which the runtime schedules for execution by a team of threads. It is actually pretty simple to parallelize the code from above using tasks: we have to manage a list of tasks, decide how much work a task should do (in terms of matrix rows), and of course we have to create and start the tasks and finally wait for them to finish. See below:

//Set the size of the tasks:
int chunkSize = 1000;
List<Task> taskList = new List<Task>();
//Create the tasks that calculate the parts of the result:
for (long i = 0; i < lhs.getNumRows(); i += chunkSize)
{
   taskList.Add(Task.Create(delegate(object o)
   {
      long chunkStart = (long)o;
      for (long index = chunkStart;
           index < System.Math.Min(chunkStart + chunkSize, lhs.getNumRows()); index++)
      { /* ... SMXV ... */ }
   }, i));
}
//Wait for all tasks to finish:
Task.WaitAll(taskList.ToArray());
return result;

Using the TPL. The downside of the approaches discussed so far is that we (= the programmers) have to distribute the work manually. In OpenMP, this is done by the compiler and runtime, at least when Worksharing constructs can be employed; in the case of for-loops, one would use Worksharing in OpenMP. With the upcoming .NET Framework version 4.0, there will be something similar (but not as powerful) available for C#: the Parallel class allows for the parallelization of for-loops when certain conditions are fulfilled (always think about possible data races!). Using it is pretty simple thanks to the support for delegates / lambda expressions in C#, as you can see below:

Parallel.For(0, (int)lhs.getNumRows(), delegate(int i)
{
   /* ... SMXV ... */
});
return result;
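
As a side note, with lambda syntax the same call becomes even shorter; this is purely notational and behaves the same:

//Same parallel loop, written with a lambda expression instead of an anonymous delegate:
Parallel.For(0, (int)lhs.getNumRows(), i => { /* ... SMXV ... */ });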

Nice? I certainly like this! It is very similar to Worksharing in the sense that you instrument your code with additional knowledge to (incrementally) add parallelism, while it is also nicely integrated into the core language (which OpenMP is not). But note that this Worksharing-like functionality differs from OpenMP in certain important aspects:

  • Tasks are used implicitly. There is a significant difference between using tasks underneath to implement this parallel for-loop and Worksharing in OpenMP: Worksharing uses explicit threads that can be bound to cores / NUMA nodes, while tasks are scheduled onto threads at the discretion of the runtime system. Performance will be discussed in my next blog post, but tasks can easily be moved between NUMA nodes, and that can really spoil your performance. OpenMP has no built-in support for affinity, but the tricks for dealing with Worksharing on cc-NUMA architectures are well-known.
  • Runtime system has full control. To my current knowledge, there is no reliable way of influencing how many threads will be used to execute the implicit tasks. Even more: I think this is by design. While it is probably nice for many users and applications when the runtime figures out how many threads should be used, this is bad for the well-educated programmer, as he often has better knowledge of the application (about data access patterns, for instance) than the compiler and runtime could ever figure out. If you want to fine-tune this parallelization, you have hardly any options (note: this is still beta and the options may change until .NET 4.0 is released). In OpenMP, you can influence the work-distribution in many aspects.

PLINQ. LINQ stands for Language-Integrated Query and allows for declarative data access. When I first heard about this technology, it was demonstrated in the context of data access, and I found it interesting but not closely related to the parallelism I am interested in. Well, it turned out that PLINQ (parallel LINQ) can be used to parallelize an SMXV code as well (the matrix_crs class has to implement the IEnumerable / IParallelEnumerable interface):

public static double[] operator *(matrix_crs_plinq lhs, double[] rhs)
{
   var res = from rowIndices in lhs.AsParallel().AsOrdered()
             select RowSum(rowIndices, lhs, rhs);
   double[] result = res.ToArray();
   return result;
}

public static double RowSum(long[] rowIndices, matrix_crs_plinq lhs, double[] rhs)
{
   double rowSum = 0;
   for (long i = rowIndices[0]; i < rowIndices[1]; i++)
      rowSum += lhs.val(i) * rhs[lhs.col(i)];
   return rowSum;
}

Did you recognize the AsParallel() in there? That is all you have to do once the required interfaces have been implemented. Would I recommend using PLINQ for this type of code? No, it is meant to parallelize queries on object collections and more general data sources (think of databases). But (for me at least) it is certainly interesting to see this paradigm applied to a code snippet from the scientific-technical world. As PLINQ uses the TPL internally, you will probably have the same issues regarding locality, although I have not looked into this too closely yet.
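
To make the query a bit more concrete: RowSum indexes rowIndices[0] and rowIndices[1], so the enumerator apparently yields one [rowbeg, rowend) pair per matrix row. A minimal sketch of what the IEnumerable<long[]> side of matrix_crs_plinq might look like (my guess, not the original implementation):

//Sketch: yield one { rowbeg, rowend } pair per matrix row, so that PLINQ
//can distribute the rows across its worker threads.
public IEnumerator<long[]> GetEnumerator()
{
   for (long i = 0; i < getNumRows(); ++i)
      yield return new long[] { row(i), row(i + 1) };
}

System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
{
   return GetEnumerator();
}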

Let me give credit to Ashwani Mehlem, who is one of the student workers in our group. He did some of the implementation work (especially the PLINQ version) and code maintenance of the experiment framework.

Upcoming Events in March 2009

Let me use this well-linked position to announce three HPC events in March 2009.

Future Directions in High Performance Computing: 2009 – 2018. On Monday, March 23rd, 2009, Dr. Horst Simon (Associate Laboratory Director for Computing Sciences at Lawrence Berkeley National Laboratory) will give a presentation on future directions in HPC. The talk will be given in room 637 of the SuperC building of RWTH Aachen University at 11:00h. More information can be found at the associated webpage.

Parallel Programming in Computational Engineering and Science (PPCES). This event continues the tradition of the annual SunHPC events that have taken place in Aachen since 2001, organized by RWTH Aachen University and Sun Microsystems. We are changing the format a little bit each year, reflecting changes in hardware architecture (Sun UltraSPARC to Intel Nehalem), operating systems (Solaris-only to Linux plus Windows) and programming languages (Fortran-dominated examples are disappearing). This year we intend to cover Linux, Windows and Solaris and to offer all demos, exercises and examples on both Linux and Windows, demonstrating that Windows-HPC is a first-class citizen with respect to our user support. The event will take place in the lecture room and the PC pool of the Center for Computing and Communication at RWTH Aachen University. More information and the option to register can be found at the event website.

2nd Meeting of the German Windows-HPC User Group (Win-HPC UG). The second meeting of the German Windows-HPC User Group will be hosted by the Technical University of Dresden and will take place on March 30th and 31st. The agenda is not yet online (but will be shortly); I am looking forward to some interesting talks. The event is sponsored by Intel, Microsoft, Megware, Sun and Transtec; we are hoping and doing our best to repeat the success of the predecessor event that took place last year in Aachen. It was harder than usual to get technical experts from these companies for a talk, but in the end we are quite happy (if you think you have something interesting to share, please contact us immediately). More information can be found at the event website, and our plan is to schedule all talks between Monday 14:00h and Tuesday 17:00h to allow for comfortable travel.