Massey University is kindly hosting NeSI staff at a workshop for those seeking to understand what parallel programming is and how it can benefit research programmes. The workshop is open to all researchers from post-graduate student level onwards. There is a maximum of 15 places, so please register your interest early to avoid being turned away.
If you are from another institution, or cannot attend the session, please use the support tab to request a workshop be run at a time and place more convenient for you.
Registration and Welcome
We'll introduce the speakers and NeSI's Computational Science Team. The Computational Science Team is made up of experts from around the country who work with research groups to assist them with their research.
Introduction to Parallel Programming
This session will expose the audience to some concepts used in HPC and scientific programming in general.
We will start by looking at ways to split problems into manageable chunks that can be processed in parallel, known as problem decomposition. The session will be rich with examples, from bioinformatics to physics.
Further on, more terminology will be introduced. The audience will be exposed to the concepts of task and data parallelism. We will also look at what it means for something to be "embarrassingly parallel" or "tightly coupled".
As we near the end, we will cover some of the issues related to dealing with memory in an HPC system. In particular, we'll distinguish between sharing memory within a single node and working with memory across nodes in a distributed system. This will bring us towards OpenMP and MPI, which are explained in depth later in the day.
Parallel Programming - OpenMP
OpenMP is an API for shared-memory parallelism, in which multiple threads cooperate within a single hardware node. What is appealing about the approach is that legacy code built around tight loops can often be sped up with simple compiler #pragmas.
As well as providing a theoretical overview, Gene will be running code directly on NeSI's facilities.
Parallel Programming - MPI
The Message Passing Interface (MPI) is the de facto standard for parallel computing on high performance computing clusters. Charles will cover the primitives offered by MPI and how they can be combined to perform meaningful parallel computations. He will also cover some performance optimisations, debugging techniques, and parallel libraries that can be used by applications.
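To illustrate one of the collective primitives Charles will cover, here is a minimal MPI sketch, assuming an MPI implementation is available (compiled with `mpicc` and launched with `mpirun`). Each process contributes one value and `MPI_Reduce` combines them on rank 0; the example itself is illustrative, not taken from the workshop.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes        */

    /* Collective primitive: every rank contributes its own rank
     * number, and MPI_SUM combines them onto rank 0. */
    int total = 0;
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}
```

Unlike OpenMP, the processes here share no memory: all communication happens through explicit calls such as this one, which is what allows MPI programs to scale across many nodes.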
Using NeSI HPC Resources
Learn about each of NeSI's HPC facilities and how to submit work to them. We will be covering a number of different techniques, each with its own trade-off between simplicity and customisability.
The audience will be invited to contribute to a roundtable discussion about how HPC might enhance their respective research programmes. Each of the speakers from the day will be available for questions and will be happy to provide advice.
Should I attend?
The day is appropriate for researchers at every level, from every scientific discipline at Massey University. We aim to treat the audience as researchers first and as programmers second.
Here are some responses to some common concerns:
- "My work's not big enough" If you are not currently making use of powerful computing resources, the day could be valuable for you to learn what is available. You will gain some exposure to the tools that may benefit you in a few years' time.
- "I already know all this stuff" Excellent! You can help us create a community of researchers at Massey who are comfortable using NeSI's services.
- "I don't use C or Fortran" That's fine: many other languages and runtimes are available for research. We are more flexible than most HPC centres worldwide about the applications that run on our systems.
Can't Make it?
If you would like to arrange another workshop, please use the support tab to request one. We will contact you to see whether we can run an event at a time and place more convenient for you.
- Please email Chris Jewell by 19 Feb 2014.
- AgHort Building A room 3.49, Turitea campus