Parallel Programming in Computational Engineering and Science 2016
HPC Seminar and Workshop
Monday, March 14 - Friday, March 18, 2016
IT Center RWTH Aachen University
Kopernikusstraße 6, Seminar Room 3 + 4
This event continued the tradition of the annual week-long events that have taken place in Aachen every spring since 2001.
Throughout the week we covered parallel programming with OpenMP and MPI in Fortran and C/C++, as well as performance tuning. Furthermore, we introduced the participants to GPGPU programming with OpenACC. Hands-on exercises were provided for each topic; participants were also welcome to work on their own codes.
The topics were presented in a modular way, so that you could pick specific ones and register only for the corresponding days, letting you invest your time as efficiently as possible. Please register separately for each event day.
Guest lectures by Ruud van der Pas, Thomas Nau, Dean Stewart, Sébastien Grimonet, Bernd Dammann, Patrick Wohlschlegel, Thomas Röhl and Jiri Kraus completed the program.
Agenda and Course Materials
Please find the agenda here.
Shared Memory Programming with OpenMP - Day I
- 01 IntroductionToOpenMP.pdf
- 02 OpenMPTaskingInDepth.pdf
- 03 OpenMPSummary.pdf
- OpenMP exercises solutions: PPCES_2016_openmp_exercises_solutions.tar.gz
Shared Memory Programming with OpenMP - Day II
- ppces-2016-openmp-ruud-slides.pdf by Ruud van der Pas
- PPCES-2016-DTrace.pdf by Thomas Nau
- PPCES RW Presentation v1.0.pdf by Dean Stewart
- 2016-IntelThreadingTools.pdf by Tim Cramer
- PPCES2016_hhbs_cm_bd2.pdf by Bernd Dammann
Message Passing with MPI - Day I and II
- TotalView.pdf by Tim Cramer
- 2016-03-17_PPCES_Correctness_Tools.pdf by Joachim Protze
- 2016mar16.uni_aachen.presentation.pdf by Patrick Wohlschlegel
- LIKWID_PPCES_2016.pdf by Thomas Röhl
GPGPU Programming with OpenACC
Attendees should be comfortable with C/C++ or Fortran programming and interested in learning more about the technical details of application tuning and parallelization. The presentations will be given in English.
I. + II. OpenMP is a widely used approach for programming shared memory architectures and is supported by most compilers nowadays. We will cover the basics of the programming paradigm as well as some advanced topics such as programming NUMA machines, and we will present a selection of performance and verification tools for OpenMP. The RWTH Compute Cluster comprises a large number of big SMP machines (up to 128 cores and 2 TB of main memory), as we consider shared memory programming a viable alternative for applications that cannot easily be parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP on clusters whose nodes have an ever-growing number of cores.
Furthermore, we will introduce the participants to modern features of the OpenMP 4.0 standard like vectorisation and programming for accelerators and for the Many Integrated Core (MIC) Architecture.
I. Shared Memory Programming with OpenMP Day I Monday Registration closed
II. Shared Memory Programming with OpenMP Day II Tuesday Registration closed
(Please indicate in the 'remarks' field when registering for Part II of PPCES if you would like to attend the social dinner on Tuesday, March 15, 7 pm.)
III. + IV. The Message Passing Interface (MPI) is the de facto standard for programming large HPC clusters. We will introduce the basic concepts and give an overview of some advanced features. Also covered is hybrid parallelization, i.e. the combination of MPI and shared memory programming, which is gaining popularity as the number of cores per cluster node grows. Furthermore, we will introduce the TotalView debugger and a selection of performance and correctness tools (Score-P, Vampir, MUST).
III. Message Passing with MPI Day I Wednesday Registration closed
IV. Message Passing with MPI Day II Thursday Registration closed
V. OpenACC is a directive-based programming model for accelerators which enables delegating the responsibility for low-level (e.g. CUDA or OpenCL) programming tasks to the compiler. Using the OpenACC industry standard, the programmer can offload compute-intensive loops to an attached accelerator with little effort.
We will give an overview of OpenACC while focusing on NVIDIA GPUs. We will cover topics such as the GPU architecture, offloading loops, managing data movement between host and device, tuning data movement, applying loop schedules and writing heterogeneous applications. Furthermore, our guest speaker Jiri Kraus (NVIDIA) will introduce the application of Unified Memory with OpenACC.
Hands-on sessions take place on the RWTH Aachen GPU (Fermi) Cluster using PGI's OpenACC implementation.
V. GPGPU Programming with OpenACC Friday Registration closed
There is no seminar fee. All other costs (e.g. travel, hotel, and meals) are at your own expense.
Please make your own hotel reservation. A list of hotels in Aachen can be found on the web pages of Aachen Tourist Service. We recommend trying to book a room at the "Novotel Aachen City", "Mercure am Graben" or "Aachen Best Western Regence" hotels. These are nice hotels with reasonable prices within walking distance (20-30 minutes, citymap.pdf) of the IT Center through the old city of Aachen. An alternative is the "IBIS Aachen Marschiertor" hotel close to the main station, which is convenient if you are traveling by train and also want to commute to the IT Center by train (4 trains per hour, 2 stops).
Most trains between Aachen and Düsseldorf stop at the "Aachen West" station, which is a five-minute walk from the IT Center.
From the bus stop and the train station, walk uphill along "Seffenter Weg". The first building on the left-hand side at the junction with "Kopernikusstraße" is the IT Center of RWTH Aachen University. The event will take place in the extension building on "Kopernikusstraße".