Parallel Programming in Computational Engineering and Science 2014
HPC Seminar and Workshop
Monday, March 10 to Friday, March 14, 2014
IT Center RWTH Aachen University
Kopernikusstraße 6, Seminar Room 3 + 4
We rely on your feedback to improve our course offerings!
Please enter your Feedback here >>>
The latest course materials can be found here >>>
Video recordings of all presentations except "Part V: GPGPU Programming with OpenACC - Friday, March 14" can be found here >>>
This event will continue the tradition of previous annual week-long events taking place in Aachen every spring since 2001.
Throughout the week, we will cover serial (Monday) and parallel programming using MPI (Tuesday) and OpenMP (Wednesday) in Fortran and C / C++, as well as performance tuning. Furthermore, we will give a brief introduction to the new Intel Many Integrated Core Architecture (Thursday) and introduce the participants to GPGPU programming with OpenACC (Friday), and provide opportunities for hands-on exercises, including a "bring-your-own-code" session.
These topics are presented in a modular way, so that you can pick and register for individual days and invest your time as efficiently as possible.
Please find the agenda here >>>
(registration deadline: Monday, March 3, 2014)
Part I: Introduction, Parallel Computing Architectures, Serial Tuning - Monday, March 10
After an introduction to the principles of today's parallel computing architectures, the configuration of the new components of the RWTH Compute Cluster delivered by the company Bull will be explained. As good serial performance is the basis for good parallel performance, we cover serial tuning before introducing parallelization paradigms.
Part II: Message Passing with MPI - Tuesday, March 11
The Message Passing Interface (MPI) is the de facto standard for programming large HPC clusters. We will introduce the basic concepts and give an overview of some advanced features. Furthermore, we will introduce the TotalView debugger and a selection of performance tools. We will also cover hybrid parallelization, i.e. the combination of MPI and shared memory programming, which is gaining popularity as the number of cores per cluster node grows.
Part III: Shared Memory Programming with OpenMP - Wednesday, March 12
OpenMP is a widely used approach for programming shared memory architectures, which is supported by most compilers nowadays. We will cover the basics of the programming paradigm as well as some advanced topics, such as programming NUMA machines or clusters, coherently coupled with the vSMP software from ScaleMP. We will also cover a selection of performance and verification tools for OpenMP. The RWTH Compute Cluster comprises a large number of big SMP machines (up to 128 cores and 2 TB of main memory) as we consider shared memory programming a vital alternative for applications which cannot be easily parallelized with MPI. We also expect a growing number of application codes to combine MPI and OpenMP for clusters of nodes with a growing number of cores.
There will be a sponsored social dinner on Wednesday, March 12 at 19:00 in Restaurant Palladion Aachen
Please register here for Part III of PPCES 2014 (please indicate whether you would like to attend the social dinner on Wed, March 12, 19:00)
Part IV: Programming the Intel® Xeon Phi™ Coprocessor, Tune your own Code - Thursday, March 13
Accelerators such as GPUs are one way to satisfy the ever-growing demand for compute power. However, they often require a laborious rewrite of the application using special programming paradigms like CUDA or OpenCL. The Intel Xeon Phi coprocessor is based on the Intel Many Integrated Core Architecture and can be programmed with standard techniques like OpenMP, POSIX threads, or MPI. We will give a brief introduction to this new architecture and demonstrate the different programming possibilities. During the labs you can also continue working on the lab exercises or get started with porting or tuning your own codes (not only for the Xeon Phi). Don't set your expectations too high, though, as the time for working on large codes is limited. The better prepared you are, the more you will profit from this opportunity.
Part V: GPGPU Programming with OpenACC - Friday, March 14
OpenACC is a directive-based programming model for accelerators which delegates the responsibility for low-level (e.g. CUDA or OpenCL) programming tasks to the compiler. Using the OpenACC API, the programmer can offload compute-intensive loops to an attached accelerator with little effort. The open industry standard OpenACC was introduced in November 2011 and supports accelerating regions of code in standard C, C++ and Fortran. It provides portability across operating systems, host CPUs and accelerators.
During this workshop day, we will give an overview of OpenACC with a focus on NVIDIA GPUs. We will introduce the GPU architecture and briefly explain what a typical CUDA program looks like. Then, we will dive into OpenACC and explore its possibilities for accelerating code regions. We will cover topics such as offloading loops, managing data movement between host and device, tuning data movement and accesses, applying loop schedules, using multiple GPUs, and interoperating with CUDA libraries. At the end, we will give an outlook on the OpenMP 4.0 standard, which adds support for accelerators. Hands-on sessions take place on the RWTH Aachen GPU (Fermi) Cluster using PGI's OpenACC implementation.
- Prof. Dr. Matthias Müller, IT Center RWTH Aachen University
- Ruud van der Pas, Oracle
- Bernd Dammann, Technical University of Denmark, DTU Informatics
- Hans Henrik B. Sørensen, Technical University of Denmark, DTU Informatics
- Members of the HPC Team, IT Center RWTH Aachen University
IMPORTANT (for RWTH members only)!
Please note that all RWTH members (except UKA) need a PC pool account (which is not the same as the cluster account) to take part in the hands-on sessions.
Please find related information here.
RWTH Aachen University
IT Center, Extension Building
Kopernikusstraße 6, 52074 Aachen
Seminar Room 3 + 4
Please make your own hotel reservation. You may find a list of hotels in Aachen on the web pages of the Aachen Tourist Service. We recommend that you try to book a room at the "Novotel Aachen City", "Mercure am Graben" or "Aachen Best Western Regence" hotels. These are nice hotels with reasonable prices within walking distance (20-30 minutes) of the IT Center through the old city of Aachen. An alternative is the "IBIS Aachen Marschiertor" hotel close to the main station, which is convenient if you are traveling by train and also want to commute to the IT Center by train (4 trains per hour, 2 stops).
Most trains between Aachen and Düsseldorf stop at the "Aachen West" station, which is a five-minute walk from the IT Center.
From the bus stop and the train station, walk uphill along "Seffenter Weg". The first building on the left-hand side at the junction with "Kopernikusstraße" is the IT Center of RWTH Aachen University. The event will take place in the extension building in "Kopernikusstraße".