Parallel Programming Course Summer 2013

July 29 - August 2, 2013 (one-week course!)

Center for Computing and Communication, Extension Building

RWTH Aachen University

Kopernikusstraße 6, 52074 Aachen
Seminar Room 4

Introduction

This event is a one-week course in parallel programming taking place at the Center for Computing and Communication in the week of July 29 - August 2. The course gives an introduction to parallel computing architectures and covers the most commonly used parallel programming paradigms, OpenMP and MPI. Lectures present the basics and some advanced usage techniques of both paradigms, as well as tools that support programmers in debugging and performance tuning. The course consists of alternating lecture and exercise blocks. In the exercise blocks, participants can try out the presented topics on the RWTH Compute Cluster using prepared exercises.

Dates and Agenda

July 29, 2013: Introduction to Parallel Computing Architectures (14:00 – 17:00)
July 30, 2013: Introduction to OpenMP Programming (9:00 – 17:00)
July 31, 2013: Advanced OpenMP Programming (9:00 – 17:00)
August 01, 2013: Basic message passing with MPI (9:00 – 17:00)
August 02, 2013: Advanced MPI, profiling and debugging of MPI applications (9:00 – 15:30)

Material

Participants

Attendees should be comfortable with C/C++ or Fortran programming and interested in learning more about parallel programming. The presentations will be given in English. The Linux cluster of the Center for Computing and Communication will be used for the exercises.

Details

  • Introduction to Parallel Computing Architectures

On the first day of this course, an introduction to parallel computing architectures is given. The basic differences between several kinds of machines are explained, along with their advantages and disadvantages for High Performance Computing. The basic concepts presented here, such as shared memory, distributed memory and NUMA, are a prerequisite for better understanding the paradigms presented during the rest of the week.

  • Introduction to OpenMP Programming

OpenMP is a widely used approach for programming shared memory architectures and is supported by most compilers nowadays. This ½-day course will give a comprehensive introduction to shared memory parallel programming in general and to OpenMP in particular. It will cover most of the OpenMP language elements, focusing on worksharing and tasking to speed up program execution. It will also touch on basic aspects of performance optimization, such as load balancing and dealing with NUMA architectures.
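To give a first impression of the worksharing style covered in this block, here is a minimal, illustrative sketch (not taken from the course material): a parallel-for loop whose iterations are distributed across threads, with a reduction combining per-thread partial sums. It can be compiled, for example, with `gcc -fopenmp`.

/* Minimal OpenMP worksharing sketch (illustrative only). */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N];
    double sum = 0.0;

    /* Worksharing: the loop iterations are split among the threads of
     * the parallel region; reduction(+:sum) combines the partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}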

  • Advanced OpenMP Programming

This part focuses on performance aspects, such as data and thread locality on NUMA architectures, false sharing, and private versus shared data. It will discuss language features in depth and explain the performance implications of different implementation alternatives. Finally, it will present various tools and show how they can be used in the OpenMP parallelization cycle.
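As a hedged illustration of one of these performance aspects, the following sketch (not course material) contrasts per-thread counters packed into one array, where neighbouring elements share a cache line and cause false sharing, with counters padded to a cache line each (assuming 64-byte lines).

/* Illustrative sketch of false sharing vs. padded per-thread data. */
#include <stdio.h>
#include <omp.h>

#define NTHREADS_MAX 64        /* assumed upper bound on thread count */

int main(void)
{
    /* Packed counters: adjacent elements share a cache line, so
     * concurrent updates by different threads cause false sharing. */
    long packed[NTHREADS_MAX] = {0};

    /* Padded counters: each counter occupies its own cache line. */
    struct { long val; char pad[64 - sizeof(long)]; } padded[NTHREADS_MAX] = {{0}};

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();   /* private to each thread */
        for (long i = 0; i < 10000000; i++) {
            packed[tid]++;                /* false sharing likely */
            padded[tid].val++;            /* independent cache lines */
        }
    }

    printf("packed[0]=%ld padded[0]=%ld\n", packed[0], padded[0].val);
    return 0;
}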

  • Basic message passing with MPI

The Message Passing Interface (MPI) is the de-facto standard for programming large distributed memory HPC systems. This ½-day course will introduce the basic concepts of the Single Program Multiple Data (SPMD) parallel programming model realized with message passing in MPI. It will introduce the MPI standard and run-time, point-to-point communication, and the most important collective communication operations available in MPI. Attendees are expected to have at least an intermediate understanding of either C or Fortran.
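The following minimal sketch (illustrative only, not course material) shows the SPMD style with one point-to-point message and one collective operation. It can be built with `mpicc` and run, for example, with `mpiexec -n 4 ./a.out`.

/* Minimal MPI sketch: point-to-point send/receive plus a reduction. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point: rank 0 sends a value to rank 1. */
    if (rank == 0 && size > 1) {
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", msg);
    }

    /* Collective: sum all rank numbers onto rank 0. */
    int total = 0;
    MPI_Reduce(&rank, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}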

  • Advanced MPI, profiling and debugging of MPI applications

This part will cover the basic usage of the most popular parallel debuggers and performance tools (TotalView and Vampir in particular), specifically in the context of MPI. The course will also teach basic hybrid parallelization, i.e. the combination of message passing and shared memory programming. Hybrid parallelization is gaining popularity as the number of cores per cluster node grows.
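To hint at what hybrid parallelization looks like, here is a minimal, hedged sketch (not course material): MPI processes that each spawn OpenMP threads, initialized with a requested thread support level. It can be built, for example, with `mpicc -fopenmp`.

/* Illustrative hybrid MPI + OpenMP sketch. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request FUNNELED support: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Threaded work inside each MPI process. */
    #pragma omp parallel
    {
        #pragma omp critical
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}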

Contact

E-mail: hpcevent@rz.rwth-aachen.de