Introduction to Parallel Programming with MPI and OpenMP

Spring 2008

Location:

Center for Computing and Communication

RWTH Aachen University

Seffenter Weg 23

52074 Aachen


Friday, April 25, 2008
  10:45 - 12:15, Seminar room (2.31)
  13:00 - 17:00, Lab room 3

Wednesday, April 30, 2008
  10:45 - 12:15, Conference room (E.11)
  13:00 - 17:00, Lab room 3

Friday, May 30, 2008
  10:45 - 12:15, Seminar room (2.31)
  13:00 - 17:00, Lab room 3

Friday, June 6, 2008
  10:45 - 12:15, Lab room 3 (lecture)
  13:00 - 17:00, Lab room 3 (practice)


Introduction

MPI (Message-Passing Interface) is the de facto standard for parallelizing applications in Fortran, C, and C++ on distributed-memory parallel systems. Multiple processes explicitly exchange data and coordinate their workflow. MPI specifies the interface but not the implementation, so there are plenty of implementations for PCs as well as for supercomputers, both freely available ones and commercial ones that are specifically tuned for the target platform. MPI defines a huge number of calls, although it is possible to write meaningful MPI applications using only about ten of them.

OpenMP is an Application Programming Interface (API) for a portable, scalable programming model for developing shared-memory parallel applications in Fortran, C, and C++.
So far OpenMP has predominantly been employed on large shared-memory machines. With the growing number of cores on all kinds of processor chips, and with additional OpenMP implementations such as those in the GNU and Visual Studio compilers, OpenMP is available to a rapidly growing, broad community. Upcoming multicore architectures make the landscape for OpenMP programs even more diverse: the memory hierarchy will deepen, with more caches on the processor chips.
Whereas applying OpenMP to Fortran and C programs on machines with a flat memory (UMA architecture) is straightforward in many cases, there are quite a few pitfalls when using OpenMP for C++ codes on the one hand and on cc-NUMA architectures on the other. The increasing diversity of multicore processor architectures introduces further aspects to consider for obtaining good scalability.

Participants

The tutorial is organized by the MATSE and the HPC teams of the Center for Computing and Communication.
The target group of these tutorials comprises apprentices training as mathematical-technical software developers, bachelor students in Scientific Programming, and scientists of RWTH Aachen University who want a quick start in parallel programming.
Attendees should be comfortable with C, C++ or Fortran programming. Prepared lab exercises will be made available to participants. These exercises have been selected to demonstrate features discussed in the presentations.
The tutorial language is German; English is available upon request.

Course Material

 

Registration

If you are not taking part in the context of your MATSE / MATA apprenticeship, please register for the MPI tutorial and the OpenMP tutorial separately.
Apprentices should use their standard registration mechanisms instead.
Registration is mandatory. Allocation is on a first-come, first-served basis, as capacity is limited.

 

Contact

Dieter an Mey
Tel.: +49 241 80 24377
Fax: +49 241 80 22504
E-mail: anmey@rz.rwth-aachen.de

Hans Joachim Pflug
Tel.: +49 241 80 24763
Fax: +49 241 80 22134
E-mail: pflug@rz.rwth-aachen.de

