Introduction to Parallel Programming, Autumn 2008

 

with MPI and OpenMP

in the

Center for Computing and Communication

RWTH Aachen University

Seffenter Weg 23

52074 Aachen


 
  • Wed 08.10.2008, 8:15-13:15, in PC 3, RZ

  • Wed 22.10.2008, 8:15-13:15, in PC 3, RZ

  • Wed 05.11.2008, 8:15-13:15, in PC 3, RZ

  • Wed 19.11.2008, 8:15-13:15, in PC 3, RZ


Contents

 

Introduction

MPI (Message-Passing Interface) is the de-facto standard for parallelizing applications in Fortran, C, and C++ on distributed-memory parallel systems. Multiple processes explicitly exchange data and coordinate their work flow. MPI specifies the interface but not the implementation; therefore, there are plenty of implementations for PCs as well as for supercomputers, both freely available ones and commercial ones that are particularly tuned for the target platform. MPI defines a huge number of calls, although it is possible to write meaningful MPI applications employing only about ten of them.

OpenMP is an Application Programming Interface (API) for a portable, scalable programming model for developing shared-memory parallel applications in Fortran, C, and C++.
So far, OpenMP has been employed predominantly on large shared-memory machines. With the growing number of cores on all kinds of processor chips, and with additional OpenMP implementations, e.g. in the GNU and Visual Studio compilers, OpenMP is available to a rapidly growing, broad community. Upcoming multicore architectures make the playground for OpenMP programs even more diverse: the memory hierarchy will grow, with more caches on the processor chips.
Whereas applying OpenMP to Fortran and C programs on machines with a flat memory (UMA architecture) is straightforward in many cases, there are quite a few pitfalls when using OpenMP for C++ codes on the one hand and on cc-NUMA architectures on the other. The increasing diversity of multicore processor architectures introduces further aspects that must be considered to obtain good scalability.

Participants

The tutorial is organized by the MATSE and the HPC teams of the Center for Computing and Communication.
The target group of these tutorials comprises the apprentices training as mathematical-technical software developers (MATSE) and bachelor students in Scientific Programming, as well as scientists of RWTH Aachen University who want a quick start in parallel programming.
Attendees should be comfortable with C, C++, or Fortran programming. Prepared lab exercises, selected to demonstrate the features discussed in the presentations, will be made available to participants.
The tutorial language is German, or English upon request.

Material

 

 

Registration

If you are not taking part in the context of your MATSE / MATA apprenticeship, please register separately for the MPI tutorial and the OpenMP tutorial.
Apprentices should use their standard registration mechanisms instead.
Registration is mandatory. Allocation is on a first-come, first-served basis, as capacity is limited.

 

Contact

Tatjana Streit
Tel.: +49 241 80 24911
Fax: +49 241 80 22662
E-mail: streit@rz.rwth-aachen.de

Christian Terboven
Tel.: +49 241 80 24375
Fax: +49 241 80 22504
E-mail: terboven@rz.rwth-aachen.de

