Service Description

The IT Center operates high-performance computers to support institutions and employees in education and research.

All machines are integrated into one “RWTH Compute Cluster” running under the Linux operating system.

General information about the usage of the RWTH Compute Cluster is described in this area, whereas information about programming the high-performance computers is described in RWTH Compute Cluster - Parallel Programming.

All members of RWTH Aachen University have free access to the RWTH Compute Cluster, but the amount of resources they can use is limited.

Above a certain threshold, applications for additional resources have to be submitted, which are then reviewed. This application process is also open to external German scientists at institutions related to education and research. Please find related information here.

Please find information about how to get access to the system here.

You can get information about using and programming the RWTH Compute Cluster online on this website, or during our HPC-related events. For many of these events, particularly tutorials, we collect related material on our website as well - see here. In addition, the Chair for HPC offers regular lectures, exercises, and software labs covering related topics.

Users of the RWTH Compute Cluster are continuously informed through the HPC mailing list (registration, archive).

Maintenance Information

RWTH Incident Reports
Incident reports for services of RWTH Aachen University


2019-10-2x, old legacy RV-NRW accounts locked:

  • All known active RV-NRW users were informed many times prior to this switch.
  • All users can still log in, either via a private 'rnrw' project granted to them, or as owner/co-worker on another project.
  • If in doubt, please contact the service desk.

2019-10-2x, major change in handling of project/group accounts:

2019-10-09, HPC Software News: 

2019-10-07, HPC Software News: 

  • TotalView 2019.2 installed; now available as the 'totalview/2019.2' module.
  • Major update of tools installed in the DEV-TOOLS category:
    • Cube versions 4.4.4 (default) and 4.5-release-preview (incl. Adviser Plugin) installed; each is available as 3 modules (divided by component):
      • cubew (Cube Writer C library)
      • cubelib (Cube Reader & Writer C++ library & tools)
      • cubegui (Cube GUI Performance Report Explorer)
    • Score-P version 6.0 (with PAPI 5.7 and CUDA 10.0) installed
    • Scalasca version 2.5 installed
    • Older software versions have been retired.
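As a sketch of how the updated tool chain is typically combined on a module-based system (the module names follow the list above; the `scorep` and `scalasca` front-ends are the standard Score-P/Scalasca commands, and the program name `app.c` is a made-up placeholder):

```shell
# Hypothetical profiling session - adapt module and program names to your setup.
module load scorep/6.0 scalasca/2.5 cubegui   # load the updated DEV-TOOLS modules
scorep mpicc -o app app.c                     # build with Score-P instrumentation
scalasca -analyze mpiexec -n 4 ./app          # run the job and collect a profile
scalasca -examine scorep_app_4_sum            # inspect the result in the Cube GUI
```

The experiment directory name in the last line depends on how the run was launched; `scalasca -examine` prints the actual name after the measurement.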

2019-10-01, HPC Software News: 

  • TURBOMOLE installation fixed; documentation largely rewritten. Throw away your old batch scripts and write new ones based on the examples!

2019-09-17, HPC Software News: 

  • Intel compiler version installed
  • Intel MPI version 2019.5.281 installed; now available as intelmpi/2019.5
  • Intel TBB version 2019.8.281 installed and set as the default inteltbb module instead of inteltbb/2018. Note that this version of Intel TBB is not to be used with Intel compilers (they all include a current version of Intel TBB).

2019-09-13, HPC Software News: 

  • ParaView installation revamped:
    • Version 5.4.1 (old default) moves to the DEPRECATED area. This version needs Intel MPI version 5.x, which is now DEPRECATED in the current environment, and the MPI-parallelized 'pvserver' with Intel MPI runs on localhost only.
    • Version 5.7.0-RC3 installed and set as the default version. A (binary) installation of 'pvserver' is available - for Intel MPI only.
    • The flag '--use-offscreen-rendering' is not needed (and deprecated) in versions 5.6 and newer.

2019-09-03, Temperature issues on some nodes:

  • Investigating user reports of varying job run time/speed, we found that numerous nodes of the CLAIX18 cluster sometimes run into CPU temperature issues and reduce their clock speed. The root cause seems to be a hardware issue; a support ticket has been opened with the vendor. Some 40 nodes with the most clock-down events have been locked out of production.
  • In bad cases, your job could suffer a slow-down of 1.5x or more, possibly running into the time limit.
  • Note that even if a node runs into a (moderate) clock-down mode, your batch job would not necessarily be slowed down by this event (true for network/communication-bound computations).
  • Note that your job could also suffer speed variation from other causes (e.g. file system speed for I/O-intensive jobs, network topology for communication-bound jobs), even if all nodes always run at full speed.
  • Please report 'the good' and 'the bad' job IDs to us if you have seen a great variation in run time (>20%) within the last 4 weeks and/or see this from September 5 on; the jobs must be very comparable (the same or a very similar data set).
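The 20% threshold above can be checked quickly from the elapsed times of two comparable jobs; a minimal sketch (the two example durations are made up, substitute the elapsed times reported by your batch system):

```shell
# Hypothetical elapsed times in seconds of two otherwise comparable jobs
good=3600   # a 'good' run
bad=4500    # a 'bad' run
awk -v g="$good" -v b="$bad" 'BEGIN {
  var = (b - g) / g * 100                      # relative slow-down in percent
  printf "run-time variation: %.0f%%\n", var
  if (var > 20) print "above 20% - worth reporting both job IDs"
}'
```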

2019-09-03, HPC Software News: 

2019-08-23, Intel TBB Revamp notes:

  • Intel is planning to improve the usability and simplicity of Intel’s Threading Building Blocks (TBB) through changes centered around compliance with the latest C++ standards. Intel is evaluating the deprecation and eventual removal of some legacy TBB features that make TBB overly complex and are no longer necessary.  Features under consideration for deprecation are mapped to newer and cleaner ways to obtain the same functionality, as described in the attached documentation.  Please find more details at: 
  • Intel greatly values and appreciates your feedback by early September, as they look to simplify TBB for the future, starting with a release in October 2019.
  • A new book on TBB explaining all the new features developed over the last decade is available: “Pro TBB: C++ Parallel Programming with Threading Building Blocks” by Intel's Michael Voss and external collaborators Rafael Asenjo and James Reinders. It is available as

Previous blog post: IMPORTANT NEWS for users of the RWTH Compute Cluster: Major Operational Changes on May 1, 2019


