Service Description


The IT Center operates high-performance computers to support the university's institutions and employees in education and research.

All machines are integrated into one “RWTH Compute Cluster” running under the Linux operating system.

This area covers general information about using the RWTH Compute Cluster, whereas information about programming the high-performance computers can be found under RWTH Compute Cluster - Parallel Programming.

All members of RWTH Aachen University have free access to the RWTH Compute Cluster, but the amount of resources they can use is limited.

Above a certain threshold, applications for additional resources have to be submitted, which are then reviewed. This application process is also open to external German scientists at institutions related to education and research. Please find related information here.

Please find information about how to get access to the system here.

You can get information about using and programming the RWTH Compute Cluster online on this website or during our HPC-related events. For many of these events, particularly tutorials, we collect related material on our website as well - see here. In addition, the Chair for HPC offers regular lectures, exercises, and software labs covering related topics.

Users of the RWTH Compute Cluster are continuously informed through the HPC mailing list (registration, archive).

Maintenance Information


RWTH Störungsmeldungen (RWTH fault reports)
Fault reports for services of RWTH Aachen
Compute cluster - Reconstruction of two dialog systems - login18-x-1, login18-g-1
Change from Monday 2019-09-23 12:00 until Monday 2019-09-23 13:00 - Due to upcoming reconstruction work, the dialog systems login18-x-1 and login18-g-1 will not be available during the stated period. Please use the dialog systems login18-x-2 and login18-g-2 in the meantime.
Compute cluster - Reconstruction of the GPU servers of CLAIX2018, Tier3 and IH - ncg*, nrg*, nihg*, login18-x-2 and login18-g-2
Change from Monday 2019-09-23 13:00 until Friday 2019-09-27 10:00 - Due to upcoming reconstruction work, the GPU systems of CLAIX2018 and the Tier3 cluster as well as the dialog systems login18-x-2 and login18-g-2 will not be available during the stated period. Please use the dialog systems login18-x-1 and login18-g-1 in the meantime.
Compute cluster - Reconstruction of two dialog systems - login18-x-1, login18-g-1
Change from Friday 2019-09-27 10:00 until Friday 2019-09-27 12:00 - Due to upcoming reconstruction work, the dialog systems login18-x-1 and login18-g-1 will not be available during the stated period. Please use the dialog systems login18-x-2 and login18-g-2 in the meantime.

News


2019-09-17, HPC Software News: 

  • Intel compiler version 19.0.5.281 installed
  • Intel MPI version 2019.5.281 installed; it is now available as intelmpi/2019.5
  • Intel TBB version 2019.8.281 installed and set as the default inteltbb module instead of inteltbb/2018. Note that this version of Intel TBB is not meant to be used with the Intel compilers (they each include a current version of Intel TBB); see the module sketch below this list.
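
As a rough sketch, the announcements above might translate into module commands like the following (the gcc module name is an assumption; intelmpi/2019.5 and inteltbb are the module names announced above):

    # Switch to the newly announced Intel MPI version
    module switch intelmpi intelmpi/2019.5
    # The standalone inteltbb module is intended for non-Intel compilers,
    # e.g. GCC (module name assumed); Intel compilers bundle their own TBB
    module load gcc
    module load inteltbb
    # Verify the resulting environment
    module list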

2019-09-13, HPC Software News: 

  • ParaView installation revamped:
    • version 5.4.1 (the old default) moves to the DEPRECATED area. This version needs Intel MPI version 5.x, which is itself DEPRECATED in the current environment, and the MPI-parallelized 'pvserver' with Intel MPI runs on localhost only.
    • version 5.7.0-RC3 installed and set as the default version. A (binary) installation of 'pvserver' is available - for Intel MPI only.
    • the flag '--use-offscreen-rendering' is not needed (and is deprecated) in versions 5.6 and newer; see the launch sketch below this list.
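
A hedged sketch of launching the new default 'pvserver' in client/server mode (the module name, rank count, and launcher call are assumptions; adapt them to the actual installation):

    # Load the new default ParaView version (module name assumed)
    module load paraview
    # Start the MPI-parallel pvserver with Intel MPI on, e.g., 4 ranks;
    # no '--use-offscreen-rendering' flag is needed with 5.6 and newer
    mpiexec -np 4 pvserver
    # Afterwards, connect the ParaView GUI client to this server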

2019-09-03, Temperature issues on some nodes:

  • While investigating user reports of varying job run times, we found that numerous nodes of the CLAIX2018 cluster sometimes run into CPU temperature issues and reduce their clock speed. The root cause seems to be a hardware issue; a support ticket has been opened with the vendor. Some 40 nodes with the most clock-down events have been taken out of production.
  • In bad cases, your job could suffer a slow-down of 1.5x or more, possibly running into its time limit.
  • Note that even if a node runs into a (moderate) clock-down mode, your batch job is not necessarily slowed down by it (this is true for network/communication-bound computations).
  • Note that your job could also show run-time variation from other causes (e.g. file system speed for I/O-intensive jobs, network topology for communication-bound jobs) even if all nodes always run at full speed.
  • Please report 'good' and 'bad' job IDs to us if you have seen a large variation in run time (>20%) within the last four weeks and/or observe this from September 5 on; the jobs must be very comparable (the same or a very similar data set). See the sketch after this list for one way to compare job run times.
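
As one way to collect this information, the following hedged sketch (assuming the Slurm batch system; the job IDs are placeholders) compares the elapsed times and node placement of a 'good' and a 'bad' job:

    # Compare run time and node placement of two comparable jobs
    # (the job IDs below are examples only)
    sacct -j 1234567,1234999 --format=JobID,JobName,Elapsed,State,NodeList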


2019-08-23, Intel TBB Revamp notes:

  • Intel is planning to improve the usability and simplicity of Intel's Threading Building Blocks (TBB) through changes centered around compliance with the latest C++ standards. Intel is evaluating the deprecation and eventual removal of some legacy TBB features that make TBB overly complex and are no longer necessary. Features under consideration for deprecation are mapped to newer and cleaner ways to obtain the same functionality, as described in the attached documentation. Please find more details at:
  • Intel greatly values and appreciates your feedback by early September, as they look to simplify TBB for the future, starting with a release in October 2019.
  • A new book on TBB explaining all the new features developed over the last decade is available: “Pro TBB: C++ Parallel Programming with Threading Building Blocks” by Intel's Michael Voss and external collaborators Rafael Asenjo and James Reinders. It is available as

Previous blog post: IMPORTANT NEWS for users of the RWTH Compute Cluster: Major Operational Changes on May 1, 2019
