Service Description


The IT Center operates high-performance computers to support the university's institutions and employees in education and research.

All machines are integrated into a single “RWTH Compute Cluster” running the Linux operating system.

General information about using the RWTH Compute Cluster is described in this area, whereas information about programming the high-performance computers is described in RWTH Compute Cluster - Parallel Programming.

All members of RWTH Aachen University have free access to the RWTH Compute Cluster, but the amount of resources they can use is limited.

Above a certain threshold, applications for additional resources have to be submitted and are then reviewed. This application process is also open to external German scientists at institutions related to education and research. Please find related information here.

Please find information about how to get access to the system here.

You can get information about using and programming the RWTH Compute Cluster online on this website, in our Primer (available as a single PDF file for printing), or during our HPC-related events. For many of these events, particularly tutorials, we collect related material on our website as well - see here. In addition, the Chair for HPC offers regular lectures, exercises, and software labs covering related topics.

Users of the RWTH Compute Cluster are kept informed through the HPC mailing list (registration, archive).

Maintenance Information


RWTH Störungsmeldungen (incident reports)
Access to $HPCWORK slow
Partial disruption from Monday, 2017-11-20 07:30 until unknown - Currently, access to $HPCWORK may be very slow. The vendor has been notified and is working on the problem.

News


  • Important: On CLAIX nodes, for all MPI codes using ScaLAPACK (especially from Intel MKL), we strongly recommend switching to Intel MPI instead of Open MPI, or avoiding ScaLAPACK versions of applications. Background: multiple performance issues with the combination 'Open MPI + ScaLAPACK + Intel Omni-Path network' (some worked around, some still under investigation). A minimal sketch for checking which MPI implementation a binary actually runs under is given after the news list.
    We are evaluating switching to Intel MPI as the default MPI installation in our cluster (Open MPI will remain usable).
  • Important: we switched the recommended linking mode for Intel MKL from the Intel default ('threaded') to 'sequential'. On HPC systems you typically use the multiple cores explicitly via MPI and/or OpenMP, and an additional third level of parallelism inside a library then typically runs with a single thread (our default). Omitting the 'threaded' overhead in Intel MKL allows for better runtimes and fewer errors (e.g. with 'sequential' MKL you can use MKL with GCC compilers). Users are still free to link Intel MKL in the threaded version, of course - see the linking sketch after the news list.
  • 2017-11-21: bug fixed in 'foamExec' script in Foam-Extend (v4.0, module: openfoam/extend4.0)
  • 2017-11-17, HPC Software News:
    • New version 18.0.1.163 of the Intel compiler installed and set as the default intel/18.0 module. The older version 18.0.0.128 was moved to the DEPRECATED area.
    • New version 'Update 1' (Build 535159) of the Intel Inspector installed and set to be the default 'intelixe' module.
    • New version 'Update 1' (Build 535164) of the Intel Advisor installed and set to be the default 'intelaxe' module.
    • New version 2018.1.163 of the Intel MPI installed. However, the 2018 version of Intel MPI is known to have a performance issue (at least on the InfiniBand network), so please avoid using this version without our advice (or use it at your own risk).
    • New version 2017.3.8 of the TotalView debugger installed and set to be the default 'totalview' module.
  • 2017-11-08, HPC Software News:
    • Matlab 2017b is now globally available on the cluster. It has also been set as the new default module.
    • TurboMole X 4.3.0 has been installed.
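
The following is a minimal sketch, not an official IT Center example: a small C program that reports which MPI library it actually runs under, which can be useful when moving a ScaLAPACK build from Open MPI to Intel MPI as recommended above. The module names and the build/run commands in the comments are assumptions; please consult the Primer for the authoritative ones.

    /* mpi_check.c - report the MPI implementation a job runs under.
     * Hypothetical build/run, assuming environment modules named
     * 'openmpi' and 'intelmpi':
     *   module switch openmpi intelmpi
     *   mpicc mpi_check.c -o mpi_check
     *   mpiexec -n 4 ./mpi_check
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, len;
        char lib[MPI_MAX_LIBRARY_VERSION_STRING];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* e.g. "Intel(R) MPI Library ..." or "Open MPI ..." */
        MPI_Get_library_version(lib, &len);
        if (rank == 0)
            printf("%d ranks, MPI library: %s\n", size, lib);
        MPI_Finalize();
        return 0;
    }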
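
As a hedged illustration of the 'sequential' versus 'threaded' MKL linking modes mentioned above, the following small C program calls an MKL BLAS routine; the link lines in the comments follow the usual MKL link-line patterns but are assumptions - the exact libraries depend on the MKL version and compiler, so the Intel MKL Link Line Advisor should be consulted.

    /* mkl_dgemm.c - 2x2 matrix multiply via Intel MKL CBLAS.
     * Sequential MKL (our recommended default), hypothetical link line:
     *   icc mkl_dgemm.c -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
     * Threaded MKL (previous Intel default), hypothetical link line:
     *   icc mkl_dgemm.c -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm
     */
    #include <stdio.h>
    #include <mkl.h>

    int main(void) {
        double A[4] = {1.0, 2.0, 3.0, 4.0};
        double B[4] = {5.0, 6.0, 7.0, 8.0};
        double C[4] = {0.0, 0.0, 0.0, 0.0};

        /* C = 1.0 * A * B + 0.0 * C for 2x2 row-major matrices */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

        printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }

With the sequential MKL the multiplication runs on a single thread, which matches the recommendation above when MPI and/or OpenMP already occupy all cores of a node.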

Previous blog posts: Status RWTH Compute Cluster 2017-08-09
