The IT Center operates high-performance computers to support the university's institutions and employees in education and research.
All machines are integrated into one “RWTH Compute Cluster” running under the Linux operating system.
General information about the usage of the RWTH Compute Cluster is described in this area, whereas information about programming the high-performance computers is described in RWTH Compute Cluster - Parallel Programming.
All members of RWTH Aachen University have free access to the RWTH Compute Cluster, but the amount of resources they can use is limited.
Above a certain threshold, an application for additional resources has to be submitted and reviewed. This application process is also open to external German scientists at institutions related to education and research. Please find related information here.
Please find information about how to get access to the system here.
You can get information about using and programming the RWTH Compute Cluster online on this website, in our Primer, which can be downloaded as a single PDF file for printing, or during our HPC-related events. For many of these events, particularly tutorials, we also collect related material on our website - see here. In addition, the Chair for HPC offers regular lectures, exercises and software labs covering related topics.
- Important (13.09.2017): Due to an acute vulnerability in the Emacs editor, we have to uninstall it temporarily. As soon as a fixed version is available, it will be reinstalled.
- Important: On CLAIX nodes, for all MPI codes using ScaLAPACK (especially from Intel MKL), we strongly recommend switching to Intel MPI instead of Open MPI, or avoiding the ScaLAPACK versions of applications altogether. Background: there are multiple performance issues with the combination 'Open MPI + ScaLAPACK + Intel Omni-Path network' (some have been worked around, some are still under investigation).
We are evaluating switching to Intel MPI as the default MPI installation in our cluster (Open MPI will stay usable).
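On a module-based cluster the switch can typically be done per session or per job script; a minimal sketch, assuming the modules are named 'openmpi' and 'intelmpi' (verify the actual names with 'module avail'):

```shell
# Replace the currently loaded Open MPI with Intel MPI
# (module names are assumptions; verify with 'module avail'):
module switch openmpi intelmpi

# Check which MPI is now active:
module list
mpiexec -version
```

After switching, rebuild your application against the new MPI before submitting jobs, since MPI libraries are generally not binary compatible.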
- Important: We switched the recommended linking mode for Intel MKL from the Intel default ('threaded') to 'sequential'. On HPC systems you typically use the multiple cores explicitly via MPI and/or OpenMP, and an additional third level of parallelism inside a library very typically runs with a single thread (our default). Omitting the 'threaded' overhead in Intel MKL allows for better runtimes and fewer errors (e.g. with the 'sequential' MKL you can use MKL with the GCC compilers). Users are still free to link Intel MKL in the threaded version, of course.
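For illustration, this is how the two linking modes typically look on the command line; the source file name is a placeholder, and the exact MKL library list depends on the installed MKL version (Intel's MKL Link Line Advisor gives the authoritative line for your setup):

```shell
# Intel compiler: select the sequential MKL layer at link time
icc -O2 myprog.c -mkl=sequential -o myprog
# (the threaded variant would be: -mkl=parallel)

# GCC: link the sequential MKL libraries explicitly
# (LP64 interface assumed; library paths come from the loaded MKL module)
gcc -O2 myprog.c -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -o myprog
```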
- 2017-09-15, HPC Software News:
- VASP News:
- New version 5.4.4 of the VASP software has been installed and is available as 'vasp/5.4.4' in the CHEMISTRY category
- We recommend using the Intel MPI version of VASP (cf. the Important message above)
- We disabled the Open MPI + ScaLAPACK versions of VASP (all installations!) due to a known performance issue (3x-4x slowdown on the CLAIX cluster). Use either the normal (non-ScaLAPACK) version or, preferably, the Intel MPI version of VASP.
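A minimal sketch of loading the new version, assuming the usual two-step category/module layout (the names are taken from the announcement above; verify with 'module avail'):

```shell
# Make the CHEMISTRY software category visible, then load VASP 5.4.4:
module load CHEMISTRY
module load vasp/5.4.4
```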
- New release 18.0(.0.128) of the Intel compiler (including the Intel Performance Libraries) has been installed and is available as the 'intel/18.0' module. Please test your applications with this compiler, as it will likely become the default Intel compiler version soon. The previous 18.0-BETA releases have been moved to the DEPRECATED area.
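A quick way to test against the new compiler is to swap the loaded module and rebuild; a sketch, assuming the default compiler module is named 'intel':

```shell
# Swap the currently loaded Intel compiler for the new 18.0 release:
module switch intel intel/18.0

# Confirm the version, then rebuild and rerun your application:
icc -V
```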
- New release 2018 (Build 523188) of the Intel Advisor tool has been installed and set as the default 'intelaxe' module. (This module also brings along an installation of the Flow Graph Analyzer.) Versions 2017-u2, 2017-u5 and 2018b-u1 have been moved to the DEPRECATED area.
- New release 2018 (Build 522981) of the Intel Inspector tool has been installed and set as the default 'intelixe' module. Version 2018b-u0 has been moved to the DEPRECATED area.
- New release 2018(.0.015) of the Intel ITAC tool has been installed and set as the default 'intelitac' module. Version 2018b has been moved to the DEPRECATED area.
- New release 2018(.0.018) of Intel Python 2 (2.7) has been installed and set as the default 'pythoni' module. Version 2.7 (2017.0.035) has been moved to the DEPRECATED area.
- New release 2018(.0.018) of Intel Python 3 has been installed and is available as the 'pythoni/3.6' module. Version 3.5 (2017.0.035) has been moved to the DEPRECATED area.
- New release 2018(.0.128) of the Intel TBB library (for the GCC and PGI compilers) has been installed.