Service Description


The IT Center operates high-performance computers in order to support institutions and employees in education and research.

All machines are integrated into one “RWTH Compute Cluster” running under the Linux operating system.

General information about using the RWTH Compute Cluster is described in this area, whereas information about programming the high-performance computers is described in RWTH Compute Cluster - Parallel Programming.

All members of RWTH Aachen University have free access to the RWTH Compute Cluster, but the amount of resources they can use is limited.

Above a certain threshold, an application for additional resources has to be submitted, which is then reviewed. This application process is also open to external German scientists at institutions related to education and research. Please find related information here.

Please find information about how to get access to the system here.

You can get information about using and programming the RWTH Compute Cluster online on this website, in our Primer (which you can download as a single PDF file for printing), or during our HPC-related Events. For many of these Events, particularly tutorials, we also collect related material on our website - see here. In addition, the Chair for HPC offers regular lectures, exercises, and software labs covering related topics.

Users of the RWTH Compute Cluster are kept informed through the HPC mailing list (registration, archive).

Maintenance Information


News


  • Important (13.09.2017): Due to an acute vulnerability in the Emacs editor, we have to uninstall it temporarily. As soon as a fixed version is available, it will be reinstalled.
  • Important: On CLAIX nodes, for all MPI codes using ScaLAPACK (especially the one from Intel MKL), we strongly recommend switching from Open MPI to Intel MPI, or avoiding the ScaLAPACK versions of the applications; see the Intel MPI sketch below the news list. Background: there are multiple performance issues with the combination 'Open MPI + ScaLAPACK + Intel Omni-Path network' (some have been worked around, some are still under investigation).
    We are evaluating making Intel MPI the default MPI installation in our cluster (Open MPI will remain usable).
  • Important: We have switched the recommended linking mode for Intel MKL from the Intel default ('threaded') to 'sequential'. On HPC systems you typically use the available cores explicitly via MPI and/or OpenMP, so an additional third level of parallelism inside a library very typically runs with a single thread (our default). Omitting the 'threaded' overhead in Intel MKL allows for better runtimes and fewer errors (e.g. with 'sequential' MKL you can use MKL with the GCC compilers); example link lines are sketched below the news list. Users are of course still free to link Intel MKL in its threaded version.
  • 2017-09-15, HPC Software News: 
    • VASP News:
      • New version 5.4.4 of the VASP software has been installed and is available as the 'vasp/5.4.4' module in the CHEMISTRY category
      • We recommend using the Intel MPI version of VASP (cf. the Important message above)
      • We have disabled the Open MPI + ScaLAPACK versions of VASP (all installations!) due to a known performance issue (3x-4x slowdown on the CLAIX cluster). Use either the plain (non-ScaLAPACK) version or, preferably, the Intel MPI version of VASP.
    • New release 18.0(.0.128) of the Intel compiler (including the Intel Performance Libraries) has been installed and is available as the 'intel/18.0' module. Please test your applications with this compiler, as it will likely become the default Intel compiler version soon; a short module sketch follows below the news list. Previous 18.0-BETA releases have been moved to the DEPRECATED area.
    • New release 2018 (Build 523188) of the Intel Advisor tool has been installed and set as the default 'intelaxe' module. (This module also brings along an installation of the Flow Graph Analyzer.) Versions 2017-u2, 2017-u5, and 2018b-u1 have been moved to the DEPRECATED area.
    • New release 2018 (Build 522981) of the Intel Inspector tool has been installed and set as the default 'intelixe' module. Version 2018b-u0 has been moved to the DEPRECATED area.
    • New release 2018(.0.015) of the Intel ITAC tool has been installed and set as the default 'intelitac' module. Version 2018b has been moved to the DEPRECATED area.
    • New release 2018(.0.018) of Intel Python 2 (2.7) has been installed and set as the default 'pythoni' module. Version 2.7 (2017.0.035) has been moved to the DEPRECATED area.
    • New release 2018(.0.018) of Intel Python 3 has been installed and is available as the 'pythoni/3.6' module. Version 3.5 (2017.0.035) has been moved to the DEPRECATED area.
    • New release 2018(.0.128) of the Intel TBB library (for the GCC and PGI compilers) has been installed and set as the default 'inteltbb' module in the LIBRARIES category. This version no longer supports the Intel MIC architecture. This module is not intended for use with the Intel compilers, as these already include a TBB release in their installation. Versions 2017(.0.098, the previous default) and 4.4(.4.210) have been moved to the DEPRECATED area.
    • New release 2018(.0.128) of the Intel MKL library (for the GCC and PGI compilers) has been installed and is available as the 'intelmkl/2018' module in the LIBRARIES category. This version no longer supports the Intel MIC architecture. This module is not intended for use with the Intel compilers, as these already include an MKL release in their installation. Version 10.2(.6.038) has been moved to the DEPRECATED area.
    • New release 2018(.0.128) of the Intel MPI library has been installed and is available as the 'intelmpi/2018' module. Please test your applications with this MPI library, as it will likely become the default Intel MPI version soon. Versions 2017(.0.098)[mic] and 2017(.1.132)mic have been moved to the DEPRECATED area.
  • 2017-09-04, HPC Software News: 
    • Version 4.0.1 of the siesta software has been reinstalled, now also built with the intel/17 and GCC compilers. (This software uses ScaLAPACK and should be used with Intel MPI; cf. siesta#5.FAQ/KnownIssues)
  • 2017-08-28:
    • The FastX2 server software has been upgraded on all dialog systems.
    • New software installed: foam-extend version 4.0. This is a fork of the OpenFOAM software, available as the 'openfoam/extend4.0' module after loading the TECHNICS category; see the short sketch below the news list.
  • 2017-08-23:
    • In order to reduce the run time of our tape backup, the following sub-directories are excluded from the backup from now on:
      • ~/.cache
      • ~/.comsol/*/.configuration
  • 2017-08-22, HPC Software News: 
    • New version 7.0.6 of Allinea Forge (DDT+MAP) has been installed and set as the default 'ddt' module
    • New version 7.0.6 of Allinea Reports has been installed and set as the default 'reports' module
  • 2017-08-18, HPC Software News: 
    • New version 2017.2.11 of the TotalView debugger installed
  • 2017-08-15, HPC Software News: 
    • New version 7.2.0 of the GCC compiler has been installed and is now available as the 'gcc/7' module. Previous version 7.1.0 has been moved to the DEPRECATED area.
  • 2017-08-14:
    • On cluster-x2, the FastX2 server software has been upgraded to a new version. Please let us know if you encounter any problems.
  • 2017-08-09, HPC Software News:  
    • New release 4.3.0 of the Likwid tool has been installed
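
As an illustration of the Intel MPI recommendation above, the following sketch shows how an MPI code using ScaLAPACK could be moved from Open MPI to Intel MPI. The module names, the compiler wrapper, and the MKL link line are assumptions for illustration only; please check 'module avail' and the Intel MKL Link Line Advisor for the settings that match your installation.

    # Sketch only: replace the currently loaded Open MPI with Intel MPI
    # (module names are examples; verify them with 'module avail').
    module switch openmpi intelmpi

    # Rebuild the ScaLAPACK code with the Intel MPI Fortran wrapper and the
    # MKL ScaLAPACK/BLACS libraries for Intel MPI (illustrative link line;
    # library search paths are assumed to be provided by the loaded modules).
    mpiifort -O2 -o solver solver.f90 \
        -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 \
        -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm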
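
To illustrate the 'sequential' MKL recommendation above, the link lines below are a sketch only; the exact library list depends on your MKL version and interface, so please cross-check with the Intel MKL Link Line Advisor.

    # Intel compilers: select the sequential MKL flavour directly.
    icc -O2 -o app app.c -mkl=sequential

    # GCC: link the sequential MKL threading layer explicitly
    # (assumes the MKL module has set the library search paths).
    gcc -O2 -o app app.c -m64 \
        -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl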
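
The packages from the 2017-09-15 news follow the category-based module layout quoted in the items above. A minimal sketch of picking them up in a shell session (category and module names as given in the news; the exact switch syntax may differ depending on which modules are already loaded):

    # VASP 5.4.4 from the CHEMISTRY category
    module load CHEMISTRY
    module load vasp/5.4.4

    # Test the new Intel tool chain before it becomes the default
    # (assumes older 'intel' and 'intelmpi' modules are currently loaded).
    module switch intel intel/18.0
    module switch intelmpi intelmpi/2018

    # MKL 2018 for GCC or PGI builds lives in the LIBRARIES category
    module load LIBRARIES
    module load intelmkl/2018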
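
Likewise, the foam-extend installation from the 2017-08-28 news can be picked up after loading its category (a sketch, with the same caveats as above):

    # foam-extend 4.0, a fork of OpenFOAM, from the TECHNICS category
    module load TECHNICS
    module load openfoam/extend4.0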



Previous blog posts: low queue disabled 2017-07-28,   Status RWTH Compute Cluster 2017-07-06
