Service Description


The IT Center operates high-performance computers to support the institutions and employees of the university in education and research.

All machines are integrated into a single “RWTH Compute Cluster” running the Linux operating system.

General information about using the RWTH Compute Cluster is described in this area, whereas information about programming the high-performance computers can be found in RWTH Compute Cluster - Parallel Programming.

All members of RWTH Aachen University have free access to the RWTH Compute Cluster; however, the amount of resources they can use is limited.

Above a certain threshold, applications for additional resources must be submitted, which are then reviewed. This application process is also open to external German scientists at institutions related to education and research. Please find related information here.

Please find information about how to get access to the system here.

You can get information about using and programming the RWTH Compute Cluster online on this website, in our Primer, which can be downloaded as a single PDF file for printing, or during our HPC-related Events. For many of these Events, particularly tutorials, we also collect related material on our website - see here. In addition, the Chair for HPC offers regular lectures, exercises, and software labs covering related topics.

Users of the RWTH Compute Cluster are kept informed through the HPC mailing list (registration, archive).

Maintenance Information


News


  • 2018-07-25, HPC Software News: 
    • GCC compiler version 8.2.0 has been installed and set as the 'gcc/8' module. Version 8.1.0, previously known by this name, will be moved to the DEPRECATED area soon.
  • 2018-07-31: version upgrade of Oracle JDK: 1.8.0_171 → 1.8.0_181
  • 2018-07-25, HPC Software News: 
    • The Clang compiler module now defaults to version 6.0[.1] instead of 5.0. Version 'clang/4.0' has been moved to the DEPRECATED area.
  • 2018-07-23, HPC Software News: 
    • HDF5 library installed in versions 1.8.20 and 1.10.2. Please note that these versions are not ABI-compatible: HDF5-1.10 can read files created with earlier releases, but earlier releases such as HDF5-1.8 may not be able to read HDF5-1.10 files. A build sketch is given after the news list.
  • 2018-07-19, HPC Software News: 
    • FDS version 6.7.0 (binary distribution) has been installed and set as the default. Note that unlike older versions, this one supports Intel MPI. Please use this version of FDS and stop using the old versions!
  • 2018-07-17, HPC Software News: 
    • Intel MPI version 2018.3[.222] has been installed and may be used under the name 'intelmpi/2018.3'. Note that this version is not well suited for the old Bull (InfiniBand fabric) cluster.
    • Intel MPI version 2019 BETA Update 1 (aka 2019.0.070) has been installed and may be used under the name 'intelmpi/2019.0b'.
      • Version 2019.0.046b, previously known by this name, has been moved to the DEPRECATED area.
      • The module file for this version has been rewritten.
      • The MPIEXEC wrapper has been modified to support this version, so use $MPIEXEC or $MPITEST or 'mpitest' - 'mpirun' won't work for 'intelmpi/2019.0b' [yet]. See the sketch below this item.
    • Older Intel MPI versions 2017.1, 2018.0, 2018.1, and 2018.2 have also been moved to the DEPRECATED area.
    • Intel compilers 16.0.4.258, 17.0.6.256 (previously known as intel/17.0), 18.0.2.199 (previously known as intel/18.0), and intel/19.0.0.046b (BETA) have been moved to the DEPRECATED area.
      • Intel compiler version 17.0.7.259 is now the 'intel/17.0' module.
      • Intel compiler version 18.0.3.222 is now the 'intel/18.0' module.
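
As a usage illustration for the new 'intelmpi/2019.0b' module: the following is a minimal sketch assuming the cluster's module environment and an already built MPI binary (the name a.out and the process count are placeholders); the module and wrapper names are taken from the item above.

    $ module switch intelmpi intelmpi/2019.0b   # swap the loaded Intel MPI for the 2019 beta
    $ $MPIEXEC -np 4 ./a.out                    # launch through the modified wrapper; plain 'mpirun' won't work [yet]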
       
  • Due to technical issues in the BULL InfiniBand fabric, jobs on the BULL cluster are restricted to one chassis. This means that
    • the maximum job size is restricted to 216 cores (18 nodes x 12 cores), and
    • the maximum number of hosts is restricted to 18. Jobs violating either limit will be rejected. A job-script sketch is given below.
    • This does NOT affect the NEC cluster or the Integrative Hosting service (IH)!
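
To stay within these limits, a batch job on the BULL cluster should request at most 216 cores spread over at most 18 hosts. A minimal job-script sketch, assuming the LSF batch system (bsub) and its span[ptile=...] resource string; the binary name a.out is a placeholder:

    #BSUB -n 216                 # request at most 216 cores in total
    #BSUB -R "span[ptile=12]"    # 12 processes per host, i.e. at most 18 hosts
    $MPIEXEC -np 216 ./a.out     # launch via the cluster's MPIEXEC wrapper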
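Regarding the HDF5 note above: if files produced on the cluster must remain readable by HDF5-1.8 consumers, the simplest route is to build and run against the 1.8.20 installation. A minimal sketch - the module name is an assumption (check 'module avail' for the actual name), and h5cc is HDF5's standard compile wrapper:

    $ module load hdf5/1.8.20    # assumed module name
    $ h5cc -o writer writer.c    # compile and link against HDF5 1.8.20
    $ ./writer                   # files written now stay readable by HDF5-1.8 tools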

Previous blog post: Status RWTH Compute Cluster 2018-03-09
