Service Description


The IT Center operates high-performance computers to support the university's institutions and employees in education and research.

All machines are integrated into one “RWTH Compute Cluster” running under the Linux operating system.

General information about using the RWTH Compute Cluster is described in this area, whereas information about programming the high-performance computers can be found in RWTH Compute Cluster - Parallel Programming.

All members of RWTH Aachen University have free access to the RWTH Compute Cluster, but the amount of resources they can use is limited.

Above a certain threshold, applications for more resources have to be submitted, which are then reviewed. This application process is also open to external German scientists at institutions related to education and research. Please find related information here.

Please find information about how to get access to the system here.

You can find information about using and programming the RWTH Compute Cluster online on this website or during our HPC-related events. For many of these events, particularly tutorials, we also collect related material on our website - see here. In addition, the Chair for HPC regularly offers lectures, exercises and software labs covering related topics.

Users of the RWTH Compute Cluster are continuously kept informed through the HPC mailing list (registration, archive).

Maintenance Information


RWTH Störungsmeldungen (service incident reports)
Incident reports for services of RWTH Aachen University

News


Dear user of the RWTH Compute Cluster,

 

CLAIX-2018 – the new compute cluster recently delivered by NEC – is now gradually being brought into full production mode.

This will greatly increase the compute capacity that is available for RWTH and FZJ scientists.
External scientists from all over Germany will have a share, too.

 

On the other hand, the old BULL-Cluster from 2011 will be decommissioned by April 30, 2019.

 

By then, all users of the BULL-Cluster will have to migrate to CLAIX-2018.
Concerning the software environment, there is one big change you need to be aware of: the LSF batch system will be replaced by a new batch system called “Slurm”. You will therefore need to adapt all of your batch job scripts; a minimal example follows below.
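
To give you a first idea of what this adaptation looks like, here is a minimal sketch of a Slurm job script. It is not an official template – all option values are placeholders that you have to adjust to your own application; the corresponding LSF directives are noted in the comments.

    #!/usr/bin/env bash
    # Slurm reads #SBATCH directives where LSF used #BSUB directives
    #SBATCH --job-name=mytest         # LSF: #BSUB -J mytest
    #SBATCH --output=mytest.%j.log    # job log file, %j = job id (LSF: #BSUB -o)
    #SBATCH --time=00:30:00           # wall clock limit (LSF: #BSUB -W)
    #SBATCH --ntasks=48               # number of MPI ranks (LSF: #BSUB -n)

    # set up the software environment via modules as usual
    module load intel/19.0 intelmpi/2018

    # launch the MPI program; srun is Slurm's generic parallel launcher
    srun ./a.out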

 

This does not yet affect projects running on the JARA-HPC partition. JARA compute projects allocated on CLAIX-2016 will need to switch to Slurm on May 1, 2019; we will provide further information in due time.

 

As a first step that impacts current users of the BULL-Cluster, we are going to stop access to the Lustre file server (HPCWORK), which has repeatedly been causing problems over the last few months.

Users who need access to HPCWORK must migrate to CLAIX-2018 by February 25 in order to keep access to their data.

 

What does it mean to migrate to CLAIX-2018?

Migration is actually quite easy: you just need to log in to one of the login nodes that are part of CLAIX-2018. On these nodes, LSF commands are no longer available, and you have to use Slurm commands to submit and control your batch jobs; the most common command equivalents are listed below.
Access to your HOME, WORK and HPCWORK data remains as usual.
A new Lustre file server that will provide more capacity, bandwidth and (hopefully!) more stability is still under construction – please stay tuned.
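
As a rough orientation (not an exhaustive reference), these are the standard command equivalents of the two batch systems for everyday use; site-specific wrappers and options may differ:

    # submit a job script
    bsub < jobscript.sh     # LSF (old)
    sbatch jobscript.sh     # Slurm (new)

    # list your own jobs
    bjobs                   # LSF
    squeue -u $USER         # Slurm

    # cancel a job
    bkill <jobid>           # LSF
    scancel <jobid>         # Slurm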

 

More information on CLAIX-2018

  • Cluster nodes each provide 48 cores and 192 GB of main memory. The CLAIX-2018 fabric for MPI communication is Omni-Path, as it is for CLAIX-2016.
  • The new cluster does not use the LSF batch system but is operated with Slurm – information about Slurm can be found here …
    https://doc.itc.rwth-aachen.de/display/CC/Using+the+SLURM+Batch+System
    https://doc.itc.rwth-aachen.de/download/attachments/39160017/Slurm%20and%20Modules%20on%20Claix%202018.pdf?version=1&modificationDate=1543833380000&api=v2
  • You must log in to one of the new dialog nodes login18-1.hpc.itc.rwth-aachen.de, login18-2.hpc.itc.rwth-aachen.de, login18-3.hpc.itc.rwth-aachen.de or login18-4.hpc.itc.rwth-aachen.de in order to submit batch jobs to Slurm.
  • Please do not test MPI applications on these login nodes; use batch jobs for your testing instead. The turnaround time of Slurm is expected to be favorable. MPI backend nodes for interactive testing will be provided soon.
  • We recommend recompiling your application in order to profit from the new hardware features of the Skylake processors (a short recompilation sketch follows after this list).
  • We recommend using the new versions of the Intel compilers and the Intel MPI library. These are used automatically when you apply our default modules and environment variables.
    They have been adapted to the new hardware.
    (e.g.: openmpi/1.10.4 → intelmpi/2018 and intel/16.0 → intel/19.0)
    • Old MPI binaries (built for Open MPI) will fail when Intel MPI is loaded and used!
  • HOME, WORK and HPCWORK file systems are accessible as usual.
  • Accounting has not been activated yet, so your jobs will not be billed. (You will probably like this temporary limitation.)
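
As mentioned in the list above, we recommend recompiling. The following sketch shows what this can look like for a simple MPI code; the module names are the ones given above, while the user ID, file names and compiler flags are only illustrative examples – please check the linked Slurm and modules documentation for the exact defaults on CLAIX-2018.

    # log in to one of the new CLAIX-2018 dialog nodes
    # (replace ab123456 with your own user ID)
    ssh -l ab123456 login18-1.hpc.itc.rwth-aachen.de

    # make sure the new default toolchain is loaded
    # (the default environment should already provide it; shown here explicitly)
    module load intel/19.0 intelmpi/2018

    # recompile with the Intel MPI compiler wrapper for C;
    # -xHost enables the Skylake instruction set of the new nodes
    mpiicc -O2 -xHost mysolver.c -o mysolver.exe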

Please carefully document any problems you encounter and, as usual, report them to our ServiceDesk (servicedesk@itc.rwth-aachen.de) so we can fix them.

We also set up a mailing list for the exchange of information between early users of CLAIX-2018.
You are welcome to subscribe at
https://lists.rwth-aachen.de/postorius/lists/claix18-slurm-pilot.lists.rwth-aachen.de/

 

Kind regards,

Dieter an Mey

 

PS: Please note the upcoming events:

Introduction to High-Performance Computing

Monday, February 25, 2019

in Aachen, Germany

https://doc.itc.rwth-aachen.de/display/VE/Introduction+to+High-Performance+Computing+2019

 

and

Parallel Programming in Computational Engineering and Science (PPCES) 2019

Monday, March 11 - Friday, March 15, 2019

in Aachen, Germany

http://www.itc.rwth-aachen.de/ppces

 

 
