Service Description


The IT Center operates high-performance computers to support institutions and employees in education and research.

All machines are integrated into one “RWTH Compute Cluster” running under the Linux operating system.

General information about using the RWTH Compute Cluster is described in this area, whereas information about programming the high-performance computers is described in RWTH Compute Cluster - Parallel Programming.

All members of RWTH Aachen University have free access to the RWTH Compute Cluster, but the amount of resources they can use is limited.

Above a certain threshold, an application for additional resources has to be submitted, which is then reviewed. This application process is also open to external German scientists at institutions related to education and research. Please find related information here.

Please find information about how to get access to the system here.

You can get information about using and programming the RWTH Compute Cluster online on this website, in our Primer, which you can download as a single PDF file for printing, or during our HPC-related events. For many of these events, particularly tutorials, we also collect related material on our website - see here. In addition, the Chair for HPC offers regular lectures, exercises, and software labs covering related topics.

Users of the RWTH Compute Cluster are continuously kept informed through the HPC mailing list (registration, archive).

Maintenance Information


RWTH Service Disruption Notices (Störungsmeldungen)
Job restrictions on the BULL cluster - job size
Notice, valid from Friday 01.12.2017 10:00 until Thursday 01.11.2018 00:00 - Due to problems in the BULL InfiniBand fabric, jobs on the BULL cluster are restricted to one chassis. This means that a) the maximum job size is restricted to 216 cores and b) the maximum number of hosts is restricted to 18. In both cases, the job will be rejected if these limits are exceeded. This does NOT affect the NEC cluster or the Integrative Hosting service!
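
For orientation, a minimal LSF batch script that stays within this one-chassis limit could look like the sketch below. The job name, wall-clock limit, MPI module, and program name are placeholders and not prescribed by the restriction; only the requested core count and the cores-per-host setting reflect the limits above.

    #!/usr/bin/env bash
    #BSUB -J chassis_job          # job name (placeholder)
    #BSUB -n 216                  # at most 216 cores, i.e. one BULL chassis
    #BSUB -R "span[ptile=12]"     # 12 cores per host, i.e. at most 18 hosts
    #BSUB -W 1:00                 # wall-clock limit (placeholder)
    #BSUB -o chassis_job.%J.log   # job output file (%J is the job ID)

    module load openmpi           # assumed MPI module name; adjust to the installed module
    mpiexec -np 216 ./a.out       # launch the MPI program (placeholder binary name)

With span[ptile=12], a 216-core request maps onto exactly 18 hosts, so neither limit is exceeded.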

News


  • 2018-05-17, HPC Software News:
    • ANSYS: Ansys 19.0 has been installed and is accessible via the 'ansys/19.0' module (see the module usage sketch after this news list). Ansys 15.0 and prior have been moved to the DEPRECATED area.
  • 2018-05-15: The following dialog systems are available until the maintenance downtime announced at http://maintenance.rz.rwth-aachen.de/messages/show/14 is finished: cluster, cluster-linux, cluster-x, cluster-x2, cluster-copy, cluster-copy2, login, login2, login-g, login-knl, copy
     
  • 2018-05-07, HPC Software News:
    • The xTB software has been updated to version 20180417.
  • 2018-05-03, HPC Software News  
    • Version 18.4 of the PGI compilers has been installed and is accessible as 'pgi/18.4' module. 
    • Version 16.0.8.266 of the Intel compilers has been installed and is accessible as 'intel/16.0.8.266' module. Note that Intel also refers to this version as 'Update 5' (sic!).
    • Version 17.0.7.259 of the Intel compilers has been installed and is accessible as 'intel/17.0.7.259' module. 
  • 2018-05-02, HPC Software News:
  • 2018-04-19, HPC Software News 
    • Gnuplot version 5.2.2 has been installed and set as the default 'gnuplot' module in the 'MISC' module category.
  •  2018-04-23: BIOS settings of the KNL partition changed
  • 2018-04-20, Intel Software News
    • Version 18.0.1.163 of the Intel compilers, previously known as 'intel/18.0' module, moved to DEPRECATED area. Version 18.0.2.199 became the new 'intel/18.0' module.
    • New BETA release 19.0[.0.046] of the Intel compilers has been installed and is available as the 'intel/19.0b' module.
  • 2018-04-19, HPC Software News
    • The xTB software has been updated to version 20180410. Old version (xtb/20171025) has been moved to DEPRECATED area.
    • Version 18.3 of the PGI compilers has been installed and is accessible as 'pgi/18.3' module.
  • 2018-04-18: Oracle JDK has been upgraded from version 1.8.0_162 to 1.8.0_171
  • 2018-04-17, HPC Software News
    • The default version of Quantum Espresso (QE) has been raised from qe/6.0 to qe/6.2.1. This avoids the following issue:


      ########################################################################################################################
      # FROM IOTK LIBRARY, VERSION 1.2.0
      # UNRECOVERABLE ERROR (ierr=1)
      # ERROR IN: iotk_tag_parse (iotk_misc.f90:999)
      # CVS Revision: 1.39
      # Wrong syntax in tag
      tag=![CDATA[ Generated by new atomic code, or converted to UPF format Author: Generation date: Pseudopotential type: US
      # ERROR IN: iotk_scan (iotk_scan.f90:829)
      # CVS Revision: 1.23
      # direction
      control=2
      # ERROR IN: iotk_scan_end (iotk_scan.f90:241)
      # CVS Revision: 1.23
      # foundl
      ########################################################################################################################

       

  • 2018-04-05, HPC Software News
    • New version of VASP available: 'vasp/5.4.4VTST'. This is the latest known version of VASP; it also contains the VTST patches/tools, and the parameter NMAX_DEG has been raised from 48 to 256.
  • 2018-03-23, HPC Software News
    • Intel Advisor 2018 Update 2 (Build 551025) installed and set to default 'intelaxe' module. Versions 2017-u4 and 2018-u0 moved to DEPRECATED area.
    • Intel Trace Analyzer and Collector  2018 Update 2  installed and set to default 'intelitac' module. Versions 2017.1.024 (previously known as '2017')  and 2018.0.015 (previously known as '2018') moved to DEPRECATED area.
    • Intel Inspector 2018 Update 2 (Build 551023) installed and set to default 'intelixe' module. Versions 2017-u1 and 2018-u0 moved to DEPRECATED area.
    • Intel Distribution for Python* 2.7 2018 Update 2  (2.7.14 aka 2018.2.037) installed and set to default 'pythoni' module. Version 2.7.13 (aka 2018.0.018) moved to DEPRECATED area.
    • Intel Distribution for Python* 3.6 2018 Update 2  (3.6.3 aka 2018.2.037) installed and set to default 'pythoni/3.6' module. Version 3.6.2 (aka 2018.0.018) moved to DEPRECATED area. 
    • Intel Math Kernel Library 2018 Update 2 (2018.2.199) installed and set to default 'intelmkl/2018' module. Version 2018.0.128 moved to DEPRECATED area. 
    • Threading Building Blocks 2018 Update 2 (2018.2.199) installed and set to default 'inteltbb/2018' module. Version 2018.0.128 moved to DEPRECATED area.
    • Intel MPI Library 2018 Update 2 (2018.2.199) installed and is available as 'intelmpi/2018.2' module. 
    • Intel Parallel Studio XE 2018 Update 2 (18.0.2.199) installed and is available as 'intel/18.0.2.199' module.
  • 2018-03-09, HPC Software News:
    •  New LAMMPS Stable version (11Aug17) installed and available as 'lammps/170811' in the CHEMISTRY category.
    • Due to this issue the named version is not available for intel/18.0 compilers.
  • 2018-02-14, LSF News:
    • A new, simplified method for requesting GPUs has been introduced. It allows a job to use only a single GPU, which in turn allows two GPU jobs to run per node.
    • The scheduling simulator should now give more accurate estimates of when your pending jobs will start.
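
All of the software announcements above refer to environment modules. As an illustration only (the module names are taken from the announcements above, and the defaults may have changed since), typical module handling on the cluster looks like this:

    module avail                      # list the installed software modules
    module load ansys/19.0            # load a specific version, e.g. the Ansys release announced above
    module switch intel intel/19.0b   # replace the currently loaded Intel compiler with the 19.0 beta module
    module list                       # show which modules are currently loaded
    module unload ansys/19.0          # remove a module from the environment again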

       

  • Due to technical issues in the BULL InfiniBand fabric, jobs on the BULL cluster are restricted to one chassis. This means that:
    • the maximum job size is restricted to 216 cores (18 nodes x 12 cores),
    • the maximum number of hosts is restricted to 18 hosts; in both cases, the job will be rejected if these limits are exceeded.
    • This does NOT affect the NEC cluster or the Integrative Hosting service (IH)!

Previous blog post: Status RWTH Compute Cluster 2017-12-01
