This page gives an overview of the hardware available in the Linux part of the RWTH Compute Cluster.


Abbreviations and conventions:

  • p.N. - per Node
  • All CPU codenames refer to Intel CPUs unless stated otherwise.
  • '#Cores per Node' counts physical cores, not the Hyper-Threading 'CPUs' reported by the operating system. By default, Hyper-Threading is OFF on our nodes; the few systems with HT=ON are marked accordingly in the remarks. A small sketch for checking this on a node follows this list.
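
The following minimal Python sketch (our own illustration, not part of the cluster software) shows one way to compare the logical CPU count reported by the operating system with the number of physical cores, e.g. to see whether Hyper-Threading is active on a node. It assumes a Linux system with /proc/cpuinfo; the helper function name is ours.

```python
# Sketch: compare logical CPUs (as reported by the OS) with physical cores.
# Assumes a Linux system with /proc/cpuinfo; physical_core_count() is our own helper.
import os

def physical_core_count(cpuinfo_path="/proc/cpuinfo"):
    """Count unique (physical id, core id) pairs listed in /proc/cpuinfo."""
    cores = set()
    physical_id = core_id = None
    with open(cpuinfo_path) as f:
        for line in f:
            key, _, value = line.partition(":")
            key = key.strip()
            if key == "physical id":
                physical_id = value.strip()
            elif key == "core id":
                core_id = value.strip()
            elif key == "":  # a blank line ends one processor block
                if physical_id is not None and core_id is not None:
                    cores.add((physical_id, core_id))
                physical_id = core_id = None
    if physical_id is not None and core_id is not None:  # last block without trailing blank line
        cores.add((physical_id, core_id))
    return len(cores)

logical = os.cpu_count()           # logical 'CPUs', including Hyper-Threading siblings
physical = physical_core_count()   # physical cores only
print(f"logical CPUs: {logical}, physical cores: {physical}")
print("Hyper-Threading appears to be", "ON" if logical > physical else "OFF")
```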

Dialog systems


The dialog systems are provided for interactive logins. They should be used for programming, debugging, and for preparing and post-processing batch jobs; they are not intended for production runs. Processes that have consumed more than 20 minutes of CPU time may be killed without warning.
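
As a rough illustration (our own example, not an official tool; the only figure taken from this page is the 20-minute limit above), the following Python sketch shows how a process can check its own consumed CPU time with the standard resource module:

```python
# Sketch: report the CPU time this process has consumed so far.
# The 20-minute figure mirrors the informal limit mentioned above.
import resource

LIMIT_SECONDS = 20 * 60

def consumed_cpu_seconds():
    """User + system CPU time of the current process (child processes not included)."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_utime + usage.ru_stime

used = consumed_cpu_seconds()
print(f"CPU time used so far: {used:.1f} s (informal limit: {LIMIT_SECONDS} s)")
if used > 0.8 * LIMIT_SECONDS:
    print("Consider moving this work into a batch job.")
```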


For maintenance reasons all dialog systems are rebooted each Monday morning at 6 am.


| Hostname | Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | #Sockets per Node | #Cores per Socket | #Cores per Node | Memory per Node [GB] | Beginning of operation | Remarks |
|---|---|---|---|---|---|---|---|---|---|---|
| login.hpc.itc.rwth-aachen.de | NEC HPC128Rg-2 | Broadwell EP | E5-2695v4 | 2.1 | 2 | 18 | 36 | 256 | November 2016 | Main dialog system for CLAIX2016, Hyperthreading ON (2x) |
| login2.hpc.itc.rwth-aachen.de | NEC HPC128Rg-2 | Broadwell EP | E5-2695v4 | 2.1 | 2 | 18 | 36 | 256 | November 2016 | Hyperthreading ON (2x) |
| login18-1.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | Main dialog system for CLAIX2018 |
| login18-2.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | |
| login18-3.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | |
| login18-4.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | |
| login18-x-1.hpc.itc.rwth-aachen.de | supermicro-1029GP-TR | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | Dedicated to remote desktop sessions |
| login18-x-2.hpc.itc.rwth-aachen.de | supermicro-1029GP-TR | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | Dedicated to remote desktop sessions |
| login-g.hpc.itc.rwth-aachen.de | NEC-GPS12G3Rg-1, 2 x Tesla P100 | Broadwell EP | E5-2650v4 | 2.2 | 2 | 12 | 24 | 128 | November 2016 | GPU Cluster |
| login18-g-1.hpc.itc.rwth-aachen.de | supermicro-1029GP-TR, 2 x Tesla V100 | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | GPU Cluster |
| login18-g-2.hpc.itc.rwth-aachen.de | supermicro-1029GP-TR, 2 x Tesla V100 | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | GPU Cluster |
| login-t.hpc.itc.rwth-aachen.de | NEC-HPC1812Rg-2 | Broadwell EP | E5-2650v4 | 2.2 | 2 | 12 | 24 | 128 | November 2016 | Dedicated to tuning purposes using VTune or Likwid |
| login18-t.hpc.itc.rwth-aachen.de | Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 192 | April 2019 | Dedicated to tuning purposes using VTune or Likwid |
| login-knl.hpc.itc.rwth-aachen.de | NEC | Xeon Phi | 7210 | 1.3 | 1 | 64 | 64 | 196 | November 2017 | Intel KNL system, Hyperthreading ON (4x) |
| lect.hpc.itc.rwth-aachen.de | Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 192 | February 2019 | Dedicated to lectures (registration required) |
| lect2.hpc.itc.rwth-aachen.de | Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 192 | February 2019 | Dedicated to lectures (registration required) |
| linuxnvc01.rz.rwth-aachen.de | Bullx R421-E3, 2 x NVIDIA Tesla K20Xm | Sandy Bridge EP | E5-2650 | 2.0 | 2 | 8 | 16 | 64 | June 2013 | GPU Cluster |


Data transfer nodes


The following systems are dedicated to (remote and local) file transfers:

| Hostname | Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | #Sockets per Node | #Cores per Socket | #Cores per Node | Memory per Node [GB] | Beginning of operation | Network Speed [Gbit/s] |
|---|---|---|---|---|---|---|---|---|---|---|
| copy18-1.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | 2 x 40 |
| copy18-2.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Skylake | Platinum 8160 | 2.1 | 2 | 24 | 48 | 384 | November 2018 | 2 x 40 |
| copy.hpc.itc.rwth-aachen.de | NEC HPC128Rg-2 | Broadwell EP | E5-2695v4 | 2.1 | 2 | 16 | 32 | 256 | November 2016 | 2 x 10 |

The data transfer nodes differ from the other dialog systems in the following aspects:

  • You cannot start remote desktop sessions with FastX.
  • Dedicated interactive programs such as Firefox or Thunderbird are not installed.
  • The TSM software for archiving files is installed exclusively on the data transfer nodes.
  • You cannot submit or manage batch jobs.
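
As an illustration of how such a transfer might look, here is a minimal Python sketch that wraps a standard rsync-over-ssh call to one of the copy nodes listed above. It assumes rsync and ssh are installed locally and that you have a valid cluster account; the function name, user name, and paths in the usage comment are hypothetical.

```python
# Sketch: copy a local directory to the cluster through a data transfer node
# using rsync over ssh. User name and paths below are hypothetical.
import subprocess

def copy_to_cluster(local_path, user, remote_dir,
                    host="copy18-1.hpc.itc.rwth-aachen.de"):
    """Transfer local_path to remote_dir on the cluster via a copy node."""
    cmd = [
        "rsync", "-avz", "--progress",   # archive mode, compression, progress output
        "-e", "ssh",                     # tunnel the transfer through ssh
        local_path,
        f"{user}@{host}:{remote_dir}",
    ]
    subprocess.run(cmd, check=True)

# Example usage (hypothetical account and paths):
# copy_to_cluster("results/", "ab123456", "/hpcwork/ab123456/results/")
```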

MPI backend systems


For testing the proper start-up of MPI jobs, we provide a small dedicated partition within the cluster. The Xeon Phi (KNL) nodes are not used for MPI tests by default, but they are also listed here because they are not integrated into SLURM.

| Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | #Nodes | #Sockets per Node | #Cores per Socket | #Cores per Node | Memory per Node [GB] | Sum Sockets | Sum Cores | Sum Memory [GB] | Beginning of operation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 4 | 2 | 24 | 48 | 192 | 8 | 192 | 768 | February 2019 |
| NEC | Xeon Phi (KNL) | 7210 | 1.3 | 15 | 1 | 64 | 64 | 196 | 15 | 960 | 2940 | November 2017 |

Note that Xeon Phi (KNL) nodes have HyperThreading ON (4x).
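
A minimal start-up test could look like the following Python sketch, assuming an MPI runtime and the mpi4py module are available in your environment (this is our own example, not an official test case): every rank prints its rank, the total number of ranks, and the host it runs on.

```python
# Sketch of a minimal MPI start-up test.
# Assumes mpi4py and an MPI runtime are available.
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
print(f"rank {rank} of {size} on {socket.gethostname()}")

# Typical launch (the exact launcher depends on the installed MPI):
#   mpiexec -np 4 python mpi_hello.py
```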

Batch systems (SLURM)


The first five columns give SLURM information; the following columns describe the node hardware and the cluster-wide totals.

| Node Type | Partition | Features | Max recomm. memory per node [MB] | Default memory per task [MB] | Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | #Nodes | #Sockets per Node | #Cores per Socket | #Cores per Node | Memory per Node [GB] | Sum Sockets | Sum Cores | Sum Memory [GB] | Beginning of operation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ncm | c18m | skylake, skx8160, hpcwork | 187,200 | 3,900 | Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 1,032 | 2 (*) | 24 (*) | 48 | 192 | 2,064 | 49,536 | 198,144 | December 2018 |
| nrm | c18m | skylake, skx8160, hpcwork | 187,200 | 3,900 | Intel HNS2600BPB | Skylake | Platinum 8160 | 2.1 | 211 | 2 (*) | 24 (*) | 48 | 192 | 422 | 10,128 | 40,512 | February 2019 |
| ncg | c18g | skylake, skx8160, hpcwork | 187,200 | 3,900 | Supermicro 1029GQ-TVRT-01, 2 x Tesla V100 | Skylake | Platinum 8160 | 2.1 | 48 | 2 (*) | 24 (*) | 48 | 192 | 96 | 2,304 | 9,216 | March 2019 |
| nrg | c18g | skylake, skx8160, hpcwork | 187,200 | 3,900 | Supermicro 1029GQ-TVRT-01, 2 x Tesla V100 | Skylake | Platinum 8160 | 2.1 | 6 | 2 (*) | 24 (*) | 48 | 192 | 12 | 288 | 1,152 | March 2019 |
| lnm | c16m | broadwell, bwx2650, hpcwork | 124,800 | 5,200 | NEC HPC1812Rg-2 | Broadwell EP | E5-2650v4 | 2.2 | 600 | 2 | 12 | 24 | 128 | 1,200 | 14,400 | 76,800 | November 2016 |
| lnn | c16m | broadwell, bwx2650, hpcwork, nvme | 124,800 | 5,200 | NEC HPC1812Rg-2, 1 x Intel DC P3600 NVMe SSD (2 TB) | Broadwell EP | E5-2650v4 | 2.2 | 8 | 2 | 12 | 24 | 128 | 16 | 192 | 1,024 | November 2016 |
| lng | c16g | broadwell, bwx2650, hpcwork | 124,800 | 5,200 | NEC HPC1812Rg-2, 2 x Tesla P100 | Broadwell EP | E5-2650v4 | 2.2 | 9 | 2 | 12 | 24 | 128 | 18 | 216 | 1,152 | November 2016 |
| lns | c16s | broadwell, bwx8860, hpcwork, nvme | 1,020,000 | 7,100 | NEC HPC1812RG-7, 2 x Intel DC P3600 NVMe SSD (2 TB) | Broadwell EX | E7-8860v4 | 2.2 | 6 | 8 | 18 | 144 | 1,024 | 48 | 864 | 6,144 | November 2016 |
| lns | c16s | broadwell, bwx8860, hpcwork, nvme | 1,020,000 | 7,100 | NEC HPC1812RG-7, 1 x Tesla P100, 2 x Intel DC P3600 NVMe SSD (2 TB) | Broadwell EX | E7-8860v4 | 2.2 | 2 | 8 | 18 | 144 | 1,024 | 16 | 288 | 2,048 | November 2016 |

(*) Sub-NUMA Clustering is enabled on the Skylake CPUs, so each node exposes 4 NUMA domains with 12 cores each. SLURM interprets these NUMA domains as sockets; from SLURM's point of view, these nodes therefore have 4 sockets and 12 cores per socket.
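
To see this effect yourself, the following Python sketch (our own illustration, not an official tool) queries SLURM's view of a node's topology with 'scontrol show node' and prints the socket- and core-related lines. It assumes scontrol is available on the system where it runs; the node name in the usage comment is hypothetical.

```python
# Sketch: ask SLURM how it sees a node's topology and show the socket/core fields.
# Assumes scontrol is in PATH; the node name in the example is hypothetical.
import subprocess

def slurm_node_topology(node_name):
    """Return the lines of 'scontrol show node <name>' that mention sockets or cores."""
    result = subprocess.run(["scontrol", "show", "node", node_name],
                            capture_output=True, text=True, check=True)
    return [line.strip() for line in result.stdout.splitlines()
            if "Socket" in line or "Core" in line]

# Example usage (replace with a real node name):
# for line in slurm_node_topology("ncm0001"):
#     print(line)
```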


Further systems are integrated into the RWTH Compute Cluster through the Integrative hosting service.
