This page gives an overview of the hardware available in the Linux part of the RWTH Compute Cluster.

Dialog systems


The dialog systems are provided for interactive logins. They should be used for programming, debugging, and for the preparation and post-processing of batch jobs; they are not intended for production runs. Processes that have consumed more than 20 minutes of CPU time may be killed without warning. An example SSH login is sketched after the table below.

Note: For maintenance reasons, all dialog systems are rebooted every Monday morning at 6 am.

 

 

Hostname | Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | #Sockets | #Cores | Memory [GB] | Beginning of operation | Remarks
login.hpc.itc.rwth-aachen.de | NEC HPC128Rg-2 | Intel Broadwell EP | E5-2695v4 | 2.1 | 2 | 36 | 256 | November 2016 | Main dialog system for CLAIX2016
login2.hpc.itc.rwth-aachen.de | NEC HPC128Rg-2 | Intel Broadwell EP | E5-2695v4 | 2.1 | 2 | 36 | 256 | November 2016 |
login18-1.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 | Main dialog system for CLAIX2018
login18-2.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 |
login18-3.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 |
login18-4.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 |
login18-x-1.hpc.itc.rwth-aachen.de | supermicro-1029GP-TR | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 | Dedicated to remote desktop sessions
login18-x-2.hpc.itc.rwth-aachen.de | supermicro-1029GP-TR | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 | Dedicated to remote desktop sessions
login-g.hpc.itc.rwth-aachen.de | NEC-GPS12G3Rg-1, 2 x Tesla P100 | Intel Broadwell EP | E5-2650v4 | 2.2 | 2 | 24 | 128 | November 2016 | GPU system
login18-g-1.hpc.itc.rwth-aachen.de | supermicro-1029GP-TR, 2 x Tesla V100 | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 | GPU system
login18-g-2.hpc.itc.rwth-aachen.de | supermicro-1029GP-TR, 2 x Tesla V100 | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 | GPU system
login-t.hpc.itc.rwth-aachen.de | NEC-HPC1812Rg-2 | Intel Broadwell EP | E5-2650v4 | 2.2 | 2 | 24 | 128 | November 2016 | Reserved for tuning purposes
login18-t.hpc.itc.rwth-aachen.de | Intel HNS2600BPB | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 192 | April 2019 | Reserved for tuning purposes
login-knl.hpc.itc.rwth-aachen.de | NEC | Intel Xeon Phi | 7210 | 1.3 | 1 | 64 | 196 | November 2017 | Intel KNL system
lect.hpc.itc.rwth-aachen.de | Intel HNS2600BPB | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 192 | February 2019 | Dedicated to lectures (registration required)
lect2.hpc.itc.rwth-aachen.de | Intel HNS2600BPB | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 192 | February 2019 | Dedicated to lectures (registration required)
linuxnvc01.rz.rwth-aachen.de | Bullx R421-E3, 2 x NVIDIA Tesla K20Xm | Intel Sandy Bridge EP | E5-2650 | 2.0 | 2 | 16 | 64 | June 2013 | GPU cluster
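
For example, an interactive session on one of these dialog systems can be opened with a plain SSH login. This is only a minimal sketch: the user ID ab123456 is a placeholder, and any of the hostnames listed above can be substituted.

    # Log in to the main CLAIX2018 dialog system (ab123456 is a placeholder user ID)
    ssh ab123456@login18-1.hpc.itc.rwth-aachen.de

    # Graphical work is intended for the dedicated remote desktop nodes;
    # -X requests X11 forwarding for simple GUI applications
    ssh -X ab123456@login18-x-1.hpc.itc.rwth-aachen.de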

 

Data transfer nodes


The following systems are dedicated to (remote and local) file transfers; a transfer example is sketched after the table:

Hostname | Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | #Sockets | #Cores | Memory [GB] | Beginning of operation | Network Speed [Gbit/s]
copy18-1.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 | 2 x 40
copy18-2.hpc.itc.rwth-aachen.de | intel-R2208WFTZS | Intel Skylake | Platinum 8160 | 2.1 | 2 | 48 | 384 | November 2018 | 2 x 40
copy.hpc.itc.rwth-aachen.de | NEC HPC128Rg-2 | Intel Broadwell EP | E5-2695v4 | 2.1 | 2 | 32 | 256 | November 2016 | 2 x 10
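
For larger transfers, these copy nodes should be used instead of the dialog systems. The following is a minimal sketch; the user ID, local directory, and target path are placeholders.

    # Push a local result directory to the cluster via a dedicated copy node
    rsync -avP ./results/ ab123456@copy18-1.hpc.itc.rwth-aachen.de:~/results/

    # A single file can also be copied with scp
    scp input.dat ab123456@copy18-2.hpc.itc.rwth-aachen.de:~/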

MPI backend systems


For testing the proper start-up of MPI jobs, a small dedicated partition is provided within the cluster. A description of how to use it can be found on the page "Testing of MPI Jobs"; a minimal job-script sketch is also given below the following table.

Hardware Node Type | #Nodes | #Sockets per Node | CPU Codename | CPU Model | Clock Speed [GHz] | #Cores per Chip | #Cores per Node | Memory per Node [GB] | Sum Sockets | Sum Cores | Sum Memory [GB] | Beginning of operation
Intel HNS2600BPB | 4 | 2 | Intel Skylake | Platinum 8160 | 2.1 | 24 | 48 | 192 | 8 | 192 | 768 | February 2019
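
The sketch below illustrates such a start-up test as a SLURM job script. It is a hedged example, not the official procedure: the exact partition or option needed to reach these test nodes, as well as any account settings, must be taken from the "Testing of MPI Jobs" page.

    #!/usr/bin/env bash
    # Minimal MPI start-up test (sketch only; how to request the dedicated
    # test nodes is documented on the "Testing of MPI Jobs" page)
    #SBATCH --job-name=mpi-startup-test
    #SBATCH --nodes=2                 # two of the four HNS2600BPB test nodes
    #SBATCH --ntasks-per-node=48      # 48 cores per node, see the table above
    #SBATCH --time=00:05:00
    #SBATCH --output=mpi-test.%J.log

    # Start one trivial task per rank just to verify that job start-up works
    srun hostname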

 

Batch systems (SLURM)


Node Type | Partition | Hardware Node Type | CPU Codename | CPU Model | Clock Speed [GHz] | Features | #Nodes | #Sockets per Node | #Cores per Socket | Memory per Node [GB] | Max. recomm. memory per node [MB] | Default memory per task [MB] | Sum Sockets | Sum Cores | Sum Memory [GB] | Beginning of operation
ncm | c18m | Intel HNS2600BPB | Intel Skylake | Platinum 8160 | 2.1 | skylake, skx8160, hpcwork | 1,032 | 2 (*) | 24 (*) | 192 | 187,200 | 3,900 | 2,064 | 49,536 | 198,144 | December 2018
nrm | c18m | Intel HNS2600BPB | Intel Skylake | Platinum 8160 | 2.1 | skylake, skx8160, hpcwork | 211 | 2 (*) | 24 (*) | 192 | 187,200 | 3,900 | 422 | 10,128 | 40,512 | February 2019
ncg | c18g | Supermicro 1029GQ-TVRT-01, 2 x Tesla V100 | Intel Skylake | Platinum 8160 | 2.1 | skylake, skx8160, hpcwork | 48 | 2 (*) | 24 (*) | 192 | 187,200 | 3,900 | 96 | 2,304 | 9,216 | March 2019
nrg | c18g | Supermicro 1029GQ-TVRT-01, 2 x Tesla V100 | Intel Skylake | Platinum 8160 | 2.1 | skylake, skx8160, hpcwork | 6 | 2 (*) | 24 (*) | 192 | 187,200 | 3,900 | 12 | 288 | 1,152 | March 2019
lnm | c16m | NEC HPC1812Rg-2 | Intel Broadwell EP | E5-2650v4 | 2.2 | broadwell, bwx2650, hpcwork | 600 | 2 | 12 | 128 | 124,800 | 5,200 | 1,200 | 14,400 | 76,800 | November 2016
lnn | c16m | NEC HPC1812Rg-2, 1 x Intel DC P3600 NVMe SSD (2 TB) | Intel Broadwell EP | E5-2650v4 | 2.2 | broadwell, bwx2650, hpcwork, nvme | 8 | 2 | 12 | 128 | 124,800 | 5,200 | 16 | 192 | 1,024 | November 2016
lng | c16g | NEC HPC1812Rg-2, 2 x Tesla P100 | Intel Broadwell EP | E5-2650v4 | 2.2 | broadwell, bwx2650, hpcwork | 9 | 2 | 12 | 128 | 124,800 | 5,200 | 18 | 216 | 1,152 | November 2016
lns | c16s | NEC HPC1812RG-7 | Intel Broadwell EX | E5-8860v4 | 2.2 | broadwell, bwx8860, hpcwork, nvme | 6 | 8 | 18 | 1,024 | 1,020,000 | 7,100 | 48 | 864 | 6,144 | November 2016
lns | c16s | NEC HPC1812RG-7, 1 x Tesla P100 | Intel Broadwell EX | E5-8860v4 | 2.2 | broadwell, bwx8860, hpcwork, nvme | 2 | 8 | 18 | 1,024 | 1,020,000 | 7,100 | 16 | 288 | 2,048 | November 2016
lnk | c16k | NEC | Intel Xeon Phi (KNL) | | 1.3 | | 15 | 1 | 64 | 196 | 187,200 | | | | | November 2017 (not yet in SLURM)

(*) Sub-NUMA Clustering is enabled on the Skylake CPUs, so each node exposes 4 NUMA nodes with 12 cores each. SLURM interprets these NUMA nodes as sockets; these nodes therefore appear as having 4 sockets with 12 cores per socket.
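
As a hedged example of how these figures translate into a job request, the sketch below asks for one full c18m node and sets the memory per task to the 3,900 MB default listed above. The job name, runtime, and executable are placeholders, and naming the partition explicitly is shown only for illustration; it may not be required in practice.

    #!/usr/bin/env bash
    # Example SLURM job for one CLAIX2018 MPI node (sketch with placeholder values)
    #SBATCH --job-name=example-c18m
    #SBATCH --partition=c18m          # partition name from the table above (illustrative)
    #SBATCH --ntasks=48               # one full node: 2 sockets x 24 cores
    #SBATCH --mem-per-cpu=3900M       # default memory per task from the table above
    #SBATCH --time=01:00:00
    #SBATCH --output=example.%J.log

    srun ./my_mpi_program             # placeholder executable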

 

Further systems are integrated into the RWTH Compute Cluster through the Integrative Hosting service.

 
