High Performance Computing on Sun
Today and Tomorrow
Colloquium
October 5 - 6, 2004


Time:

Tuesday, October 5, 2004, 13:00 - Wednesday, October 6, 2004, 13:00

Location: Lecture Room of the
Center for Computing and Communication
RWTH Aachen University
Seffenter Weg 23
52074 Aachen
Germany
Social Event:

Dinner in the restaurant "Kazan"
Annastraße 26, opposite the cathedral (main entrance)
Tuesday, October 5, 2004, 19:30

Introduction

For over three years now, the Center for Computing and Communication of RWTH Aachen University has been operating a large Sun Fire SMP cluster. These days the machinery is being considerably upgraded from 1.2 teraflops to 4.6 teraflops total peak performance. Whereas the equipment so far consisted quite homogeneously of UltraSPARC III Cu processors, in the future a cluster of 64 smaller nodes, with 4 Opteron processors each, will operate beside 28 large SMP servers equipped with the new UltraSPARC IV processors.

Nowadays everybody is installing large clusters of small commodity machines, so are we? Well, not quite. We think that nodes should offer at least some reasonable amount of internal scalability. Yes, MPI is the predominant paradigm for parallelization. It is easy to understand, it is very mature, it is very portable, it is widespread. But is it convenient? Is parallelization using message passing an easy task? Is there a chance for automatic parallelization with MPI? Can MPI programs be easily verified? In most cases the answer to these questions is no.

How about the future of processor architecture? Almost all manufacturers are now producing dual-core processors or have them on their roadmap. UltraSPARC IV is the first incarnation of the SPARC architecture with two cores per chip. The future will be multi-threaded in one way or another. Multiple threads or processes within a chip will have access to a common memory. Multi-threading comes in a rather new flavor which is comfortable to program, which is portable, which can be verified with suitable tools, and which can be combined with automatic parallelization: OpenMP.

The future of HPC will be parallel. We are just increasing the number of processors of our Sun machinery from 672 to 1792. The average number of processors employed by production batch jobs has increased from about 8 to about 16 over the last three years. For many engineering and scientific applications this has not been trivial. Unfortunately, in our environment we do not see many codes which are "embarrassingly parallel". In the future, parallelization will have to occur on multiple levels. Hybrid parallelization using MPI plus OpenMP and autoparallelization, as well as nested parallelization with MPI and with OpenMP, will be needed to keep even more processors busy and to cut down the turnaround time of large simulation jobs.

This is why we think we have to open up both directions: horizontal scalability for message passing and vertical scalability for shared-memory programming. All our servers (will) offer both perspectives: 64 nodes with 4 CPUs, 8 with 24, 16 with 48, and 4 with 144.

Sun Microsystems has been actively pushing the envelope towards scalable shared-memory systems during the last decade, and future projects target multi-threading within the chip. The new keywords are chip multi-threading (CMT) and high productivity computing (HPC).

This colloquium aims at both: HPC (high performance computing) today and HPC (high productivity computing) tomorrow.

We want to exchange experiences about the current usage of our HPC equipment and to shed some light on future deployment of this kind of computing machinery from the users' point of view as well as the manufacturers' and the providers' point of view.

The Agenda


Day 1 - Tuesday, October 5, 2004, 13:00 - 17:30

* Introduction (video(rm))
Dieter an Mey, Center for Computing and Communication, RWTH Aachen University

* Infrastructure for Simulation Science (slides(pdf), video(rm))
Christian Bischof, Center for Computing and Communication and Institute for Scientific Computing, RWTH Aachen University

* Productivity and Collaboration in HPTC: A Vision from Sun Microsystems (slides(pdf), video(rm))
Steve Perrenod, Group Mgr., Science and Engineering, Education and Research sales, Sun Microsystems

* Throughput Computing - Why and How (slides(pdf), video(rm))
Partha Tirumalai and Ruud van der Pas, Scalable Systems Group, Sun Microsystems

* The AMD64 Technology (slides(pdf), video(rm))
Ulrich Knechtel, Advanced Micro Devices (AMD)

* Perspectives of Quantum Computing (slides(ps), video(rm))
Thomas Lippert, John von Neumann Institute for Computing (NIC) and Central Institute for Applied Mathematics (ZAM), Research Center Jülich

Social Dinner, 19:30 in the restaurant "Kazan", Annastraße 26, between the Aachen Cathedral and the Anna-Church


Day 2 - Wednesday, October 6, 2004, 8:45 - 13:00

* Analysis of Flows via CFD, LES and CAA (slides(pdf), video(rm))
Wolfgang Schröder, Chair of Fluid Mechanics and Institute of Aerodynamics, RWTH Aachen University

* Numerical Flow Simulations for a Complex Reentry Space Vehicle (slides(pdf), video(rm))
Birgit Reinartz, Michael Hesse, Mechanics Department, RWTH Aachen University

* Steady and Unsteady Numerical Simulations of the Flow in Multistage Turbines (slides(pdf), video(rm))
Dieter Bohn, Jing Ren, Christian Tümmers, Institute of Steam and Gas Turbines, RWTH Aachen University

* From Simulation to Optimization in Computational Engineering (slides(pdf), video(rm))
Martin Bücker, Institute for Scientific Computing, RWTH Aachen University

* Virtual Reality in Computational Fluid Dynamics (slides1(pdf), slides2(pdf), video(rm))
Andreas Gerndt, Center for Computing and Communication, RWTH Aachen University

* Monte Carlo Simulation of High-Energy Particle Physics Models on a Lattice (slides(pdf), video(rm))
Federico Farchioni, Institute for Theoretical Physics, University of Münster

* Gene Mapping, Linkage Analysis and Computational Challenges (slides(pdf), video(rm))
Konstantin Strauch, Institute for Medical Biometry, Informatics, and Epidemiology, University of Bonn

* Computational Chemistry. We demand more! (slides(pdf), video(rm))
Bernhard Eck, Institute of Inorganic Chemistry, RWTH Aachen University

* Performance of the Sun Fire Cluster in daily routine - Experiences from Computational Chemistry (slides(pdf), video(rm))
Markus Hölscher, Institute of Technical and Macromolecular Chemistry, RWTH Aachen University

* Think Parallel! (slides(pdf), video(rm))
Dieter an Mey, Center for Computing and Communication, RWTH Aachen University


You can find the detailed agenda here (pdf).

The Costs

The seminar is organized in cooperation between RWTH Aachen University and Sun Microsystems. There is no fee. All other costs (e.g. travel, hotel, and meals) are at your own expense.
The social dinner on Tuesday evening is sponsored by Sun Microsystems.

Registration

Registration for the Colloquium is mandatory.
The registration deadline is October 1, 2004.

There will be a social dinner on Tuesday evening sponsored by Sun Microsystems in the restaurant "Kazan".
Please indicate in the "Remarks" field of your registration whether or not you want to take part in the dinner.

Register here for the Sun HPC Colloquium (please add remark: I want / I do not want to take part in the social dinner)

Travel Information

Please make your own hotel reservation.
You can find some housing information here. A complete list of hotels is on the web pages of the Aachen Tourist Service.
Please, download a sketch of the city (pdf, 415 KB) with some points of interest marked.

You may find a description of how to reach us by plane, train or car here.

The weather in Aachen is usually unpredictable. At that time of the year the temperatures will most likely be between 10 and 20 degrees Celsius. It might be very nice and sunny with colorful leaves, but sometimes it is also grey and rainy. So it is always a good idea to carry an umbrella. If you bring one, it might be sunny!

Downloads

Contact

Dieter an Mey
Tel.: +49 241 80 24377
Fax: +49 241 80 22504
E-mail: anmey@rz.rwth-aachen.de
