Please note that this page describes the Piz Daint & Piz Dora system that was in operation from 2012 until November 2016. For an overview of the current Piz Daint system, please refer to the supercomputer's current page.
Named after Piz Daint, a prominent peak in Grisons that overlooks the Fuorn pass, this supercomputer is a Cray XC30 system and the flagship system of the national HPC service.
Piz Daint has a computing power of 7.8 PFlops, i.e. 7.8 quadrillion mathematical operations per second. In one day, Piz Daint can compute more than a modern laptop could in 900 years.
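As a rough sanity check of this comparison, the short sketch below works out how long a laptop would need to match one day of Piz Daint. The assumed laptop performance of about 25 GFlops is an illustrative figure, not a value taken from this page.

```python
# Rough sanity check of the laptop comparison above (a sketch, not an official figure).
# Assumption: a "modern laptop" sustains on the order of 25 GFlops.

PIZ_DAINT_FLOPS = 7.8e15   # 7.8 PFlops
LAPTOP_FLOPS = 25e9        # assumed ~25 GFlops (illustrative only)

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

ops_per_day = PIZ_DAINT_FLOPS * SECONDS_PER_DAY                 # operations Piz Daint performs in one day
laptop_years = ops_per_day / (LAPTOP_FLOPS * SECONDS_PER_YEAR)  # years a laptop would need

print(f"Piz Daint in one day: {ops_per_day:.2e} operations")
print(f"Equivalent laptop time: {laptop_years:.0f} years")      # roughly 850-900 years with these assumptions
```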
This supercomputer is a 28-cabinet Cray XC30 system with a total of 5'272 compute nodes. Each compute node is equipped with an 8-core 64-bit Intel Sandy Bridge CPU (Intel® Xeon® E5-2670), an NVIDIA® Tesla® K20X with 6 GB of GDDR5 memory, and 32 GB of host memory. The nodes are connected by Cray's proprietary "Aries" interconnect with a dragonfly network topology.
Specifications
Model | Cray XC30
Compute Nodes (one Intel® Xeon® E5-2670 and one NVIDIA® Tesla® K20X each) | 5'272
Theoretical Peak Floating-point Performance per Node | 166.4 Gigaflops (Intel® Xeon® E5-2670) + 1'311.0 Gigaflops (NVIDIA® Tesla® K20X)
Theoretical Peak Performance | 7.787 Petaflops
Memory Capacity per Node | 32 GB DDR3-1600 (host) + 6 GB GDDR5, non-ECC (GPU)
Memory Bandwidth per Node | 51.2 GB/s (DDR3) + 250.0 GB/s (GDDR5, non-ECC)
Total System Memory | 169 TB DDR3 + 32 TB GDDR5 (non-ECC)
Interconnect Configuration | Aries routing and communications ASIC with Dragonfly network topology
Peak Network Bisection Bandwidth | 33 TB/s
System Storage Capacity | 2.5 PB
Parallel File System Peak Performance | 117 GiB/s
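The headline figures in the table above can be cross-checked from the per-node values alone; the short sketch below does exactly that, using only numbers quoted in the table.

```python
# Consistency check of the specifications table (a sketch; all inputs are the table values).

nodes = 5272

cpu_peak_gflops = 166.4    # Intel Xeon E5-2670, per node
gpu_peak_gflops = 1311.0   # NVIDIA Tesla K20X, per node

node_peak_gflops = cpu_peak_gflops + gpu_peak_gflops   # 1477.4 GFlops per node
system_peak_pflops = nodes * node_peak_gflops / 1e6    # GFlops -> PFlops

host_mem_tb = nodes * 32 / 1000   # 32 GB DDR3 per node, decimal TB
gpu_mem_tb = nodes * 6 / 1000     # 6 GB GDDR5 per node, decimal TB

print(f"System peak: {system_peak_pflops:.3f} PFlops")  # ~7.789; the table quotes 7.787 (per-node rounding)
print(f"Host memory: {host_mem_tb:.0f} TB DDR3")        # ~169 TB
print(f"GPU memory:  {gpu_mem_tb:.0f} TB GDDR5")        # ~32 TB
```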
Piz Dora
The Piz Daint extension, Piz Dora, is a Cray XC40 with 1'256 compute nodes, each with two 18-core Intel Broadwell CPUs (Intel® Xeon® E5-2695 v4). Piz Dora has a total of 45'216 cores (36 cores per node), or 90'432 virtual cores (72 virtual cores per node) when hyperthreading (HT) is enabled. Of these nodes, 1'192 feature 64 GB of RAM each, while the remaining 64 compute nodes (fat nodes) have 128 GB of RAM each and are accessible through the SLURM partition bigmem.
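The node and core counts quoted above tally up directly; the short sketch below just reproduces that arithmetic from the figures in the text.

```python
# Quick tally of the Piz Dora figures above (a sketch using only numbers from the text).

nodes_64gb = 1192        # standard nodes with 64 GB of RAM
nodes_128gb = 64         # fat nodes with 128 GB of RAM (SLURM partition bigmem)
cores_per_node = 2 * 18  # two 18-core Broadwell CPUs per node

total_nodes = nodes_64gb + nodes_128gb        # 1256
total_cores = total_nodes * cores_per_node    # 45'216
virtual_cores = 2 * total_cores               # 90'432 with hyperthreading enabled

print(total_nodes, total_cores, virtual_cores)  # 1256 45216 90432
```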