April 04, 2022 - by Simone Ulmer

In the first phase of the SPH-EXA project (Optimizing Smoothed Particle Hydrodynamics for Exascale Computing), a mini-app was developed that can be used to simulate highly complex fluid dynamic processes in astrophysics much more efficiently on future high-performance computer architectures. In SPH-EXA2, the team is now developing a new production code from the mini-app that should make it possible to simulate processes such as the formation of planets or the death of stars with a trillion particles (see also the research portrait of Florina Ciorba).

Sebastian and Jean-Guillaume, how did you get involved in the PASC project? Is astrophysics your field of expertise? 

Sebastian Keller: I got involved because I was a mentor for the SPH-EXA team at the virtually hosted Hackathon 2020, so I only joined towards the end of the first phase of the project. By the beginning of the second phase last summer, however, I was already quite active. Scientifically, this project is not exactly the field I was trained in: I have a master's degree in physics and a PhD in theoretical chemistry from ETH Zurich. But I have worked with computer simulations from the beginning, be it for term papers, my bachelor’s in physics, my master’s, or my PhD. I always wanted to work in this field because it is an extremely exciting interface between science and technology. That's why I'm at CSCS.

Jean-Guillaume Piccinali: I started to work at CSCS in the Scientific Support group in 2008. After a couple of years, I asked my managers how I could extend my expertise and help our scientists in a different way. At that time, the PASC initiative was funding innovative software development projects, so CSCS gave me the opportunity to get involved in the SPH-EXA project.

How much did you have to get involved in astrophysics for the project? 

J-GP: My training was in mathematics and scientific computing, so astrophysics was not exactly my field of expertise. But PASC promotes strong collaboration between domain and computational scientists. I am convinced that interdisciplinary work is the way to prepare Swiss teams for exascale computing. 

SK: Originally, I worked on quantum physics. But at the end of the day, fluid dynamics is also a differential equation that you solve, so the techniques are not that different. I had to learn the ropes, but that is something you learn during a PhD: how to get an overview of a new field and of what best helps you reach the goal.

In the first project, the goal was to develop a mini-app. Now you are developing a completely new code. 

SK: Yes, because the ambitious goal of simulating a trillion particles cannot be achieved with the existing codes for simulating such astrophysical processes. The mini-app is now being developed into a computer program that can simulate all kinds of things. For example, SPH-EXA is supposed to simulate what telescopes observe. The telescopes of the Square Kilometre Array Observatory, which is currently under construction, will collect so much data that it will be difficult to analyse. This is where the simulations will help draw conclusions about what is being observed.

How are the tasks in the project distributed? 

SK: SPH stands for smoothed particle hydrodynamics. In order to use this method for astrophysics, two additional things are needed: gravity between the particles and physical effects such as nuclear reactions, chemical reactions, or radiative cooling. Astrophysicist and co-PI Lucio Mayer from the University of Zurich and his team are concentrating on adding this physics, alongside the fluid dynamics, to the code. This physics is needed to simulate galaxies or an entire universe. In Basel, astrophysicist and co-PI Rubén Cabezón is working to ensure that the hydrodynamic part is as scientifically accurate as possible. Lead PI Florina Ciorba and her group are working on performance aspects such as asynchronous parallelism and load balancing; she basically ensures that the code can efficiently exploit future computer architectures.
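To give a flavour of what the hydrodynamic part computes, here is a minimal, purely illustrative C++ sketch (not taken from the SPH-EXA code) of the SPH density estimate: each particle's density is a kernel-weighted sum over nearby particle masses, here with a textbook cubic-spline kernel and a deliberately naive all-pairs loop.

```cpp
// Illustrative sketch of the core SPH density estimate:
//   rho_i = sum_j m_j * W(|r_i - r_j|, h)
// with a standard cubic-spline kernel. A production code would replace the
// brute-force O(N^2) neighbour loop with a tree- or cell-based search.
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle { double x, y, z, mass, rho; };

// Cubic-spline smoothing kernel in 3D, normalised with 1/(pi*h^3).
double cubicSpline(double r, double h)
{
    const double pi    = 3.14159265358979323846;
    const double q     = r / h;
    const double sigma = 1.0 / (pi * h * h * h);
    if (q < 1.0) return sigma * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
    if (q < 2.0) return sigma * 0.25 * (2.0 - q) * (2.0 - q) * (2.0 - q);
    return 0.0;
}

// Sum kernel-weighted masses of all particles to get each particle's density.
void computeDensity(std::vector<Particle>& particles, double h)
{
    for (auto& pi : particles)
    {
        pi.rho = 0.0;
        for (const auto& pj : particles) // brute force, for clarity only
        {
            double dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
            double r  = std::sqrt(dx * dx + dy * dy + dz * dz);
            pi.rho += pj.mass * cubicSpline(r, h);
        }
    }
}

int main()
{
    std::vector<Particle> particles = {{0.0, 0.0, 0.0, 1.0, 0.0},
                                       {0.5, 0.0, 0.0, 1.0, 0.0},
                                       {0.0, 0.5, 0.0, 1.0, 0.0}};
    computeDensity(particles, 1.0);
    for (const auto& p : particles) std::printf("rho = %f\n", p.rho);
    return 0;
}
```

Pressure forces, gravity, and the additional physics mentioned above are built on top of this basic building block.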

What is your task? 

SK: We are involved in the project in two areas. On the one hand, I have taken on a coordination role; on the other hand, I am involved in the development of the code. I work on the parts of the code that are performance-sensitive, meaning I ensure that the code runs efficiently on a supercomputer. To do this, I rewrite and evolve the scientists' code so that it runs faster. Every particle interacts with every other particle, so for N particles there are N² pairings. So that we don't have to calculate all N² pairings, we need special algorithms. Such algorithms already exist, but to implement them in the code, you need to know the hardware of the computer. I also implemented gravity and the communication between compute nodes in the code.
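As an illustration of the kind of special algorithm meant here (a generic textbook approach, not the actual SPH-EXA implementation), a cell-list search bins particles into boxes the size of the interaction radius, so each particle only tests candidates from the 27 neighbouring boxes instead of all N-1 other particles:

```cpp
// Simplified cell-list neighbour search: instead of testing all N^2 particle
// pairs, particles are binned into a uniform grid of cells with edge length
// equal to the interaction radius, so each particle only checks candidates
// from the 27 surrounding cells. Tree-based codes refine this idea further.
#include <cmath>
#include <cstdio>
#include <unordered_map>
#include <vector>

struct Vec3 { double x, y, z; };

// Hash a 3D cell index into a single key (collisions are harmless here,
// because every candidate still passes an exact distance check).
long long cellKey(int ix, int iy, int iz)
{
    return (static_cast<long long>(ix) * 73856093) ^
           (static_cast<long long>(iy) * 19349663) ^
           (static_cast<long long>(iz) * 83492791);
}

std::vector<int> countNeighbours(const std::vector<Vec3>& pos, double radius)
{
    // Bin every particle into its grid cell.
    std::unordered_map<long long, std::vector<int>> cells;
    for (int i = 0; i < (int)pos.size(); ++i)
    {
        int ix = (int)std::floor(pos[i].x / radius);
        int iy = (int)std::floor(pos[i].y / radius);
        int iz = (int)std::floor(pos[i].z / radius);
        cells[cellKey(ix, iy, iz)].push_back(i);
    }

    // For each particle, only look at the 27 cells around it.
    std::vector<int> counts(pos.size(), 0);
    double r2 = radius * radius;
    for (int i = 0; i < (int)pos.size(); ++i)
    {
        int ix = (int)std::floor(pos[i].x / radius);
        int iy = (int)std::floor(pos[i].y / radius);
        int iz = (int)std::floor(pos[i].z / radius);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dz = -1; dz <= 1; ++dz)
                {
                    auto it = cells.find(cellKey(ix + dx, iy + dy, iz + dz));
                    if (it == cells.end()) continue;
                    for (int j : it->second)
                    {
                        if (j == i) continue;
                        double ddx = pos[i].x - pos[j].x;
                        double ddy = pos[i].y - pos[j].y;
                        double ddz = pos[i].z - pos[j].z;
                        if (ddx * ddx + ddy * ddy + ddz * ddz < r2) ++counts[i];
                    }
                }
    }
    return counts;
}

int main()
{
    std::vector<Vec3> pos = {{0.0, 0.0, 0.0}, {0.4, 0.0, 0.0}, {2.0, 0.0, 0.0}};
    auto counts = countNeighbours(pos, 1.0);
    for (size_t i = 0; i < counts.size(); ++i)
        std::printf("particle %zu has %d neighbours\n", i, counts[i]);
    return 0;
}
```

With roughly uniform particle distributions this brings the cost down from O(N²) to about O(N); tree-based methods generalise the same idea to clustered distributions and to long-range gravity.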

J-GP: Extreme-scale computing is already a reality across many areas of science, and there is relatively little time to learn the many different tools and techniques in that field. I am applying and promoting good software engineering practices, such as regression testing and continuous integration.
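As a sketch of what such a regression test can look like in practice (the energy kernel and reference value here are invented purely for illustration), a fixed input is run through the code and the result is compared against a stored reference within a tight tolerance:

```cpp
// Minimal illustration of a tolerance-based regression test: a computation
// runs on a fixed input and its result is compared against a stored
// reference value, so that refactoring or porting the code cannot silently
// change its results.
#include <cmath>
#include <cstdio>
#include <cstdlib>

// Stand-in for a kernel under test (hypothetical, for illustration only).
double totalKineticEnergy()
{
    double v[] = {0.5, -1.25, 2.0};
    double e = 0.0;
    for (double vi : v) e += 0.5 * vi * vi;
    return e;
}

int main()
{
    const double reference = 2.90625; // recorded from a trusted earlier run
    const double tolerance = 1e-12;   // allow only round-off level deviation

    double value = totalKineticEnergy();
    if (std::fabs(value - reference) > tolerance)
    {
        std::fprintf(stderr, "REGRESSION: got %.17g, expected %.17g\n", value, reference);
        return EXIT_FAILURE;
    }
    std::printf("regression test passed\n");
    return EXIT_SUCCESS;
}
```

In a continuous-integration setup, tests like this run automatically on every proposed code change, so a port to new hardware or a performance optimisation cannot silently alter the physics results.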

What are you currently doing? 

SK: The most important parts, such as the communication framework and the GPU-accelerated gravity and hydrodynamics solvers, are already done. Individually, they run at incredible speeds, which means that the interfaces between these components become the weak link. We’re still working very hard on ensuring that these interfaces require as little communication as possible, so that we can sustain the high compute speeds that the solvers and the hardware are capable of. We have lots of compute power, but sending a particle from A to B remains very expensive in comparison. Another area we need to focus on is data input and output. Basically, we haven't yet had the time to add the fine-grained controls that a user would expect for starting and evaluating a simulation. It's all quite bare-bones at the moment. Therefore, we're working on building up the infrastructure needed to put a trillion particles in place to start a simulation and to evaluate its output.
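To make the cost of moving particles concrete, here is a minimal MPI sketch (not SPH-EXA's actual communication framework): each rank exchanges a small buffer of boundary-particle coordinates with a neighbouring rank, and the art lies in keeping such exchanges as small and as infrequent as possible.

```cpp
// Minimal MPI sketch (not the project's actual communication framework):
// each rank packs a small buffer of "boundary particle" coordinates and
// exchanges it with its right-hand neighbour in a ring. Keeping such
// exchanges small and infrequent is what keeps the fast solvers fed.
#include <cstdio>
#include <vector>
#include <mpi.h>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int numBoundary = 4; // pretend these particles sit near a domain edge
    std::vector<double> sendBuf(3 * numBoundary, static_cast<double>(rank));
    std::vector<double> recvBuf(3 * numBoundary, 0.0);

    int right = (rank + 1) % nranks;          // neighbour we send to
    int left  = (rank - 1 + nranks) % nranks; // neighbour we receive from

    // A combined send/receive avoids deadlocks and lets MPI overlap transfers.
    MPI_Sendrecv(sendBuf.data(), (int)sendBuf.size(), MPI_DOUBLE, right, 0,
                 recvBuf.data(), (int)recvBuf.size(), MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    std::printf("rank %d received %zu values from rank %d\n",
                rank, recvBuf.size(), left);

    MPI_Finalize();
    return 0;
}
```

Such a sketch would be launched with several ranks, for example via `mpirun -n 4` (binary name and launch command are illustrative).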

J-GP: While Sebastian is doing the heavy lifting, I am trying to help where I can. For example, I am using the code to benchmark future systems. This can help us in making some technical decisions that require expert knowledge about the target architectures and how to use the resources efficiently.

How successful have you been so far? 

SK: The first implementation was able to calculate gravity for about 100,000 particles per second. In the meantime, the calculations run on the graphics cards, and a single one of them manages over 30 million particles per second; “Piz Daint” has thousands of these cards. SPH-EXA has reached the point where the code also runs on the new AMD graphics cards that will be used in LUMI, one of the first European pre-exascale systems. So far, we have significantly improved what is possible, but we haven't yet proven that we can simulate a trillion particles with gravity. It does look like we can do it, though.

What is the most exciting aspect for you in this PASC project? 

SK: What I really like is that the project has a scientific and a technical part. Moreover, it touches different scientific fields, such as astrophysics and fluid dynamics. I can take full advantage of my scientific background by researching who has done what so far. For example, I studied a lot of literature on three-dimensional graphics and visualisation because those fields have extremely powerful algorithms. Computer games, for instance, need fast collision detection, and I took ideas from those technologies to speed up the astrophysics application on the “Piz Daint” graphics cards.
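One concrete example of a technique borrowed from 3D graphics, shown here as a generic sketch rather than the project's exact method, is sorting particles by a Morton code: the bits of the quantised x, y, and z coordinates are interleaved so that spatially close particles end up close in memory, which suits both GPU memory access and tree construction.

```cpp
// Illustrative sketch of a technique common in 3D graphics: interleaving the
// bits of quantised x/y/z coordinates yields a Morton code, and sorting
// particles by this code places spatially close particles close in memory.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Spread the lower 10 bits of v so there are two zero bits between each bit.
std::uint32_t expandBits(std::uint32_t v)
{
    v = (v * 0x00010001u) & 0xFF0000FFu;
    v = (v * 0x00000101u) & 0x0F00F00Fu;
    v = (v * 0x00000011u) & 0xC30C30C3u;
    v = (v * 0x00000005u) & 0x49249249u;
    return v;
}

// 30-bit Morton code for coordinates normalised to [0, 1).
std::uint32_t morton3D(float x, float y, float z)
{
    auto quantise = [](float c) {
        return static_cast<std::uint32_t>(std::min(std::max(c * 1024.0f, 0.0f), 1023.0f));
    };
    return (expandBits(quantise(x)) << 2) |
           (expandBits(quantise(y)) << 1) |
            expandBits(quantise(z));
}

int main()
{
    struct P { float x, y, z; };
    std::vector<P> parts = {{0.90f, 0.10f, 0.10f},
                            {0.10f, 0.10f, 0.10f},
                            {0.11f, 0.12f, 0.10f}};

    // Sort particles along the space-filling curve defined by the Morton code.
    std::sort(parts.begin(), parts.end(), [](const P& a, const P& b) {
        return morton3D(a.x, a.y, a.z) < morton3D(b.x, b.y, b.z);
    });

    for (const auto& p : parts)
        std::printf("(%.2f, %.2f, %.2f) -> %u\n", p.x, p.y, p.z, morton3D(p.x, p.y, p.z));
    return 0;
}
```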

J-GP: This project is trying to solve the mystery of our universe. I am honoured to contribute to this project, and I feel an immense joy to work together with such talented people united by a common goal.