
Co-Design Centers Help Make Exascale Computing a Reality

By Joan Koka // January 31, 2017

The next generation of supercomputers will help researchers tackle increasingly complex problems by modeling large-scale systems, such as nuclear reactors or the global climate, and by simulating complex phenomena, such as the chemistry of molecular interactions. To be successful, these systems must carry out vast numbers of calculations at extreme speeds, reliably store enormous amounts of information and quickly deliver that information with minimal errors.

To create such a system, computer designers must first find ways to overcome limitations in existing high-performance computing systems and then develop, design and optimize new software and hardware technologies to operate at exascale. ‘Exascale’ refers to high-performance computing systems capable of at least a billion billion calculations per second, roughly 50 times faster than the nation’s most powerful supercomputers in use today. Computational scientists aim to use these systems to generate new insights and accelerate discoveries in materials science, precision medicine, national security and numerous other fields.

As collaborators in four co-design centers created by the U.S. Department of Energy’s (DOE) Exascale Computing Project (ECP), researchers at the DOE’s Argonne National Laboratory are helping to solve some of these complex challenges and pave the way for the creation of exascale supercomputers.

The term ‘co-design’ describes the integrated development and evolution of hardware technologies, computational applications and associated software. In pursuit of ECP’s mission to help people solve realistic application problems through exascale computing, each co-design center targets different features and challenges relating to exascale computing.

Co-design Center for Online Data Analysis and Reduction at the Exascale (CODAR)

Ian Foster, a University of Chicago professor, Computation Institute Senior Fellow and Argonne Distinguished Fellow, leads a co-design center on a mission to strengthen and optimize processes for data analysis and reduction at the exascale.

“Exascale systems will be 50 times faster than existing systems, but it would be too expensive to build out storage that would be 50 times faster as well,” he said. “This means we no longer have the option to write out more data and store all of it. And if we can’t change that, then something else needs to change.”

Foster and other researchers in CODAR are working to bridge the gap between computation speed and the speed and capacity of storage by developing smarter, more selective ways of reducing data without losing important information.

There are many powerful techniques for data reduction, and CODAR researchers are studying various approaches. One such approach, lossy compression, reduces overall data size by discarding redundant or less important information. This is the technique used to transform the detail-rich images captured by our phone camera sensors into compact JPEG files. While data is lost in the process, the most important information ― the amount needed for our eyes to interpret the images clearly ― is preserved, and as a result, we can store hundreds more photos on our devices.

“The same thing happens when data compression is used as a technique for scientific data reduction. The important difference here is that scientific users need to precisely control and check the accuracy of the compressed data with respect to their specific needs,” said Argonne computer scientist Franck Cappello, who is leading the data reduction team for CODAR.
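
To give a flavor of what such error control can look like, here is a minimal Python sketch of error-bounded lossy compression (an illustration only, not CODAR’s software; all names and parameters are made up). It quantizes floating-point values so that no reconstructed value differs from the original by more than a user-chosen absolute error bound, the kind of guarantee Cappello describes.

    import numpy as np

    def compress_error_bounded(data, abs_error):
        # Uniform scalar quantization: each bin is 2*abs_error wide, so rounding
        # to a bin center keeps every value within +/- abs_error of the original.
        bin_width = 2.0 * abs_error
        indices = np.round(data / bin_width).astype(np.int64)  # small integers compress well downstream
        return indices, bin_width

    def decompress(indices, bin_width):
        # Reconstruct approximate values from the quantization indices.
        return indices * bin_width

    # Hypothetical example: a noisy simulation field of one million values.
    rng = np.random.default_rng(0)
    field = np.sin(np.linspace(0.0, 10.0, 1_000_000)) + 0.01 * rng.standard_normal(1_000_000)

    indices, bin_width = compress_error_bounded(field, abs_error=1e-3)
    reconstructed = decompress(indices, bin_width)

    # The user-specified error bound holds for every single value.
    print("max error:", np.max(np.abs(field - reconstructed)))  # never exceeds 1e-3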

Other data reduction techniques include the use of summary statistics and feature extraction.

Center for Efficient Exascale Discretizations (CEED)

CEED is working to improve another aspect of exascale computing ― how applications build their computer models. More specifically, its researchers are looking at the process of discretization, in which the continuous physics of a problem is represented by a finite number of grid points that together form the model of the system.

“Determining the best layout of the grid points and representation of the model is important for rapid simulation,” said computational scientist Misun Min, the Argonne lead in CEED.

Discretization is important for computer modeling and simulation because the process enables researchers to numerically represent physical systems, like nuclear reactors, combustion engines, or climate systems. How researchers discretize the systems they’re studying affects the amount and speed of computation at exascale. CEED is focused particularly on high-order discretizations that require relatively few grid points to accurately represent physical systems.

“Our aim is to enable more efficient discretization while still maintaining a high level of accuracy for the researcher. Greater efficiency will help minimize the number of calculations needed, which would in turn reduce the overall size of computation, and also enable relatively fast relay of information,” said Paul Fischer, a professor at the University of Illinois at Urbana-Champaign and Argonne computational scientist involved in CEED.
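
As a rough illustration of why high-order discretizations can use fewer grid points, the toy Python example below (not CEED’s actual spectral-element software) compares second-order and fourth-order finite-difference approximations of a derivative. On the same grid spacing, the higher-order formula is far more accurate, so a coarser grid suffices for a given error target.

    import numpy as np

    def derivative_2nd_order(f, x, h):
        # Second-order central difference: error shrinks like h**2.
        return (f(x + h) - f(x - h)) / (2.0 * h)

    def derivative_4th_order(f, x, h):
        # Fourth-order central difference: error shrinks like h**4, so a much
        # coarser grid reaches the same accuracy as a fine low-order grid.
        return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

    exact = np.cos(1.0)  # d/dx sin(x) evaluated at x = 1
    for h in (0.1, 0.05, 0.025):
        e2 = abs(derivative_2nd_order(np.sin, 1.0, h) - exact)
        e4 = abs(derivative_4th_order(np.sin, 1.0, h) - exact)
        print(f"h = {h:5.3f}   2nd-order error = {e2:.2e}   4th-order error = {e4:.2e}")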

Co-design Center for Particle Applications (CoPA)

Researchers behind CoPA are studying methods that model natural phenomena using particles, such as molecules, electrons or atoms. In high-performance computing, researchers can represent systems as collections of discrete particles, as smooth entities such as electromagnetic waves or sound waves, or as a combination of the two.

Particle methods span a wide range of application areas, including materials science, chemistry, cosmology, molecular dynamics and turbulent flows. When using particle methods, researchers characterize the interactions of particles with other particles and with their environment in terms of short-range and long-range interactions.
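
To make the short-range idea concrete, the simplified Python sketch below (a toy example, not one of CoPA’s actual building blocks) sums a Lennard-Jones pair potential and simply skips any pair of particles separated by more than a cutoff distance. Long-range forces such as gravity or electrostatics cannot be truncated this way and call for different algorithms.

    import numpy as np

    def short_range_energy(positions, cutoff, epsilon=1.0, sigma=1.0):
        # Sum a Lennard-Jones pair potential, counting only pairs closer than
        # `cutoff`; distant pairs contribute nothing, which is what makes the
        # interaction short-range. A naive O(N^2) loop is kept for clarity;
        # production codes use neighbor lists or cell lists instead.
        n = len(positions)
        energy = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(positions[i] - positions[j])
                if r < cutoff:  # skip distant pairs entirely
                    sr6 = (sigma / r) ** 6
                    energy += 4.0 * epsilon * (sr6 * sr6 - sr6)
        return energy

    # Hypothetical example: 200 particles placed at random in a 10 x 10 x 10 box.
    rng = np.random.default_rng(1)
    particles = rng.uniform(0.0, 10.0, size=(200, 3))
    print("short-range energy:", short_range_energy(particles, cutoff=2.5))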

“The idea behind the co-design center is that, instead of everyone bringing their own specialized methods, we identify a set of building blocks, and then find the right way to deal with the common problems associated with these methods on the new supercomputers,” said Computation Institute Senior Fellow Salman Habib, the Argonne lead in CoPA and a senior member of the Kavli Institute for Cosmological Physics at the University of Chicago.

“Argonne’s collaboration in this effort is in methods for long-range particle interactions as well as speeding up codes for short-range interactions; we work hard on what is needed to make codes run fast,” he said.

Block-Structured AMR Co-design Center

The Block-structured AMR Co-design Center focuses on making computation more efficient using a technique known as adaptive mesh refinement, or AMR.

AMR allows an application to achieve a higher level of precision at specific points or regions of interest within the computational domain and lower levels of precision elsewhere. In other words, AMR focuses computing power where it is most effective, delivering the most precise calculations at the lowest cost.
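
The core idea can be sketched in a few lines of Python (a toy one-dimensional illustration, not the center’s block-structured framework): flag for refinement only the cells where the solution changes sharply, so that extra grid points are spent only where they buy accuracy.

    import numpy as np

    def flag_cells_for_refinement(values, threshold):
        # Mark one-dimensional cells whose neighbor-to-neighbor jump exceeds
        # `threshold`; an AMR code would refine only these flagged regions
        # instead of the whole grid.
        jumps = np.abs(np.diff(values))
        flags = np.zeros(len(values), dtype=bool)
        flags[:-1] |= jumps > threshold  # flag the cell on each side of a big jump
        flags[1:] |= jumps > threshold
        return flags

    # Hypothetical example: a solution that is smooth everywhere except for a
    # sharp front near x = 0.5, the kind of localized feature AMR targets.
    x = np.linspace(0.0, 1.0, 64)
    u = np.tanh((x - 0.5) / 0.02)
    flags = flag_cells_for_refinement(u, threshold=0.2)
    print(f"refining {flags.sum()} of {flags.size} cells, all near the front")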

“Without AMR, calculations would require so much more resources and time,” said Anshu Dubey, the Argonne lead in the Block-Structured AMR Center and a fellow of the Computation Institute. “AMR helps researchers to focus the computational resources on features of interest in their applications while enabling efficiency in computing.”

AMR is already used in applications such as combustion, astrophysics and cosmology; now researchers in the Block-Structured AMR co-design center are focused on enhancing and augmenting it for future exascale platforms.

Image: Simulation of turbulence inside an internal combustion engine, rendered using the advanced supercomputing resources at the Argonne Leadership Computing Facility, an Office of Science User Facility. The ability to create such complex simulations helps researchers solve some of the world’s largest, most complex problems. (Credit: George Giannakopoulos.)