Turbulent Flow Simulation at the Exascale: Opportunities and Challenges Workshop

Sponsored by the U.S. Department of Energy
Office of Advanced Scientific Computing Research
The Mayflower Hotel, Autograph Collection
Washington, DC
August 4–5, 2015

Plenary Talks

Challenges on the Path to Exascale (9.23 MB)
Jeffrey A.F. Hittinger
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory

Abstract: The move to exascale computing is expected to be disruptive due to significant changes in computer architectures. Computational scientists will need to address new challenges in extreme concurrency, limited memory, data locality, resilience, and overall system and software complexity. Advances in applied mathematics will be necessary to realize the full potential of these supercomputers, but will these advances be incremental changes to existing methods or will exascale computing require a substantial rethinking of how we compute? Will the transition to exascale be evolutionary or revolutionary? Reflecting on the findings of the DOE Advanced Scientific Computing Research Program Exascale Mathematics Working Group, Dr. Hittinger will provide his perspective on the path to exascale and the opportunities for new applied mathematics research that will enable exascale computing.

Speaker Bio: Jeffrey Hittinger is a Computational Scientist and Group Leader in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory where he works on numerical methods for computational plasma physics, adaptive mesh refinement, a posteriori error estimation, and multi-physics code and calculation verification. He studied Mechanical Engineering as an undergraduate at Lehigh University and attended the University of Michigan for his graduate studies, earning Master's degrees in Aerospace Engineering and in Mathematics and his doctorate in Aerospace Engineering and Scientific Computing. Dr. Hittinger was a recipient of both the NSF Graduate Research and the DOE Computational Science Graduate Fellowships.

Block-structured AMR for Exascale Applications (5.17 MB)
John Bell
Center for Computational Sciences and Engineering, Lawrence Berkeley National Lab https://ccse.lbl.gov/people/jbb

Abstract: Block-structured adaptive mesh refinement (AMR) has been successfully used in a number of multiphysics petascale applications. The ability of AMR to localize computational effort where needed, reducing both memory requirements and computational work, makes it a key technology for exascale applications. In this talk, I will discuss the basic concepts of AMR and how discretizations of simple partial differential equations on adaptively refined grids are constructed. I will then discuss how to combine the basic discretization methodology into algorithms for more complex multiphysics applications based on decomposition of the problem into component processes. Finally, I will discuss some of the implementation issues associated with AMR and describe some of the research issues needed to make AMR an effective tool for exascale computing.

Speaker Bio: John Bell is a Senior Staff Mathematician at Lawrence Berkeley National Laboratory and Chief Scientist of Berkeley Lab's Computational Research Division. His research focuses on the development and analysis of numerical methods for partial differential equations arising in science and engineering. He has made contributions in the areas of finite volume methods, numerical methods for low Mach number flows, adaptive mesh refinement, stochastic differential equations, interface tracking and parallel computing. He has also worked on the application of these numerical methods to problems from a broad range of fields, including combustion, shock physics, seismology, atmospheric flow, flow in porous media, mesoscale fluid modeling and astrophysics. He is a Fellow of the Society for Industrial and Applied Mathematics and a member of the National Academy of Sciences.

Path to high-order unstructured-grid exascale CFD (7.74 MB)
Paul Fischer
Mathematics and Computer Science, Argonne National Laboratory, and Mechanical Science and Engineering, University of Illinois at Urbana-Champaign

Abstract: Petascale computing platforms currently feature million-way parallelism and it is anticipated that exascale computers with billion-way concurrency will be deployed in the early 2020s. In this talk, we explore the potential and difficulties of PDE-based simulation at these scales with a focus on turbulent fluid flow and heat transfer in a variety of applications including nuclear energy, combustion, oceanography, vascular flows, and astrophysics. Using data from leading-edge platforms over the past 25 years, we analyze the scalability of state-of-the-art solvers to predict parallel performance on exascale architectures. With the end of frequency scaling, the principal avenue for increased performance is through greater concurrency, which favors solution of larger problems rather than faster solution of today's problems. We analyze these trends in order to shed light on the expected scope of next generation simulations and to provide insight to design requirements for future algorithms, codes, and architectures.

Speaker Bio: Paul Fischer is a Blue Waters Professor at the University of Illinois, Urbana-Champaign in the departments of Computer Science and Mechanical Science & Engineering. He received his Ph.D. in mechanical engineering from MIT and as a post-doc held the Center for Research in Parallel Computation prize fellowship at Caltech. Fischer is a senior scientist (part-time) in the Mathematics and Computer Science Division at Argonne National Laboratory, where he leads development efforts in high-order methods for scalable fluid dynamics simulations. He is also deputy director of the DOE Center for Exascale Simulation of Advanced Reactors and is the architect of the open source fluid dynamics/heat transfer code Nek5000, which is currently used by more than 250 researchers for a variety of applications in turbulent and transitional flows. Nek5000 has scaled beyond a million ranks and has been recognized with the Gordon Bell Prize in high-performance computing.

Simulation of Turbulent Flow on Emerging HPC – An Aerospace Perspective (2.80 MB)
Jeffrey Slotnick
Boeing Research & Technology, The Boeing Company

Abstract: As high-fidelity, physics-based computational fluid dynamics (CFD) simulation expands into the conceptual and preliminary design phases of aerospace systems, many of which involve highly complicated geometry and complex flow physics, the need to accurately and reliably predict turbulent, viscous flow fields with significant flow separation is paramount. Further, the efficient use of these CFD tools and processes on emerging exascale computing architectures presents significant challenges. This talk will focus on the current state of turbulent-flow CFD simulation and its key technical issues, drawing heavily from the NASA Vision 2030 CFD study report and from recent discussions of this topic within the aerospace CFD community. Recommended research thrusts will be described, along with opportunities for collaboration between DOE ASCR and the aerospace engineering discipline.

Speaker Bio:

  • Technical Fellow, Boeing Research & Technology, Huntington Beach, CA
  • 28 years of experience in applied CFD tool and process development, primarily in external aerodynamics, with particular emphasis on overset grid CFD technology and applications
  • Current research interests include high-lift aerodynamics, formation flight, and wake and vortical flow prediction capability

Note: The presentations are in Adobe PDF format and may contain large graphics and images, resulting in large file sizes. Please note the file size listed beside each presentation that is 1 MB or larger and take it into consideration before downloading. Depending on your Internet connection, large files may take longer to download.

Adobe Reader is necessary to view PDF files. If you don't have the latest version of Reader, you can download a free copy at the Adobe download site.