Workshop on Extreme-Scale Solvers: Transition to Future Architectures
March 8-9, 2012
American Geophysical Union, Washington, DC
Jim Ang, Al Geist, Mike Heroux, Paul Hovland, Osni Marques, Lois Curfman McInnes, Esmond Ng
ASCR Point of Contact: Karen Pao
Workshop Description: The objective of this 1½-day workshop is to bring together experts in the development of scalable solvers to determine the research areas needed for extreme-scale algorithms and software to effectively utilize the 100 PF systems and prepare for the exascale systems. The 100 PF systems are expected to be evolutionary modifications of the 20 petaflop-class systems being deployed today, likely employing variants of today's CPU and GPU technologies. The yet-to-be-determined architectures of extreme-scale systems beyond these 100 PF systems are expected to be radically different; nevertheless, these extreme-scale systems share common characteristics, independent of the actual system designs, that will require serious rethinking of today's numerical algorithms for large-scale scientific simulations. Opportunities may exist for the solver community to influence the design of future extreme-scale computers.
Architectural features that are the most salient to the design, implementation, and deployment of all numerical algorithms for parallel, high-performance scientific computing include:
- Extreme Parallelism: Parallelism is expected to grow by 2 to 3 orders of magnitude over today's levels, requiring solvers to pay particular attention to Amdahl's Law.
- Communication: Minimizing data movement will be the key to performance as well as a primary way for solvers to reduce power consumption.
- Resilience: The number of failures is expected to increase with concurrency, requiring solvers that can run through or detect and recover from faults.
- Heterogeneous Architectures: Heterogeneity will be required to meet the power constraints of the 100 PF systems, but little has been done to develop portable, hybrid codes that can run across different types of architectures.
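The Amdahl's Law concern above can be made concrete with a back-of-the-envelope calculation. The sketch below (an illustrative example, not part of the workshop material; the serial fractions and core counts are hypothetical) shows why even a vanishingly small serial fraction caps achievable speedup at extreme concurrency:

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Maximum speedup under Amdahl's Law for a code in which
    serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# With only 0.001% serial work, scaling from 10^6 to 10^8 cores
# improves speedup by barely 10%, not by the factor of 100 the
# added hardware would suggest.
for cores in (1e6, 1e8):
    s = amdahl_speedup(1e-5, cores)
    print(f"{cores:.0e} cores, 0.001% serial work: {s:,.0f}x speedup")
```

This is why solvers aimed at extreme scale must aggressively eliminate serial bottlenecks (global reductions, sequential setup phases) rather than rely on added concurrency alone.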
In this workshop we will identify how these architectural features will affect the design and implementation of numerical algorithms, especially linear algebra algorithms such as direct and iterative linear solvers, nonlinear solvers, and eigensolvers, which are often the most computationally intensive parts of HPC science application codes. We will also consider other related capabilities required for successful solver deployment. Workshop participants will explore issues such as:
- What do mathematicians and computer scientists need to know about future architectures to be able to write efficient, robust, scalable, portable, high-performance solvers?
- Are there architectural features that are needed or desired for the development of extreme-scale solvers?
- What do code developers need in terms of development tools and programming environments to deal with extreme parallelism and minimization of communication and data movement?
- What are some architecture-specific and architecture-independent solutions, and what are their respective strengths and weaknesses for extreme-scale solvers?
- How do we evolve from today's solvers to the extreme-scale algorithms and software needed for these 100 PF systems and architectures of the future?
It is expected that the research issues discussed during this workshop will impact algorithmic research beyond linear algebra, in areas such as partial differential equations, mathematical programming and optimization, computational stochastics, uncertainty quantification, mesh generation, and many others relevant to modeling and simulation in HPC science applications.