Poster Sessions

Monday, September 10 | Tuesday, September 11

Posters are available for download in PDF format below. Adobe Reader is necessary to view PDF files. If you don't have the latest version of Reader, you can download a free copy at the Adobe download site.

Note: Some of these files contain large graphics and images, resulting in a large file size. File sizes of 1 MB or larger are listed beside each poster; please take this into account before downloading. Depending on your internet connection, large files may take considerably longer to download.


Monday, September 10, 2012 - Institute Posters

Day 1 Poster Introduction (22.07 MB)

Poster # 1
Title/Topic: Mesh and Solver Interoperability in FASTMath (1.06 MB)
Lead Contact: Adams, Mark
Affiliation: Columbia Univ

One of the goals of FASTMath is the interoperability of existing FASTMath software. This poster presents examples of recent FASTMath efforts to improve the interoperability of both structured and unstructured mesh frameworks with external solvers. We show examples of the block-structured AMR frameworks BoxLib and Chombo utilizing the hypre and PETSc libraries for linear and nonlinear solves on adaptive mesh hierarchies. In the case of unstructured mesh methods, current efforts are focused on the “in-memory” integration of the MeshAdapt component with existing PDE analysis procedures (Albany, a demonstration application employing the Trilinos framework, and the PHASTA CFD code).

Poster # 2
Title/Topic: Block-Structured AMR: Algorithms and Applications (824 KB)
Lead Contact: Almgren, Ann
Affiliation: LBNL

Block-structured adaptive mesh refinement (AMR) is a powerful tool for computing solutions to partial differential equations involving multiple length and time scales. BoxLib and Chombo are software frameworks for the development of parallel block-structured AMR applications. They serve as the basis for a wide variety of application codes and are freely available for the development of new algorithms and applications. They contain support for finite difference discretization of partial and ordinary differential equations, as well as embedded boundary support for the representation of complex geometries, support for particle-mesh algorithms within the AMR framework, and higher-order methods in time and space.

Poster # 3
Title/Topic: Architecture-aware Themes in FASTMath Algorithms and Software (1.44 MB)
Lead Contact: Brown, Jed
Affiliation: ANL

Awareness of architecture features has become increasingly important in the design and implementation of efficient mathematical algorithms for HPC. In particular, the high cost of data motion relative to floating point operations must be taken into account, favoring algorithms that reduce or avoid data movement both from main memory and between compute nodes. This includes the proper layout of data across memory sockets, which is crucial to achieving high performance within multithreaded (OpenMP or Pthreads) code. Additionally, large-scale concurrency and machine jitter require algorithms that avoid strong synchronization such as blocking reductions.
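
As a concrete illustration of the last point (a generic sketch, not code from the poster), a distributed dot product can use the nonblocking MPI-3 collective MPI_Iallreduce so that independent local work overlaps the reduction, instead of stalling in a blocking MPI_Allreduce:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    double local_dot = 1.0, global_dot = 0.0;  /* per-rank partial result */
    MPI_Request req;

    /* Start the global sum without blocking. */
    MPI_Iallreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* ... independent local work (e.g., a local matrix-vector product)
       proceeds here while the reduction is in flight ... */

    /* Complete the reduction only when the result is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("global dot = %f\n", global_dot);
    MPI_Finalize();
    return 0;
}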

Poster # 4
Title/Topic: Data Partitioning for Scientific Applications and Emerging Architectures (285 KB)
Lead Contact: Devine, Karen
Affiliation: SNL

For years, the data partitioning problem focused only on balancing processor workloads while minimizing interprocessor communication in traditional finite element/volume methods and linear solvers. The combination of sophisticated applications (with irregular data dependencies and multiple load-balance constraints) and heterogeneous architectures (with multiple levels of processing and memory hierarchy) creates new challenges for distributing data among processors. We present FASTMath's tools and research that are addressing these issues. Specifically, we present two toolkits, Zoltan and PARMA, which provide dynamic and static geometric and topology-based partitioners, mesh-based partition optimization tools, and hierarchical partitioning strategies for use in a wide range of applications.

Poster # 5
Title/Topic: FASTMath Nonlinear Solvers and Time Integrators (1.69 MB)
Lead Contact: Salinger, Andrew
Affiliation: SNL

Since different science applications result in diverse systems of nonlinear equations and ODEs, the “best” solver for a given problem is often a semi-custom approach, composed from a small number of building blocks, that best balances attributes such as robustness, performance, extensibility, and development effort. FASTMath provides such software building blocks, including Newton methods, matrix-free Newton methods, accelerated fixed-point algorithms, stiff integrators, and explicit integrators through SUNDIALS, KINSOL, PETSc and NOX, and leveraging FASTMath linear solvers. One active development area is IMEX methods, which compose implicit integrators for the stiff part of an ODE with explicit integrators for the rest.
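
To make the IMEX idea concrete, consider a textbook illustration (not drawn from the poster itself): an ODE split additively into nonstiff and stiff parts, $u' = f_E(u) + f_I(u)$. The simplest first-order IMEX scheme advances the nonstiff term explicitly and the stiff term implicitly,

$$ u^{n+1} = u^n + \Delta t \, f_E(u^n) + \Delta t \, f_I(u^{n+1}), $$

so each step requires an implicit solve involving only $f_I$, while the nonstiff term never enters the nonlinear solve.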

Poster # 6
Title/Topic: FASTMath Unstructured Mesh Techniques (3.59 MB)
Lead Contact: Shephard, Mark
Affiliation: RPI

Unstructured meshes, often adaptively defined, can yield required levels of accuracy using many fewer degrees of freedom, at the cost of more complex parallel data structures and algorithms. The FASTMath efforts in this area are focused on the parallel unstructured mesh data structures and services needed by the developers of PDE solution procedures. Current efforts are addressing parallel techniques for mesh representations on exascale computers (iMeshP); mesh-to-mesh solution transfer (MBCoupler); mesh optimization (Mesquite) and mesh adaptation (MeshAdapt), including for high-order curved meshes; a massively parallel unstructured CFD solver (PHASTA); and a library to coordinate the execution of meshing operations (MeshKit).

Poster # 7
Title/Topic: Numerical Algorithms for Large-scale Eigenvalue Calculation (1.15 MB)
Lead Contact: Yang, Chao
Affiliation: LBNL

Large-scale eigenvalue problems arise in a number of SciDAC applications. Although significant progress has been made in the last few decades in numerical algorithms for solving this type of problem, the existing eigensolvers are not well suited for multi-/many-core systems that consist of hundreds of thousands of cores (e.g., the leadership class machines at DOE labs). Eigensolvers that can exploit multiple levels of parallelism and leverage efficient computational kernels available on modern multi/many core machines must be developed to enable large-scale simulations. This is one of the goals of the DOE FASTMath SciDAC institute. We will show a number of potential approaches that we are pursuing.

Poster # 8
Title/Topic: Linear System Solution in FASTMath Software (2.88 MB)
Lead Contact: Yang, Ulrike
Affiliation: LLNL

Sparse linear solvers are an important part of many application codes and can consume more than half of the total run time. Therefore, an efficient implementation is critical to enabling large-scale simulations on high performance computers. However, the new generation of systems with multi-/many-core nodes and million-way parallelism presents new challenges for algorithm designers. We will discuss algorithmic advances for linear system solvers in four FASTMath software tools: PETSc, SuperLU, Trilinos and hypre. Topics include multithreading key sparse matrix kernels, new scheduling strategies for sparse LU factorizations, and communication-reducing variants of multigrid methods.

Poster # 9
Title/Topic: SUPER’s Green HPC: Improving the Energy Efficiency of DOE HPC (1.66 MB)
Lead Contact: Carrington, Laura
Affiliation: SDSC

SUPER's Energy thrust is charged with understanding how computation and communication patterns affect the overall energy requirements of HPC applications. We then leverage this understanding to design software- and hardware-aware optimization techniques that reduce the DOE's HPC energy footprint. Three complementary sub-thrusts have emerged: research on power measurement devices, software solutions that provide fine-grained access to the power measurements, and energy-efficiency research that utilizes the first two to develop green optimization strategies. We highlight recent accomplishments across all three research themes and present empirical results that illustrate SUPER's contributions to minimizing DOE's HPC energy requirements.

Poster # 10
Title/Topic: Building a Resilience Autotuning Framework (601 KB)
Lead Contact: de Supinski, Bronis R.
Affiliation: LLNL

SUPER is creating a comprehensive tool set to evaluate application fault vulnerability, to reduce it and to assess trade-offs between reduced vulnerability and performance. Our overall goal is automatic vulnerability assessment and transformations that balance those concerns. To attain this goal, we are developing tools to inject faults into applications to identify vulnerable code regions and data structures, for which we develop techniques to reduce overall application vulnerability. Hand transformations demonstrate possible improvements and motivate directives to guide transformations that we automate through ROSE. That work naturally leads to an autotuning framework that explores combinations of vulnerability-reducing transformations and identifies the best overall set.

Poster # 11
Title/Topic: Tuning for Multiple Objectives: Power versus Performance (2.5 MB)
Lead Contact: Hovland, Paul
Affiliation: ANL

When a single objective, such as execution time, is available, the autotuning search problem can be posed as a numerical optimization problem. However, it is increasingly common to have multiple objectives, such as execution time, energy consumption, resilience to errors, power demands, and memory footprint. When the relative weights or constraints on these objectives are not known at search time, one must pose the autotuning search problem as a multi-objective optimization problem. We describe the optimization framework for search, discuss some of the potential tradeoffs among multiple objectives, and provide empirical evidence that such tradeoffs do exist in practice.
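
In schematic form (standard notation, hedged here rather than quoted from the poster), the search is posed over a configuration space $X$ with objectives $f_1, \ldots, f_k$ such as time and energy:

$$ \min_{x \in X} \; F(x) = \bigl( f_1(x), \ldots, f_k(x) \bigr). $$

A configuration $x^*$ is Pareto-optimal if no $x \in X$ satisfies $f_i(x) \le f_i(x^*)$ for every $i$ with strict inequality for at least one $i$; when weights are unknown, the search returns an approximation of this Pareto front rather than a single optimum.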

Poster # 12
Title/Topic: Tool Integration and Autotuning for SUPER Performance Optimization (1.55 MB)
Lead Contact: Malony, Allen
Affiliation: Univ Oregon

Methods for performance optimization of petascale applications must address the growing complexity of new HPC hardware/software environments, which limits the effectiveness of manual efforts at performance problem triage, software transformation, and static/dynamic parameter configuration. Greater performance tool integration and tuning process automation are necessary to manage and share performance information, to correctly generate multiple code variants, to conduct controlled performance experiments, and to efficiently search for and discover high-performing solutions, thereby improving performance portability overall. SUPER is advancing autotuning capabilities through the coupling of performance measurement, analysis, and database tools, compiler and program translators, and autotuning frameworks.

Poster # 13
Title/Topic: Approaches to and Tools for Project Performance Tracking and Analysis (857 KB)
Lead Contact: Worley, Patrick
Affiliation: ORNL

Performance optimization begins with performance data collection. To identify and prioritize performance problems, it is important to collect data from production runs and on the complete workflow ('end-to-end'). As many factors can influence performance, multiple views of performance from the application code and from the system must be correlated and merged ('holistic'). SUPER is identifying and further developing tools and methodologies for collecting end-to-end and holistic performance data. An example is presented of using such data to quantify the impact and identify possible sources of performance variability in production runs of the Community Earth System Model.

Poster # 14
Title/Topic: Algorithmic developments for UQ in extreme scale simulations (4.12 MB)
Lead Contact: Debusschere, Bert
Affiliation: SNL

In this poster, we present algorithmic advances for UQ in extreme-scale simulations: handling high dimensionality in stochastic space, detecting sparsity, and treating uncertainty in upscaling methods. To handle high-dimensional uncertainties, we are employing adaptive refinement with dimension-adaptive and locally-adaptive collocation approaches based on structured grids, and enhancement of global surrogates with adjoint-based local derivative information. We are pursuing robust compressive sensing and Bayesian compressive sensing algorithms to build sparse polynomial chaos representations from random samples. For uncertainty in upscaling approaches, we are characterizing uncertainty in subgrid models inferred from atomistic simulations, and the propagation of this uncertainty into continuum simulations.
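
As a sketch of the sparse-recovery step (standard compressive sensing formulation; the poster's algorithms may differ in detail), given model samples $f(\xi^{(q)})$ and polynomial chaos basis functions $\Psi_k$, one seeks coefficients $c$ solving

$$ \min_c \|c\|_1 \quad \text{subject to} \quad \sum_q \Bigl( f(\xi^{(q)}) - \sum_k c_k \Psi_k(\xi^{(q)}) \Bigr)^2 \le \epsilon^2, $$

where the $\ell_1$ penalty drives most coefficients to zero, yielding a sparse PC representation from relatively few random samples.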

Poster # 15
Title/Topic: QUEST Software Tools for Uncertainty Propagation and Inference (1.79 MB)
Lead Contact: Eldred, Michael
Affiliation: SNL

QUEST provides a set of software tools for uncertainty quantification (UQ) that supports both a range of studies, including sensitivity analysis, uncertainty propagation, and statistical inference, and a range of environments, from rapid prototyping in interpreted languages to production computing in compiled languages on parallel platforms. DAKOTA provides a production tool supporting UQ algorithms, simulation interfacing, and computing services. QUESO specializes in parallel Bayesian methods with advanced posterior estimation approaches. GPMSA provides emulator-based Bayesian inference within a MATLAB environment, and UQTk provides a lightweight C++ library for intrusive UQ. These interoperable tools are being actively extended to address key SciDAC challenges.

Poster # 16
Title/Topic: Reduced and Adapted Models for UQ (544 KB)
Lead Contact: Ghanem, Roger
Affiliation: USC

Modeling, analysis, and quadrature in high dimensions are among the key challenges of UQ. The complexity of these procedures is typically tied to the complexity of model parameters, which are themselves introduced with high-dimensional representations. We tackle this problem by focusing attention on quantities of interest, which are often very low-dimensional. We develop novel upscaling procedures and associated quadrature rules that reduce the computational burden of UQ by orders of magnitude. Simultaneously, we develop and demonstrate novel procedures for updating stochastic representations based on knowledge acquired in the form of constraints on functionals of the quantities of interest. These updated representations provide a very effective mechanism for sampling from posterior distributions.

Poster # 17
Title/Topic: LANL UQ Interactions with other SciDAC Efforts
Lead Contact: Higdon, David
Affiliation: LANL

Los Alamos National Laboratory’s (LANL) component of the QUEST effort will focus on collaboration with three of the SciDAC efforts: Nuclear Computational Low Energy Initiative; Plasma Surface Interactions: Bridging from the Surface to the Micron Frontier through Leadership Class Computing; and Computation-Driven Discovery for the Dark Universe. While these constitute a diverse collection of scientific efforts, the expected UQ effort within these projects will have some common themes. These include: sensitivity analysis – understanding and quantifying the impact of model inputs on key model outputs; model calibration – combining physical measurements with model runs to estimate/constrain important model parameters; bridging scales – using a collection of models, or model fidelities, to bridge spatial (and/or temporal) scales for the physical processes of interest; and accounting for model discrepancy – modeling the difference between model predictions and the true physical process, producing more reliable uncertainties associated with model predictions. This poster briefly describes the UQ activities proposed in each of these projects and connects them to broader UQ themes and research areas.

Poster # 18
Title/Topic: Development and A Priori Testing of a Sparse Adaptive Pseudospectral PC Representation (110 KB)
Lead Contact: Knio, Omar
Affiliation: Duke Univ

A crucial aspect of applying uncertainty quantification methods to complex models concerns the ability to sample quantities of interest efficiently. This poster presentation discusses the development and implementation of a sparse, adaptive sampling strategy to efficiently and reliably construct suitable polynomial chaos (PC) surrogates. The approach relies on a pseudospectral construction that accommodates arbitrary admissible sparse grids. An a priori analysis of the implementation and performance of the algorithm is conducted, based on existing databases of ocean global circulation simulations. The tests demonstrate that in the present setting order-of-magnitude savings are obtained over isotropic sparse sampling.
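
For context, a pseudospectral PC construction (stated in its standard form, not necessarily the poster's exact notation) approximates a quantity of interest $f$ as

$$ f(\xi) \approx \sum_{k \in \mathcal{K}} c_k \Psi_k(\xi), \qquad c_k = \frac{1}{\langle \Psi_k^2 \rangle} \sum_q w_q \, f(\xi^{(q)}) \, \Psi_k(\xi^{(q)}), $$

with the coefficients evaluated by quadrature on the sparse grid; adaptivity then selects which multi-indices $k$, and which grid refinements, to admit.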

Poster # 19
Title/Topic: Technologies for Scientific Visualization (3.71 MB)
Lead Contact: Ahrens, James
Affiliation: LANL

A major challenge faced by scientists today is the increasing rate of data generation by simulations and experimental devices. In this poster, we describe how SDAV tools, libraries, and algorithms can scalably and efficiently process these massive results. An additional major challenge is that the complexity of data generated from multi-parameter, multi-physics, multi-scale simulations is also increasing at a significant rate. Features in these datasets often manifest in a wide range of spatial and temporal scales, and a complete development of a phenomenon often traverses these scales. We highlight the need for exploratory visualization techniques that support understanding ensembles of results, methods for quantifying uncertainty, and tools for visually exploring and understanding features.

Poster # 20
Title/Topic: Recent Achievements in Visual Data Exploration and Analysis for Climate Science
Lead Contact: Bethel, Wes
Affiliation: LBNL

Climate science projects, like many areas of computational science, face substantial challenges that result from increasing data size and complexity. Legacy tools, which have served well for decades, are being replaced with new technologies better suited to run on advanced computational platforms and to enable knowledge discovery on increasingly more complex and challenging lines of scientific inquiry. We present recent results of several different projects, conducted in collaboration with the climate science community, that show the impact of these technological advances.

Poster # 21
Title/Topic: SDAV Support to Combustion Research with Visualization, Analysis, and Data Movements
Lead Contact: Gyulassy, Attila
Affiliation: Univ Utah

Combustion accounts for the majority of the world's energy needs, and scientists are developing increasingly large and complex simulations to gain better insight into clean and efficient fuels and burning devices. Visualization and analysis algorithms are integral to answering science questions about combustion; however, these algorithms must be executed concurrently with the simulations without negatively impacting their performance. We present recent results where in-situ and in-transit paradigms are used to achieve efficient topological analysis and high-resolution visualizations that are well coupled with combustion simulations via high-throughput data movements that minimize any performance overhead.

Poster # 22
Title/Topic: SDAV technologies for next-generation fusion techniques
Lead Contact: Klasky, Scott
Affiliation: ORNL

Plasma fusion simulations are among the most complex computations; they use the largest computing facilities and generate unprecedented amounts of data. They have been and will be a driving force behind the research and development of data management technologies that the SDAV Institute brings together to address the challenges that an increasing number of scientists face when managing their data. In this poster we present how SDAV technologies have been applied to various fusion models (energetic particles, gyrokinetics, MHD, RF). Computational fusion is moving beyond verification of simulations towards building the next-generation validation framework, and thus requires an orchestrated management of experimental and simulation data. We highlight the directions in the development of our technologies to provide a scalable and flexible framework for these new challenges.

Poster # 23
Title/Topic: Advanced Data Analysis Techniques for Science Discovery
Lead Contact: Pascucci, Valerio
Affiliation: Univ Utah

The evolution of modern science has been characterized by an exponential increase in use and production of data generated by simulations, experiments and sensing devices. Therefore, data analysis tools have become an essential component of the scientific investigation and discovery process. In this poster, we present a variety of data analysis tools that the SDAV institute is making available to the scientific community. To serve a broad set of needs, the tools include new advanced techniques in many diverse fields such as statistics, mining and topology. Moreover, they are deployed in advanced software components that deal efficiently with massive data models and provide effective solutions for the challenges of data intensive science.

Poster # 24
Title/Topic: Visual and Numerical Data Analysis of Large-Scale Structures in Cosmological Simulations (381 KB)
Lead Contact: Peterka, Tom
Affiliation: ANL

Cosmological data are often analyzed through large-scale structures such as halos, voids, filaments, and walls. The visual and statistical identification of such features enables simulated matter tracer particles to be correlated with telescope surveys and ultimately informs how gravity shapes the distribution of galaxies in the universe, driving inferences about the properties of dark energy and dark matter. We present a set of cosmological tools used both in situ and in postprocessing for scientists to locate, track, and evaluate the evolution of such structures. We highlight one use of this tool chain to derive an in situ Voronoi tessellation of the distribution of matter tracer particles and to further analyze the derived mesh in ParaView in order to identify cosmological voids through a robust set of metrics called Minkowski functionals.
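
For reference (standard definitions, not specific to this tool chain), the Minkowski functionals of a three-dimensional body $K$ with boundary $\partial K$ are, up to normalization, its volume, surface area, integrated mean curvature, and Euler characteristic:

$$ V_0 = V, \qquad V_1 \propto A, \qquad V_2 \propto \oint_{\partial K} \tfrac{1}{2} \Bigl( \tfrac{1}{R_1} + \tfrac{1}{R_2} \Bigr) \, dA, \qquad V_3 \propto \chi, $$

where $R_1$ and $R_2$ are the principal radii of curvature; together they provide a compact, robust shape signature for a candidate void.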

Poster # 25
Title/Topic: Technologies for Extreme-Scale Data Management
Lead Contact: Ross, Rob
Affiliation: ANL

Simulations are generating an unprecedented amount of data, facilitated by the rapidly increasing computational capabilities of leading compute resources. Additionally, the systems on which these simulations execute are becoming increasingly complex. Organizing, transforming, indexing, and reducing data to enable effective analysis, as well as carefully managing data movement and orchestrating data analysis in complex system architectures, are significant challenges in using extreme scale systems for scientific discovery. We present an overview of the data management tools that the SDAV Institute brings to bear on these challenges.

Poster # 26
Title/Topic: Scalable Data Management, Analysis and Visualization of Particle Accelerator Simulation Data (287 KB)
Lead Contact: Ruebel, Oliver
Affiliation: LBNL

Particle accelerators are among the most versatile and important tools of scientific discovery. Large-scale, high-resolution simulations are essential to the development of next generation accelerators, such as the electron linacs for a proposed next-generation x-ray free electron laser (FEL) at Lawrence Berkeley National Laboratory (LBNL), and studies of novel accelerator concepts, such as laser plasma particle acceleration. The datasets produced by modern particle-in-cell simulations are extremely large, of varying spatial and temporal resolution, heterogeneous and high-dimensional, making analysis and knowledge discovery from complex particle accelerator simulation data a challenging task. We present recent results of several different projects, conducted in close collaboration with accelerator scientists and simulation experts, demonstrating the application of state-of-the-art data management, analysis and visualization technology to automatically detect particle beams in laser-plasma accelerators, to facilitate query-driven beam diagnostics in linacs and to perform in situ analysis and visualization.


Tuesday, September 11, 2012 - Application Posters

Day 2 Poster Introduction (57.45 MB)

Poster # 1
Title/Topic: Parameterizations and Performance in Multi-Scale Ocean Models (1.98 MB)
Lead Contact: Evans, Kate
Affiliation: ORNL

This poster summarizes our progress in developing a global ocean model and discusses challenges that have been identified. Our global ocean model employs unstructured, multi-resolution horizontal meshes that are structured in the vertical and are capable of using a variety of vertical coordinates. The presence of multiple horizontal length scales requires parameterizations that are "scale-aware", meaning they act appropriately, without ad hoc tuning, across a wide range of grid-length scales. The performance of unstructured meshes has historically been a concern for many ocean model developers. Computational performance of this new ocean model suggests that this concern is largely mitigated when the vertical coordinate is structured, thus allowing direct addressing along the fastest-varying index. Our model development effort faces challenges in developing scale-aware ocean parameterizations within an unstructured global ocean model system that is computationally efficient enough to contribute to IPCC-class climate simulations.

Poster # 2
Title/Topic: Introducing scale awareness into a subgrid parameterization (182 KB)
Lead Contact: Larson, Vincent
Affiliation: Univ Wisconsin

An excessive dependence of parameterized cloud simulations on the horizontal grid spacing is undesirable, because it requires re-tuning the cloud parameterization at each different grid spacing.  Rather, it would be preferable to develop a cloud parameterization that takes into account the horizontal grid spacing and adjusts appropriately, so that the model yields similar results over a range of horizontal grid spacings.  This poster presents a simple, provisional method to introduce scale awareness into cloud parameterizations that contain a scalar dissipation equation.  Simulations of a shallow cumulus case and a shallow stratocumulus case are run in a 3D configuration at 2-km, 4-km, and 16-km horizontal grid spacings.  The boundary-layer simulations that include CLUBB exhibit a reduced sensitivity to horizontal grid spacing.  However, further work is needed to improve the behavior of deep convective cases over a range of grid spacings.

Poster # 3
Title/Topic: Computational Challenges in the ACES4BGC Project (1.93 MB)
Lead Contact: Hoffman, Forrest
Affiliation: ORNL

The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project will advance the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, to deliver a second-generation ESM with improved representation of biogeochemical interactions at the canopy-to-atmosphere, river-to-coastal ocean, and ocean-to-atmosphere interfaces. Significant computational challenges emerge from the scientific requirement to support large numbers of reactive biogeochemical tracers across ESM component models. In particular, a new computationally efficient, accurate, and conservative tracer advection scheme is critically needed. Moreover, routine monitoring and performance optimization are required to assure project success.

Poster # 4
Title/Topic: Verification, Validation, and Uncertainty Quantification for PISCEES (22.75 MB)
Lead Contact: Jackson, Charles
Affiliation: Univ Texas-Austin

An important theme of the PISCEES project is the development of methods and software tools for integrating observations and idealized solutions into activities for the verification, validation, and uncertainty quantification of the next-generation Community Ice Sheet Model (CISM). The objective of these exercises is to establish the scientific credibility and to quantify uncertainties when using CISM for predicting changes in ice sheet mass in response to environmental change. Our initial focus is on developing a diagnostics package for documenting model experiments, computational performance metrics, and generating a suite of figures for comparing model output against observations. We have also evaluated the impacts of uncertainties in estimates of surface mass balance on the ice sheet model’s response to future global warming.

Poster # 5
Title/Topic: Ice Sheet Model Dynamical Core Development for PISCEES (743 KB)
Lead Contact: Martin, Daniel
Affiliation: LBNL

Under the PISCEES project, we are developing two advanced dynamical cores ("dycores") for ice sheet models: (1) BISICLES, a scalable, finite-volume, block-structured adaptive mesh refinement (AMR) dycore based on the Chombo framework; and (2) FELIX, a finite-element based dycore on an unstructured mesh, which uses the MPAS framework and the Trilinos solver libraries. Both BISICLES and FELIX build on demonstrated successes in modeling the complex dynamics of ice sheets. Under PISCEES, both will undergo major enhancements in physical fidelity and computational performance. Further, both will be interfaced with the Community Ice Sheet Model (CISM) to make them broadly useful as next-generation ice sheet models and as components of the Community Earth System Model (CESM).

Poster # 6
Title/Topic: Electron binding energies of hydrated hydroxide and hydronium ions: ab initio molecular dynamics combined with GW quasiparticle theory (432 KB)
Lead Contact: Wu, Xifan
Affiliation: Temple Univ

The solvation structures of hydronium and hydroxide ions are investigated with density functional theory (DFT) based ab initio molecular dynamics simulations. The PBE (GGA) exchange-correlation functional is adopted, and each solvated ion is simulated within a supercell containing 64 water molecules. An elevated temperature of 330 K is adopted in order to roughly approximate the quantum effects of the H nuclei. The solvation structures are analyzed, followed by a static GW electronic structure calculation that, for the first time, includes the local fields due to the inhomogeneity of the medium within the Hybertsen-Louie ansatz. The resulting defect spectra (positions and linewidths) of both hydronium and hydroxide ions are found to be in close agreement with a recent photoemission experiment. Furthermore, a detailed orbital analysis reveals that the defect states originate from distorted 1b1 (lone pair) and 1b2 states for the solvated hydroxide and hydronium ions, respectively.

Poster # 7
Title/Topic: New Computational Methodologies for Accelerating ab initio Molecular Dynamics Simulation (822 KB)
Lead Contact: Lin, Lin
Affiliation: LBNL

We propose new computational methodologies for accelerating ab initio molecular dynamics (AIMD) simulation for large-scale metallic and insulating systems. We develop adaptive local basis functions in a discontinuous Galerkin framework (ALB-DG) for systematically reducing the number of basis functions, starting from a complete basis set such as planewaves. We develop the pole expansion and selected inversion (PEXSI) method for accurately evaluating the electron density, energy, and atomic forces with at most quadratic scaling. PEXSI can be used with both the new ALB-DG basis set and Gaussian-type orbitals / localized atomic orbitals to accelerate AIMD.
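
As a hedged sketch of the pole-expansion step (the production algorithm involves further refinements), the density matrix at a given temperature and chemical potential is approximated by a short sum over complex shifts $z_l$ with weights $\omega_l$,

$$ \rho \;\approx\; \mathrm{Im} \sum_{l=1}^{P} \omega_l \, (H - z_l S)^{-1}, $$

and selected inversion computes only the needed entries of each shifted inverse rather than the full inverse, which is what permits the at-most-quadratic scaling noted above.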

Poster # 8
Title/Topic: Numerical Methods for Accelerating the GW Calculation (1.3 MB)
Lead Contact: Yang, Chao
Affiliation: LBNL

We examine a number of potential techniques for accelerating GW and Bethe-Salpeter (BSE) calculations for probing excited state properties of energy related materials.  These techniques include new methods for computing a large number of unoccupied states required to construct the irreducible polarization operator, algorithms for approximating the irreducible polarization operator without computing any unoccupied states and efficient ways to solve the Bethe-Salpeter equation. We will present both algorithmic developments and implementation strategies of these approaches.

Poster # 9
Title/Topic: BerkeleyGW: A massively parallel computer package for calculating excited state properties of materials
Lead Contact: Deslippe, Jack
Affiliation: LBNL

BerkeleyGW is a massively parallel computational package for electron excited-state properties based on the ab initio GW methodology. The package can be used to compute the electronic and optical properties of a wide variety of material systems, from bulk semiconductors and metals to nanostructured materials and molecules. The poster will discuss recent developments in the code, including hybrid OpenMP/MPI parallelization approaches to both DFT and GW, parallel I/O performance, and general scaling studies with cores and threads. The package scales to 10,000s of cores and can be used to study systems containing up to 100s of atoms.

Poster # 10
Title/Topic: Challenges Associated with the Development of Electron-Correlated Methods for Excited State Structure and Dynamics (283 KB)
Lead Contact: Gagliardi, Laura
Affiliation: Univ Minnesota

Specific challenges and algorithms underpinning new theoretical methods for the study of charge transfer and charge transport in photoactivated systems will be described. A range of electronic structure models is planned for development and inclusion in the NWChem computational chemistry software suite. Efficient coding and implementation will benefit from close collaboration with the SciDAC Institute for Frameworks, Algorithms, and Scalable Technologies for Mathematics (FASTMATH) and the SciDAC Institute for Sustained Performance, Energy, and Resilience (SUPER).

Poster # 11
Title/Topic: Optimization of Vortex Pinning in High-Temperature Superconductors via Large-Scale Simulation (3.47 MB)
Lead Contact: Karpeev, Dmitry
Affiliation: ANL

The dynamics of magnetic vortices in high-temperature superconductors is modeled by the time-dependent Ginzburg-Landau (TDGL) equation. It captures the important effect of vortex pinning by inclusions of different geometries embedded in the superconducting matrix. Accurate prediction of the response of the vortex liquid to different configurations of inclusions can be obtained only numerically, and is computationally intensive. It requires adaptive meshing and discretization, long-time numerical integration of TDGL, as well as the use of advanced numerical optimization algorithms. We outline our approach to large-scale TDGL simulation as well as scaling to leadership-class hardware.
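
For orientation, a commonly used dimensionless form of the TDGL equation (simplified here by omitting the scalar potential and thermal noise) evolves the complex order parameter $\psi$ in a vector potential $\mathbf{A}$ as

$$ \partial_t \psi = (\nabla - i \mathbf{A})^2 \psi + \psi - |\psi|^2 \psi, $$

with $\psi$ suppressed inside the inclusions, which is why inclusions act as energetically favorable pinning sites for vortices.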

Poster # 12
Title/Topic: General Block-Tensor Library for High-Level Electronic Structure Calculations (1.65 MB)
Lead Contact: Epifanovsky, Evgeny
Affiliation: USC

This poster presents an open-source object-oriented C++ software library to perform tensor algebra. The primary purpose of the library is to enable post-Hartree-Fock electronic structure methods. However, it is general enough to be applicable in other areas of the physical sciences. The library supports tensors of arbitrary order, size, and symmetry. Implemented data structures and algorithms operate on large tensors by splitting them into smaller blocks, storing them in memory and on disk, and applying divide-and-conquer-type parallel algorithms. The library offers a set of general tensor symmetry algorithms and a full implementation of tensor symmetries typically found in electronic structure theory: permutational, spin, and molecular point group symmetry.

Poster # 13
Title/Topic: Challenges in building an extreme-scale first-principles edge physics code for the magnetic fusion program
Lead Contact: Hesthaven, Jan
Affiliation: Brown Univ

As is known experimentally, the edge plasma condition determines fusion efficiency. However, the multiscale, nonlinear, non-equilibrium thermodynamic nature of the edge plasma requires extreme-scale computing from first-principles equations. A petascale code, XGC1, was developed during the previous SciDAC cycle at the Center for Plasma Edge Simulation (CPES), and scaled efficiently to the full capacity of Jaguarpf. Future plans will be discussed. The more powerful the computer becomes, the more complete the physics XGC1 can contain. Close collaboration with OFES and ASCR scientists is necessary to realize this potential. Present XGC1 capabilities on petascale CPU-GPU systems and challenges on future computers will also be discussed.

Poster # 14
Title/Topic: On-the-fly data management for high performance multiscale fusion simulation on Titan
Lead Contact: Klasky, Scott
Affiliation: ORNL

Fusion simulation is moving towards building a next-generation numerical experiment based upon first-principles physics. One focus in EPSI is to develop an advanced computational framework that can support tight coupling between large-scale kinetic multiscale physics systems, support in-situ uncertainty quantification (UQ), and enable pluggable analysis and visualization services that can ingest and assimilate large volumes of in-situ data and enable verification and validation. Specifically, we will address key data challenges by augmenting ADIOS with DataSpaces to enable hybrid staging and in-situ/in-transit data processing workflows, and with the eSiMon dashboard for pervasive access to the next-generation numerical fusion experiment.

Poster # 15
Title/Topic: XOLOTL-PSI Code Development (664 KB)
Lead Contact: Bernholdt, David
Affiliation: ORNL

This poster presentation will describe the physics and applied math/computer science challenges associated with the development of a new computational code for simulating plasma surface interactions in materials, XOLOTL-PSI. The code is being developed from scratch, but is based on a one-dimensional reaction-diffusion framework developed for simulating radiation effects in thin metal films. The poster will discuss the physics challenges of PSI, the PARASPACE reaction-diffusion code which provides the outline for XOLOTL-PSI, and the code development framework and plans.

Poster # 16
Title/Topic: Computing at the Cosmic Frontier (1.87 MB)
Lead Contact: Habib, Salman
Affiliation: ANL

The Cosmic Frontier presents fundamental discovery opportunities related to dark energy, dark matter, the masses of neutrinos and their hierarchy, and unique probes of inflation and the early Universe. Large-scale surveys of the sky are collecting massive amounts of data that hold the key to a deeper understanding of the Universe. High performance computing is a powerful tool of discovery for extracting insights and making precision predictions for these observations. Our new SciDAC Partnership project brings together a National Laboratory-based collaboration – in partnership with the SciDAC Institutes – to build next-generation computational cosmology prediction and analysis frameworks that are directly targeted to current and future observations.

Poster # 17
Title/Topic: The Search for Beyond-the-Standard-Model Physics at the Intensity Frontier (1.22 MB)
Lead Contact: Na, Heechang
Affiliation: ANL

One of the most urgent tasks of particle physics experiments is the search for the suspected small effects of as yet undiscovered "beyond-the-standard-model" physics in the weak decays of hadrons, in so-called Intensity Frontier experiments. High-precision lattice QCD calculations are an essential component of this analysis, needed to separate the effects of the strong interactions of QCD from the possible effects of new physics. These calculations require the most highly optimized programs on the highest capability computers available. In this poster, I will describe this physics, and the role of lattice gauge theorists in designing and using the Blue Gene family of computers.

Poster # 18
Title/Topic: Strongly Coupled Field Theories at the Energy Frontier (1.79 MB)
Lead Contact: Neil, Ethan
Affiliation: FNAL

One of the most urgent tasks of particle physics experiments is the search for the suspected small effects of as yet undiscovered "beyond-the-standard-model" physics in so-called Intensity Frontier experiments. High-precision lattice QCD calculations are an essential component, needed to separate the low-energy QCD effects from the possible signals of new physics. These calculations require the most highly optimized programs on the highest-capability computers available. I will present this physics, the role of lattice gauge theorists in designing and using the Blue Gene family of computers, and a recent installation of 48 racks of BG/Q (MIRA) at the ALCF.

Poster # 19
Title/Topic: Computational Accelerator Modeling for the Intensity Frontier (329 KB)
Lead Contact: Cary, John
Affiliation: Tech-X

The Intensity Frontier of High Energy Physics is critical for precision measurements of rare processes, which rely on intense beams. Accelerators for the Intensity Frontier bring computational challenges, such as the development of electron clouds, the design of appropriate cavities, and the understanding of wake fields. Under SciDAC3, the ComPASS collaboration is enhancing existing tools and developing new ones for addressing these challenges. These tools will be used both for improving current facilities and for designing new ones.

Poster # 20
Title/Topic: Advanced Computational Tools for the Energy Frontier (529 KB)
Lead Contact: Vay, Jean-Luc
Affiliation: LBNL

Large accelerators colliding fundamental particles at the energy frontier are among the most expensive tools for scientific discovery (e.g., the LHC at CERN is nearly 30 kilometers in circumference and cost over 10 billion dollars to build). Under SciDAC 3, the ComPASS collaboration is pushing the state of the art in the modeling of conventional and advanced (plasma-based, high-gradient, dielectric structure) particle accelerators, through the development and utilization of the most advanced algorithms and performance optimization strategies on the latest, most powerful computing platforms. Coupled to cutting-edge nonlinear parameter optimization and uncertainty quantification methods, these new tools will provide unprecedented capabilities for understanding and optimizing existing accelerators, and for developing new acceleration technologies.

Poster # 21
Title/Topic: Nuclear Density Functional Theory - Exploring Fission and the Limits of the Nuclear Landscape (627 KB)
Lead Contact: McDonnell, Jordan
Affiliation: Univ Tennessee-Knoxville

Advances in computing have allowed us to make great strides in the theory of nuclear structure.  We present progress in the construction of a Skyrme-like energy density functional, optimized for deformed nuclei to aid in the study of nuclear fission and nuclei far from stability.  We calculate fission barrier heights and spontaneous fission half-lives for several actinide nuclei, from radium to californium.  We also present predictions for the location of the proton and neutron drip lines.  Our predictions are found to agree well with empirical data where known, and the consistency between our models lends confidence to our predictions beyond experimentally measured nuclei.

Poster # 22
Title/Topic: NUCLEI Ab Initio Calculations Address Leadership-Class Challenges
Lead Contact: Nam, Hai Ah
Affiliation: ORNL

NUCLEI project teams of nuclear theorists, applied mathematicians and computer scientists have developed large-scale computational codes to study the atomic nucleus using ab initio methods at DOE’s Leadership Computing Facilities.  These applications coupled with computing power at these facilities enable unprecedented first-principles studies of nuclei and the forces that hold them together.  To continue to push scientific frontiers in low-energy nuclear physics, NUCLEI application teams with diverse scientific backgrounds are addressing the challenges created by the considerable change in computing architectures.  We present details from three ab initio approaches used in NUCLEI: Configuration Interaction, Green’s Function Monte Carlo and Coupled-Cluster.

Poster # 23
Title/Topic: Hadronic Parity Violation and Lattice QCD (262 KB)
Lead Contact: Haxton, Wick
Affiliation: UCB

A major experiment is underway at the SNS, n+p -> D+γ, to exploit parity nonconservation (PNC) among nucleons to isolate the so-far-unobserved strangeness-conserving weak hadronic neutral current.  We describe a new analysis of the asymmetry measurements in p+p scattering that points to a strong enhancement in the ratio of PNC isospin I=1 and I=0 amplitudes.  We describe past and planned lattice QCD efforts that could significantly increase the impact of the SNS and other PNC experiments by constraining two key parameters, the isovector pion and isotensor rho couplings.  We discuss the computational challenges in properly executing the envisioned program.

Poster # 24
Title/Topic: Connecting Cold QCD to Nuclear Many-Body Physics
Lead Contact: Luu, Thomas
Affiliation: LLNL

I discuss various methods of bridging model-independent cold LQCD calculations of nuclear observables to nuclear many-body effective theories.

Poster # 25
Title/Topic: Computing Properties of Hot and Dense Nuclear Matter from Quantum Chromodynamics (191 KB)
Lead Contact: Karsch, Frithjof
Affiliation: BNL

Under conditions of extreme temperature or density, the constituents of protons and neutrons, quarks and gluons, are liberated and form a new, largely unexplored state of matter. Numerical calculations based on lattice-regularized Quantum Chromodynamics (QCD) make it possible to study the transition from the hadronic phase to the quark-gluon plasma phase. They provide input to the interpretation of large experimental programs at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN, Switzerland. In the poster, we describe recent successes at understanding the properties of hot and dense matter from lattice QCD, and future questions that will be addressed through our SciDAC activities. In particular, we describe the computational challenges, and the likely impact of the resulting calculations on current and future experiments.

Poster # 26
Title/Topic: Understanding the structure and interactions of matter in Quantum Chromodynamics (485 KB)
Lead Contact: Edwards, Robert
Affiliation: JLab

Quantum Chromodynamics (QCD) is the underlying theory describing the Strong Interactions and, together with the electroweak interactions, describes the properties of nuclei and of the nucleons of which they are composed. Quantum Chromodynamics can be solved on a four-dimensional grid, or lattice, and first-principles calculations performed in order to address key questions in nuclear physics. In the poster, we describe recent successes at understanding the structure and interactions of nucleons from lattice QCD, and future questions that will be addressed through our SciDAC activities. In particular, we describe the computational challenges, and the likely impact of the resulting calculations on current and future forefront experimental facilities, including those at Brookhaven National Laboratory, at Jefferson Laboratory, and at the future FRIB.
