ASCR Cybersecurity for Scientific Computing Integrity Workshop
Sponsored by the U.S. Department of Energy,
Office of Advanced Scientific Computing Research
Hilton Gaithersburg, 620 Perry Parkway
June 2-3, 2015
Summary of relevant research topics from our recent workshop report:
1. Trustworthy Supercomputing for Understanding and Improving Cybersecurity: Enhance the "trustworthiness" of DOE supercomputers against malicious alterations of results by developing:
- means to build solutions for assuring scientific computing into the design of supercomputers;
- robust means for evaluating ways in which a system composed of interconnected, networked elements can affect scientific computing integrity;
- precise and robust means of capturing the right data to provide concrete evidence of scientific computing integrity such that reproducibility is possible and also so that integrity can be verified when it is maintained or diagnosed when it cannot;
- metrics for quantifying the trustworthiness of scientific data, capturing the likelihood and potential magnitude of errors due to uncertain inputs, incomplete models, incorrect implementations, silent hardware errors, and malicious tampering; and
- significantly improved means for balancing the assurance of scientific computing integrity between hardware and software to best monitor and maintain integrity while also minimally impacting the throughput of scientific research.
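To make the evidence-capture goal above concrete, the following is a minimal sketch (not from the workshop report; all function names and the manifest format are hypothetical) of recording cryptographic digests of named simulation outputs so that bit-level reproducibility can later be verified, or a divergence diagnosed:

```python
# Sketch: build a manifest of SHA-256 digests for simulation outputs,
# then check a later run against it. Names and formats are illustrative.
import hashlib


def digest_output(data: bytes) -> str:
    """Return a SHA-256 hex digest of one result buffer."""
    return hashlib.sha256(data).hexdigest()


def build_manifest(outputs: dict) -> dict:
    """Map each named output to its digest, forming integrity evidence."""
    return {name: digest_output(buf) for name, buf in outputs.items()}


def verify(outputs: dict, manifest: dict) -> list:
    """Return the names of outputs whose digests no longer match."""
    return [name for name, buf in outputs.items()
            if digest_output(buf) != manifest.get(name)]
```

In practice such a manifest would itself need protection (e.g., signing and separate storage), since an attacker who can alter results could otherwise alter the evidence as well.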
2. Extreme Scale Data, Knowledge and Analytics for Understanding and Improving Cybersecurity: Research and develop means to collect extreme-scale data and knowledge, and develop and apply analytics, in order to understand and improve scientific computing integrity and computing security, particularly against malicious alterations of the system and environment, by:
- developing an analysis framework capable of collecting scientific computing integrity data at an unprecedented scale from multiple sources that collectively represent the system under study to enable adaptive, streaming analysis for monitoring and maintaining scientific computing integrity;
- developing means to automatically learn and maintain interdependent causal models of the scientific computation, exascale system, and computer security in real time to enable better, faster recovery that reduces disruptions to scientists' efforts, and to alert facilities managers to potential cyber breaches;
- developing capabilities to model, quantify, and manage performance at exascale so that exascale computing users and system operators can effectively manage the tradeoffs between scientific throughput and scientific computing integrity; and
- developing new methods for meaningful risk and threat measures of HPC integrity.
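As a toy illustration of the adaptive, streaming analysis called for above, the sketch below (an assumption of this summary, not a method from the report) applies Welford's online algorithm to a single integrity metric, such as a checksum-mismatch rate, and flags observations that deviate sharply from the running distribution; the z-score threshold is arbitrary:

```python
# Sketch: flag anomalous values of one streaming integrity metric using
# running mean/variance (Welford's algorithm). Threshold is illustrative.
class StreamingDetector:
    def __init__(self, z_threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.z_threshold = z_threshold

    def observe(self, x: float) -> bool:
        """Update running statistics; return True if x looks anomalous."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) / std > self.z_threshold:
                anomalous = True
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous
```

A production system would track many such metrics across nodes and correlate them, but the single-metric case shows why streaming (rather than batch) statistics matter at extreme scale: no raw history needs to be retained.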
3. Trust within Open, High-End Networking and Data Centers for Understanding and Improving Cybersecurity: Develop means to assure trust within open, high-end networking and data centers by performing research to:
- understand the resilience of DOE scientific computing to integrity failures in order to determine how best to design data centers that strengthen computing integrity;
- explore how the evolution of virtualization, containerization, and modular runtime environments affects scientific computing integrity, and determine where control, layering, and modularity enhance integrity assurance and where they add complexity and scaling problems;
- understand how to create new, scalable techniques that enable the secure tagging of data and network packets in real-time for subsequent policy management and forensic analysis; and
- create means for developing coherent authorization and access controls tailored to the open science mission that maximize both integrity and computing efficiency.
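One simple building block for the secure tagging of data described above is a keyed message authentication code. The sketch below (an illustrative assumption; the in-memory key handling is a simplification of any real key-management scheme) tags data records with HMAC-SHA256 so that their integrity and origin can be checked during later forensic analysis:

```python
# Sketch: HMAC-SHA256 tagging of data records for forensic verification.
# Real deployments would manage keys via a KMS, not in-memory bytes.
import hashlib
import hmac


def tag_record(key: bytes, record: bytes) -> bytes:
    """Compute a tag binding the record to the holder of the key."""
    return hmac.new(key, record, hashlib.sha256).digest()


def verify_record(key: bytes, record: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches the record."""
    return hmac.compare_digest(tag_record(key, record), tag)
```

Scaling this to real-time tagging of network packets is precisely where the research challenge lies; per-packet keyed hashing at high line rates, and the policy machinery to act on the tags, go well beyond this primitive.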