The final workshop of this program is designed to generate tangible products from the long program and to explore best practices for setting up a computational co-design process, ranging from the derivation of key formalisms, through the design of the computational approach, to its implementation. The vision of the program is that the long-term participants will self-organize into working groups that identify, analyze, and solve key problems impeding the effective use of extreme-scale computing in materials science. To ensure that these advances make their way into the hands of the community at large, this IPAM “hackathon” will gather code developers, mathematicians, method developers, and computer scientists and engineers from the computer vendors for a week of discussion, hands-on development, and implementation of the new ideas generated during the program. A key objective is to ensure the transition from “research” codes to “production” codes that can be used and further improved by the community. Accordingly, significant parts of the workshop will involve participants organizing into working groups to solve (or take the first steps toward solving) practical issues raised throughout the program. The workshop will also include a short series of talks in which lead developers of various projects share their experience and processes when attacking new problems posed by evolving architectures, whether in terms of new mathematical formalisms, increasing computational scales, or architectural details.
Topics (part of the Long Program “New Mathematics for the Exascale: Applications to Materials Science”): integration of direct simulations, online data analysis, and experimental data; mathematical methods for data assimilation; large-scale inverse problems; computation-aided online experimental design at massive scales; active exploration of chemical space using massive quantum calculations; workflow infrastructure; integration of numerically intensive calculations with ML/data science at scale.