HEP MINI-APPS

HEP computational use cases can be broadly classified as compute-intensive or data-intensive. Compute-intensive tasks belong primarily to the realm of high performance computing (HPC). The primary users of HPC within the HEP community are groups working on accelerator modeling, cosmology simulations, and lattice QCD. These users are computationally sophisticated and leverage significant state-of-the-art computing resources to solve complex, large-scale, computationally demanding problems. These HPC applications can generate very large datasets, potentially much larger than those produced by any experiment, and analyzing them can prove to be as difficult a problem as running the original simulation. Thus, data-intensive tasks currently go hand in hand with many HPC applications that were originally viewed as being only compute-intensive.

In contrast to the HPC applications, the data-intensive use cases in HEP experiments exploit high throughput computing (HTC) to take advantage of the inherent parallelism of event-centric datasets; users have built large computing grids based on commodity hardware, in which each computing node handles a computing problem from start to finish. A typical example of such an HTC application is event reconstruction in Energy and Intensity Frontier experiments.

In the HPC world, many HEP researchers already have the expertise needed to use next-generation HPC systems successfully at extreme scale. For the data-intensive applications related to HEP experiments, however, the community is at an early stage of learning how to leverage HPC resources to address these problems. Both communities face the challenge posed by next-generation architectures. The evolution of HPC architectures is driven by a number of technological, financial, and power-related constraints. Future HPC architectures concentrate performance within relatively loosely coupled nodes, with high local concurrency and less RAM per core than in current systems. At the same time, there is a general trend to investigate the notion, if not to adopt it immediately, that future HPC platforms should be able to perform some subset of scientifically important data-intensive tasks. This is driven partly by the growing computational demands of data-intensive applications, and partly by the substantial advantages of performing data analysis on the same systems where the underlying detailed theoretical modeling and simulations are run. Although it is not clear that HTC systems will follow the choices made in HPC designs, it is likely that there will be many similarities.

In order to understand how complex software applications will run on future systems, and what changes would be needed to obtain the best overall performance, it is not practical to work with full-blown applications, for a number of reasons (e.g., complex software environment requirements, application complexity, multiple overlapping requirements). Even in complex applications, it is usually the case that the relevant performance (compute, I/O) is concentrated in a small number of compact kernels. The idea behind mini-apps is to reduce the functional components to the basic interactions between these kernels, or to simplify even further. Instead of the hundreds of thousands to millions of lines of a production application, mini-apps can be restricted to thousands of lines of code. A set of mini-app characteristics has been given by Heroux et al. (Sandia Report SAND2009-5574):

  • Interaction with external research communities: Mini-apps are open source software, in contrast to many production applications that have restricted access.
  • Simulators: Mini-apps are the right size for use in simulated environments, supporting study of processor, memory and network architectures.
  • Early node architecture studies: Scalable system performance is strongly influenced by the processor node architecture. Processor nodes are often available many months before the complete system. Mini-apps provide an opportunity to study node performance very early in the design process.
  • Network scaling studies: Mini-apps are easily configured to run on any number of processors, providing a simple tool to test network scalability. Although not a replacement for production applications, mini-apps can again provide early insight into scaling issues.
  • New language and programming models: Mini-apps can be refactored or completely rewritten in new languages and programming models. Such working examples are a critical resource in determining if and how to rewrite production applications.
  • Compiler tuning: Mini-apps provide a focused environment for compiler developers to improve compiled code.

For more on mini-apps, see Mike Heroux’s talk at the 2015 Workshop on Representative Applications (IEEE Cluster 2015).
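
To make the notion concrete, the sketch below shows one possible shape of a compute-intensive HEP mini-app: a self-contained C++ program whose hot kernel (a pairwise invariant-mass computation over synthetic events) stands in for the compact kernels extracted from a full application. The program, the Particle type, and the kernel are illustrative assumptions rather than code drawn from any existing HEP-FCE mini-app; the point is only that a few dozen lines can expose the event-level parallelism and per-node compute profile of interest for architecture and scaling studies.

    // Illustrative mini-app skeleton: a synthetic event loop whose hot kernel
    // stands in for the compute-heavy kernels of a full HEP application.
    #include <chrono>
    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Particle {            // four-momentum of one particle (illustrative)
        double px, py, pz, E;
    };

    // Hot kernel: sum of pairwise invariant masses within one event.
    double sum_invariant_masses(const std::vector<Particle>& ev) {
        double sum = 0.0;
        for (std::size_t i = 0; i < ev.size(); ++i) {
            for (std::size_t j = i + 1; j < ev.size(); ++j) {
                const double E  = ev[i].E  + ev[j].E;
                const double px = ev[i].px + ev[j].px;
                const double py = ev[i].py + ev[j].py;
                const double pz = ev[i].pz + ev[j].pz;
                const double m2 = E * E - (px * px + py * py + pz * pz);
                sum += m2 > 0.0 ? std::sqrt(m2) : 0.0;
            }
        }
        return sum;
    }

    int main() {
        // Synthetic input replaces the experiment's full data-handling framework.
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> mom(-10.0, 10.0);
        const int n_events = 1000, n_particles = 200;

        std::vector<std::vector<Particle>> events(n_events);
        for (auto& ev : events) {
            ev.resize(n_particles);
            for (auto& p : ev) {
                p.px = mom(rng); p.py = mom(rng); p.pz = mom(rng);
                p.E  = std::sqrt(p.px * p.px + p.py * p.py + p.pz * p.pz + 0.14 * 0.14);
            }
        }

        // Event-level parallelism (one event per task) is what HTC workflows exploit;
        // keeping this loop explicit lets node- and network-level studies vary how
        // the work is scheduled and distributed.
        const auto t0 = std::chrono::steady_clock::now();
        double total = 0.0;
        for (const auto& ev : events) total += sum_invariant_masses(ev);
        const auto t1 = std::chrono::steady_clock::now();

        std::printf("checksum %.3f, kernel time %.3f s\n", total,
                    std::chrono::duration<double>(t1 - t0).count());
        return 0;
    }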

The HEP-FCE is initiating an activity to develop a number of compute-intensive and data-intensive mini-apps that will exercise a broad spectrum of HEP use cases. The mini-apps will be a logical medium of interaction with ASCR researchers, in particular with Argonne’s Joint Laboratory for System Evaluation. Members of the community are encouraged to package and supply mini-apps.
