CCE Summer Students 2017

Wasikul Islam

Oklahoma State University and Argonne National Lab

The proton-proton collisions at the Large Hadron Collider at CERN result in a wide variety of event topologies, each containing many different particles to reconstruct and identify in the particle detectors. The image recorded by a particle detector is not unlike that of a large three-dimensional camera, recording the energy deposited by particles as they pass through the active layers of the detector. The reconstruction algorithms written by the collaborations use these images to recognize particles in the detector. Wasikul worked with Dr. Taylor Childers, a member of the Argonne ATLAS group, to study the viability of Deep Learning techniques from industry for identifying event topologies and particle signatures within events. Wasikul used the Keras framework with the TensorFlow machine learning backend. A simplified training dataset was created using two-dimensional images from ATLAS calorimeter data, substituting the three color channels of standard color images with two channels in depth, one for the electromagnetic calorimeter and one for the hadronic calorimeter. The event images are ATLAS simulations of Z boson decays to two leptons (e, mu, and tau) in association with two jets. Truth information was used to create smaller images containing only the calorimeter region around the lepton or jet. These smaller images were used to train the model from the Keras CIFAR-10 example with four classes (three leptons and a jet). After some shallow hyper-parameter scans on the NERSC Cori (Phase 2) supercomputer, accuracies above 89% were achieved for the electron, muon, and jet classes. Since the tau decays some fraction of the time before reaching the calorimeter, it had a large misidentification rate, but this is expected for such a simple study. Initial work was also done on event-level identification and on using auto-encoders to reduce the data sparsity of these large detectors, which can artificially inflate computational requirements when moving to a full three-dimensional representation.
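As a rough illustration of this setup (a minimal sketch, not the project's actual code; the 32x32 image size and layer widths are assumptions), a CIFAR-10-style Keras network adapted to two calorimeter channels and four classes could look like the following:

```python
# Minimal sketch of a CIFAR-10-style CNN adapted to calorimeter images:
# two depth channels (EM and hadronic calorimeter) replace the usual
# three color channels, and there are four classes (e, mu, tau, jet).
# The 32x32 image size and layer widths are assumptions.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

NUM_CLASSES = 4                # e, mu, tau, jet
INPUT_SHAPE = (32, 32, 2)      # height, width, (EM, hadronic) channels

model = Sequential([
    Conv2D(32, (3, 3), padding='same', activation='relu',
           input_shape=INPUT_SHAPE),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Conv2D(64, (3, 3), padding='same', activation='relu'),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(512, activation='relu'),
    Dropout(0.5),
    Dense(NUM_CLASSES, activation='softmax'),   # class probabilities
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Training then proceeds with model.fit on the cropped single-object images and one-hot class labels.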

Nesar Ramachandra

University of Kansas and Argonne National Lab

Statistics of strong gravitational lenses can provide insights into matter density profiles, probe the evolution of lensing media, and constrain cosmological parameters. In the near future, large sky surveys such as the Large Synoptic Survey Telescope (LSST), Euclid, and the Wide-Field Infrared Survey Telescope (WFIRST) are expected to detect over 100,000 galaxy-scale strong lenses. This necessitates robust, automated pipelines for the detection and analysis of gravitational lenses. During the HEP-CCE summer internship, Nesar worked with Prof. Salman Habib and Dr. Taylor Childers to explore the use of Deep Learning algorithms for astrophysical applications – specifically for LSST strong lensing images. Deep Convolutional Neural Networks (CNNs) were trained using mock galaxy-galaxy lensing images provided by Dr. Nan Li. We reached about 85 percent accuracy in classifying images as lensing or non-lensing using a small training set of 8,000 grayscale images and a relatively modest hyper-parameter sweep. The networks were trained using many-core processors on the Cori supercomputer at NERSC and state-of-the-art GPU facilities of the Cosmological Physics and Advanced Computing (CPAC) Group at Argonne. Inference with a fully trained network can, in principle, be done on a laptop within milliseconds per image.
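For illustration, a minimal Keras sketch of such a binary lensing classifier is shown below; it assumes 64x64 grayscale postage stamps and a deliberately small architecture, and is not the actual pipeline:

```python
# Minimal sketch of a binary lensing / non-lensing classifier for
# grayscale postage stamps; the 64x64 image size and all layer sizes
# are assumptions.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 1)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),    # output: probability of lensing
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
# model.fit(images, labels, epochs=20, validation_split=0.1)
```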

Beyond this detection-focused application of machine learning, implementations of several other CNNs were tested:
1. Non-linear regression networks for estimating the ellipticity, velocity dispersion, and magnification of the lens
2. Auto-encoding networks for de-noising telescope images (a minimal sketch follows this list)
3. Generative Adversarial Networks for unsupervised learning of lensing images
The preliminary results from these neural networks are promising, and the techniques will be applied to quantitative analyses of strong gravitational lensing in the near future.
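As an illustration of the second item, a convolutional de-noising auto-encoder could be sketched in Keras as follows; the 64x64 grayscale image shape and layer sizes are assumptions, not details from the project:

```python
# Minimal sketch of a convolutional de-noising auto-encoder: the network
# is trained to map noisy telescope images to their clean counterparts.
# The 64x64 grayscale image shape and layer widths are assumptions.
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D

inp = Input(shape=(64, 64, 1))

# Encoder: compress the image into a lower-dimensional representation.
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inp)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

# Decoder: reconstruct the clean image from the compressed representation.
x = Conv2D(32, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)
decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)

autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(noisy_images, clean_images, epochs=50, batch_size=128)
```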

Samantha Sword-Fehlberg

New Mexico State University and Fermilab

Neutrinos offer a unique probe of the structure of quarks within the proton. The MicroBooNE experiment uses a liquid argon time projection chamber (LArTPC) to study low-energy neutrino interactions. Working with Dr. Erica Snider (Fermilab) and Dr. Kazu Terao (Columbia University), Samantha used convolutional neural networks to analyze LArTPC images and classify neutrino events, with the goal of determining the strange quark’s contribution to the proton’s axial form factor.


Michela Paganini

Yale University and Lawrence Berkeley National Lab

In disciplines such as High Energy Particle Physics and Atmospheric Science, the simulation of scientific datasets, which often serve as test beds for developing and evaluating application- and domain-specific machine learning algorithms, is a slow and complex yet necessary step in many scientists’ workflows.

Michela’s project successfully validated the possibility of encoding complex physics generation processes into deep generative adversarial networks (GANs). The training dataset consisted of image representations of 3D high energy particle cascades from photons, positrons, and charged pions interacting with an ATLAS-inspired, heterogeneously segmented liquid argon calorimeter, simulated with the standard GEANT4 package.

Over the summer, we focused on encoding domain-specific constraints and prior knowledge into the adversarial training, and on exploring the effects of traversing high-dimensional physical manifolds to investigate the space of GAN-generated images. The “CaloGAN” image quality was assessed both qualitatively and quantitatively, with excellent results, while also identifying key areas for further R&D to improve the physics precision to the point where such networks could replace traditional simulators. GPU-enabled simulation of particle showers with generative adversarial networks affords speed-ups of ∼100,000× compared to state-of-the-art physics simulators. Working at Lawrence Berkeley National Lab proved fruitful for exploring distributed training on NERSC’s Cori supercomputer with TensorFlow optimizations for modern Intel architectures.
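For readers unfamiliar with adversarial training, the sketch below shows the standard two-step GAN update in Keras, simplified to flat shower vectors; it is not the CaloGAN implementation (which uses convolutional, location-aware layers and a heterogeneous, layer-dependent segmentation), and N_CELLS and the layer sizes are assumptions:

```python
# Schematic sketch of the standard GAN training recipe (not the actual
# CaloGAN code): a discriminator learns to separate GEANT4 showers from
# generated ones, while a generator learns to fool it. Showers are
# flattened to vectors of cell energies; N_CELLS is an assumed number.
import numpy as np
from keras.models import Sequential, Model
from keras.layers import Input, Dense, LeakyReLU
from keras.optimizers import Adam

LATENT_DIM = 100     # size of the random noise vector fed to the generator
N_CELLS = 504        # assumed total number of calorimeter readout cells

generator = Sequential([
    Dense(256, input_dim=LATENT_DIM), LeakyReLU(0.2),
    Dense(N_CELLS, activation='relu'),   # cell energies are non-negative
])
discriminator = Sequential([
    Dense(256, input_dim=N_CELLS), LeakyReLU(0.2),
    Dense(1, activation='sigmoid'),      # P(shower is real)
])
discriminator.compile(optimizer=Adam(1e-4), loss='binary_crossentropy')

# Stacked model: freeze the discriminator so that only the generator
# weights are updated when training the combined network.
discriminator.trainable = False
z = Input(shape=(LATENT_DIM,))
combined = Model(z, discriminator(generator(z)))
combined.compile(optimizer=Adam(1e-4), loss='binary_crossentropy')

def train_step(real_showers, batch_size=64):
    noise = np.random.normal(size=(batch_size, LATENT_DIM))
    fake_showers = generator.predict(noise)
    # 1) discriminator step: real showers labeled 1, generated labeled 0
    discriminator.train_on_batch(real_showers, np.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_showers, np.zeros((batch_size, 1)))
    # 2) generator step: try to make the discriminator output "real"
    combined.train_on_batch(noise, np.ones((batch_size, 1)))
```

The alternating update pattern above is the core of adversarial training; the domain-specific constraints mentioned earlier enter through the network architecture and additional loss terms.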

This work paves the way to a new era of fast simulation that could save scientists significant computing time and disk space, and enable physics searches and precision measurements at the LHC and beyond.

arXiv (submitted to PRL+PRD)


Data Science @ HEP
CERN School of Computing
NERSC Data Day
Luke de Oliveira’s ACAT 2017 talk
Ben Nachman’s BOOST 2017 talk