LINNA: Likelihood Inference Neural Network Accelerator

Kavli Affiliate: Risa H. Wechsler

| First 5 Authors: Chun-Hao To, Eduardo Rozo, Elisabeth Krause, Hao-Yi Wu, Risa H. Wechsler

| Summary:

Bayesian posterior inference of modern multi-probe cosmological analyses
incurs massive computational costs. For instance, depending on the combination
of probes, a single posterior inference for the Dark Energy Survey (DES) data
required a wall-clock time of 1 to 21 days on a state-of-the-art
computing cluster with 100 cores. These computational costs carry a severe
environmental impact, and the long wall-clock times slow scientific
productivity. To address these difficulties, we introduce LINNA: the Likelihood
Inference Neural Network Accelerator. Relative to the baseline DES analyses,
LINNA reduces the computational cost associated with posterior inference by a
factor of 8–50. If applied to the first-year cosmological analysis of Rubin
Observatory’s Legacy Survey of Space and Time (LSST Y1), we conservatively
estimate that LINNA will save more than US $300,000 in energy costs, while
simultaneously reducing CO₂ emissions by 2,400 tons. To accomplish
these reductions, LINNA automatically builds training data sets, creates neural
network surrogate models, and produces a Markov chain that samples the
posterior. We explicitly verify that LINNA accurately reproduces the first-year
DES (DES Y1) cosmological constraints derived from a variety of different data
vectors with our default code settings, without needing to retune the algorithm
every time. Further, we find that LINNA is sufficient for enabling accurate and
efficient sampling for LSST Y10 multi-probe analyses. We make LINNA publicly
available at https://github.com/chto/linna, to enable others to perform fast
and accurate posterior inference in contemporary cosmological analyses.
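The three-step pipeline the abstract describes (build a training set from expensive likelihood evaluations, fit a neural-network surrogate, then run MCMC on the cheap surrogate) can be illustrated with a minimal, self-contained sketch. This is not LINNA's actual API; the toy 2D Gaussian likelihood, the tiny hand-rolled tanh network, and the Metropolis-Hastings loop are all illustrative stand-ins for the full theory-prediction code, surrogate architecture, and sampler used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an expensive cosmological log-likelihood: a 2D Gaussian
# with parameter widths (1.0, 0.5). In a real analysis this call would be a
# full theory prediction plus likelihood evaluation.
def expensive_loglike(theta):
    return -0.5 * theta[0] ** 2 - 2.0 * theta[1] ** 2

# Step 1: build a training set by sampling parameter space and evaluating
# the expensive likelihood at each point.
X = rng.uniform(-3.0, 3.0, size=(2000, 2))
y = np.array([expensive_loglike(t) for t in X])
y_mu, y_sd = y.mean(), y.std()
targets = (y - y_mu) / y_sd  # standardized targets for stable training

# Step 2: fit a one-hidden-layer tanh network as a surrogate (manual backprop,
# full-batch gradient descent on mean squared error).
W1 = rng.normal(0.0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(4000):
    h = np.tanh(X @ W1 + b1)                    # forward pass
    pred = (h @ W2 + b2).ravel()
    err = (pred - targets) / len(X)             # per-sample loss gradient
    gW2 = h.T @ err[:, None]; gb2 = err.sum()
    dh = err[:, None] * W2.T * (1.0 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def surrogate_loglike(theta):
    h = np.tanh(theta @ W1 + b1)
    return float(h @ W2 + b2) * y_sd + y_mu     # undo target standardization

# Step 3: run Metropolis-Hastings on the cheap surrogate instead of the
# expensive likelihood.
theta, logp = np.zeros(2), surrogate_loglike(np.zeros(2))
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.3, 2)
    lp = surrogate_loglike(prop)
    if np.log(rng.uniform()) < lp - logp:
        theta, logp = prop, lp
    chain.append(theta)
samples = np.array(chain[5000:])                # discard burn-in
print("posterior widths:", samples.std(axis=0))  # compare with true (1.0, 0.5)
```

The payoff is that the expensive likelihood is evaluated only for the training set, not at every MCMC step; LINNA additionally automates the training-set construction and iteratively refines the surrogate near the posterior, which this sketch omits.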

| Search Query: ArXiv Query: search_query=au:"Risa H. Wechsler"&id_list=&start=0&max_results=10
