Interpretable Uncertainty Quantification in AI for HEP

Kavli Affiliate: Brian Nord

| First 5 Authors: Thomas Y. Chen, Biprateep Dey, Aishik Ghosh, Michael Kagan, Brian Nord

| Summary:

Estimating uncertainty is at the core of performing scientific measurements
in high-energy physics (HEP): a measurement is not useful without an estimate of its uncertainty. The
goal of uncertainty quantification (UQ) is inextricably linked to the question,
"how do we physically and statistically interpret these uncertainties?" The
answer to this question depends not only on the computational task we aim to
undertake, but also on the methods we use for that task. For artificial
intelligence (AI) applications in HEP, there are several areas where
interpretable methods for UQ are essential, including inference, simulation,
and control/decision-making. There exist some methods for each of these areas,
but they have not yet been demonstrated to be as trustworthy as more
traditional approaches currently employed in physics (e.g., non-AI frequentist
and Bayesian methods).
Shedding light on the questions above requires additional understanding of
the interplay of AI systems and uncertainty quantification. We briefly discuss
the existing methods in each area and relate them to tasks across HEP. We then
discuss recommendations for avenues to pursue to develop the necessary
techniques for reliable widespread usage of AI with UQ over the next decade.
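As a point of comparison for the "more traditional approaches" mentioned above, a minimal frequentist uncertainty estimate can be sketched with bootstrap resampling. Everything here (the toy data, the function names) is a hypothetical illustration, not a method from the paper:

```python
import random
import statistics

random.seed(0)

# Toy "measurements": hypothetical detector readings drawn from a Gaussian
data = [random.gauss(10.0, 2.0) for _ in range(200)]

def bootstrap_std_error(sample, n_resamples=1000):
    """Estimate the standard error of the sample mean by resampling
    the data with replacement and taking the spread of the resampled means."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(sample) for _ in range(len(sample))]
        means.append(statistics.fmean(resample))
    return statistics.stdev(means)

estimate = statistics.fmean(data)
uncertainty = bootstrap_std_error(data)
print(f"measurement: {estimate:.2f} +/- {uncertainty:.2f}")
```

The appeal of such methods is exactly what the summary highlights: the reported interval has a direct frequentist interpretation, which AI-based UQ techniques have not yet matched in demonstrated trustworthiness.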

| Search Query: ArXiv Query: search_query=au:"Brian Nord"&id_list=&start=0&max_results=10