Kavli Affiliate: Max Tegmark
| First 5 Authors: David “davidad” Dalrymple, Joar Skalse, Yoshua Bengio, Stuart Russell, Max Tegmark
| Summary:
Ensuring that AI systems reliably and robustly avoid harmful or dangerous
behaviours is a crucial challenge, especially for AI systems with a high degree
of autonomy and general intelligence, or systems used in safety-critical
contexts. In this paper, we introduce and define a family of approaches to
AI safety, which we refer to as guaranteed safe (GS) AI. The core feature
of these approaches is that they aim to produce AI systems that are equipped
with high-assurance quantitative safety guarantees. This is achieved by the
interplay of three core components: a world model (which provides a
mathematical description of how the AI system affects the outside world), a
safety specification (which is a mathematical description of what effects are
acceptable), and a verifier (which provides an auditable proof certificate that
the AI satisfies the safety specification relative to the world model). We
outline a number of approaches for creating each of these three core
components, describe the main technical challenges, and suggest a number of
potential solutions to them. We also argue for the necessity of this approach
to AI safety, and for the inadequacy of the main alternative approaches.
| Search Query: ArXiv Query: search_query=au:"Max Tegmark"&id_list=&start=0&max_results=3
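
To make the three-component structure concrete, here is a minimal toy sketch (not taken from the paper) in which the "verifier" is a bounded exhaustive reachability check over a tiny finite-state world model, rather than the formal verification machinery the paper discusses. All names (WorldModel, safety_spec, verify, the braking policy) are hypothetical illustrations of the pattern, under the assumption of a small discrete state space.

```python
# Toy sketch of the guaranteed safe (GS) AI pattern:
#   world model  -> WorldModel.step (how the policy's actions change the state)
#   safety spec  -> safety_spec (which states are acceptable)
#   verifier     -> verify (checks the spec on all states reachable within a horizon
#                   and returns the reachable set as a trivial "certificate")
# All names here are hypothetical; real GS-AI proposals use far richer models and proofs.

from typing import Callable, FrozenSet, Optional, Set, Tuple

State = Tuple[int, int]   # (position on a 0..9 track, velocity)
Action = int              # acceleration command: -1, 0, or +1


class WorldModel:
    """Mathematical description of how the AI system affects the outside world."""

    def __init__(self, policy: Callable[[State], Action]):
        self.policy = policy

    def step(self, s: State) -> State:
        pos, vel = s
        a = self.policy(s)
        vel = max(-3, min(3, vel + a))    # clip velocity
        pos = max(0, min(9, pos + vel))   # clip position to the track
        return (pos, vel)


def safety_spec(s: State) -> bool:
    """Safety specification: never arrive at the end of the track (pos 9) at speed > 1."""
    pos, vel = s
    return not (pos == 9 and abs(vel) > 1)


def verify(model: WorldModel, init: Set[State], horizon: int) -> Optional[FrozenSet[State]]:
    """Verifier: exhaustively checks the spec on every state reachable within
    `horizon` steps. Returns the checked reachable set as a proof certificate,
    or None if any reachable state violates the specification."""
    reachable: Set[State] = set(init)
    frontier: Set[State] = set(init)
    for _ in range(horizon):
        frontier = {model.step(s) for s in frontier} - reachable
        if any(not safety_spec(s) for s in frontier):
            return None
        reachable |= frontier
        if not frontier:
            break  # fixed point reached: no new states to explore
    return frozenset(reachable)


if __name__ == "__main__":
    # Hypothetical policy: accelerate, but brake once past position 3.
    cautious_policy = lambda s: -1 if s[0] >= 4 else 1
    cert = verify(WorldModel(cautious_policy), init={(0, 0)}, horizon=50)
    if cert is None:
        print("violation found")
    else:
        print(f"verified ({len(cert)} reachable states checked)")
```

The design choice illustrated is the one the summary emphasizes: the policy is never trusted directly; safety is a property proved of the (policy, world model) pair against an explicit specification, and the verifier's output is an auditable artifact rather than a judgment call.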