Kavli Affiliate: Peter Ford
| First 5 Authors: Nicolas Lair, Clément Delgrange, David Mugisha, Jean-Michel Dussoux, Pierre-Yves Oudeyer
| Summary:
People are becoming increasingly comfortable using Digital Assistants (DAs)
to interact with services or connected objects. However, for non-programming
users, the available possibilities for customizing their DA are limited and do
not include the possibility of teaching the assistant new tasks. To realize
the full potential of DAs, users should be able to customize their assistants by
instructing them through Natural Language (NL). To provide such
functionalities, NL interpretation in traditional assistants should be
improved: (1) The intent identification system should be able to recognize new
forms of known intents, and to acquire new intents as they are expressed by the
user. (2) To adapt to novel intents, the Natural Language Understanding
module should be sample-efficient and should not rely on a
pretrained model. Rather, the system should continuously collect the training
data as it learns new intents from the user. In this work, we propose AidMe
(Adaptive Intent Detection in Multi-Domain Environments), a user-in-the-loop
adaptive intent detection framework that allows the assistant to adapt to its
user by learning their intents as the interaction progresses. AidMe builds its
repertoire of intents and collects data to train a semantic-similarity model
that can discriminate between the learned intents and autonomously discover
new forms of known intents. AidMe thus addresses two major issues for
instructable digital assistants: intent learning and user adaptation. We
demonstrate the capabilities of AidMe as a standalone system by comparing it
with a one-shot learning system and a pretrained NLU module through simulations
of interactions with a user. We also show how AidMe can be smoothly integrated
into an existing instructable digital assistant.
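
The abstract gives no code, but the mechanism it describes (a growing repertoire of intents, similarity-based matching against known forms, and a fallback to the user when unsure) can be illustrated with a minimal sketch. Everything below is hypothetical: the names (`IntentDetector`, `SIM_THRESHOLD`, `ask_user`) are not the paper's API, and a fixed bag-of-words cosine similarity stands in for the learned similarity model that AidMe trains from collected interaction data.

```python
# Minimal sketch of user-in-the-loop intent detection via semantic
# similarity, in the spirit of AidMe. Hypothetical illustration only:
# the paper learns its similarity model from interaction data, while
# this sketch uses a fixed bag-of-words cosine for self-containment.
from collections import Counter
import math

SIM_THRESHOLD = 0.6  # assumed confidence cutoff, tuned per deployment

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two utterances."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

class IntentDetector:
    def __init__(self):
        # Repertoire: intent name -> list of known surface forms.
        self.repertoire: dict[str, list[str]] = {}

    def detect(self, utterance: str, ask_user) -> str:
        """Return the best-matching intent, asking the user when unsure."""
        best_intent, best_score = None, 0.0
        for intent, forms in self.repertoire.items():
            score = max(cosine_sim(utterance, f) for f in forms)
            if score > best_score:
                best_intent, best_score = intent, score
        if best_intent is not None and best_score >= SIM_THRESHOLD:
            # Confident match: record the utterance as an autonomously
            # discovered new form of a known intent.
            self.repertoire[best_intent].append(utterance)
            return best_intent
        # Unsure: fall back to the user, who labels the utterance with an
        # existing or brand-new intent (user-in-the-loop learning).
        intent = ask_user(utterance, list(self.repertoire))
        self.repertoire.setdefault(intent, []).append(utterance)
        return intent

# Usage: the user labels early utterances; later paraphrases of a known
# intent score above the threshold and are matched without asking.
detector = IntentDetector()
detector.detect("turn on the lights", lambda u, known: "lights_on")
detector.detect("turn on the living room lights", lambda u, known: "lights_on")
```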
| Search Query: ArXiv Query: search_query=au:"Peter Ford"&id_list=&start=0&max_results=10