Language as a Cognitive Tool to Imagine Goals in Curiosity-Driven Exploration

Kavli Affiliate: Peter Ford

| First 5 Authors: Cédric Colas, Tristan Karch, Nicolas Lair, Jean-Michel Dussoux, Clément Moulin-Frier

| Summary:

Developmental machine learning studies how artificial agents can model the
way children learn open-ended repertoires of skills. Such agents need to create
and represent goals, select which ones to pursue and learn to achieve them.
Recent approaches have considered goal spaces that are either fixed and
hand-defined or learned using generative models of states, which limits agents
to sampling goals within the distribution of known effects. We argue that the
ability to imagine out-of-distribution goals is key to enabling creative
discoveries and open-ended learning. Children do so by leveraging the
compositionality of language as a tool to imagine descriptions of outcomes they
never experienced before, targeting them as goals during play. We introduce
IMAGINE, an intrinsically motivated deep reinforcement learning architecture
that models this ability. Such imaginative agents, like children, benefit from
the guidance of a social peer who provides language descriptions. To take
advantage of goal imagination, agents must be able to leverage these
descriptions to interpret their imagined out-of-distribution goals. This
generalization is made possible by modularity: a decomposition between a
learned goal-achievement reward function and a policy, both relying on deep
sets, gated attention and object-centered representations. We introduce the Playground
environment and study how this form of goal imagination improves generalization
and exploration over agents lacking this capacity. In addition, we identify the
properties of goal imagination that enable these results and study the impacts
of modularity and social interactions.
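
The abstract does not include an implementation, but the modular architecture it names (a language goal embedding gating object-centered features, combined with a deep-sets encoder) can be sketched as follows. This is a minimal, hypothetical illustration of a goal-achievement reward function in that spirit; the module names, layer sizes and the max-pooling choice are assumptions for illustration, not the authors' code.

```python
# Minimal sketch (illustrative assumptions, not the IMAGINE authors' code):
# a goal embedding gates object-centered features (gated attention), a shared
# per-object encoder with permutation-invariant pooling (deep sets) produces
# the probability that the described goal is achieved in the current state.
import torch
import torch.nn as nn


class GatedAttentionDeepSetReward(nn.Module):
    def __init__(self, obj_dim: int, goal_dim: int, hidden: int = 64):
        super().__init__()
        # Maps the goal embedding to a gating vector matching object features.
        self.gate = nn.Sequential(nn.Linear(goal_dim, obj_dim), nn.Sigmoid())
        # Shared per-object encoder (deep-sets "phi" network).
        self.phi = nn.Sequential(
            nn.Linear(obj_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Readout applied after permutation-invariant aggregation ("rho").
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, objects: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # objects: (batch, n_objects, obj_dim) object-centered observations
        # goal:    (batch, goal_dim) embedding of the language description
        gate = self.gate(goal).unsqueeze(1)      # (batch, 1, obj_dim)
        gated = objects * gate                   # gated attention on each object
        encoded = self.phi(gated)                # shared encoder per object
        pooled, _ = encoded.max(dim=1)           # permutation-invariant pooling
        return torch.sigmoid(self.rho(pooled))   # P(goal achieved | state)


if __name__ == "__main__":
    batch, n_objects, obj_dim, goal_dim = 4, 3, 16, 32
    reward_fn = GatedAttentionDeepSetReward(obj_dim, goal_dim)
    objects = torch.randn(batch, n_objects, obj_dim)
    goal = torch.randn(batch, goal_dim)
    print(reward_fn(objects, goal).shape)        # torch.Size([4, 1])
```

Because the goal only enters through its embedding, such a reward model can also be queried on imagined, out-of-distribution goal descriptions composed from known words, which is the generalization the abstract relies on.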

| Search Query: ArXiv Query: search_query=au:"Peter Ford"&id_list=&start=0&max_results=10
