Assistive Tele-op: Leveraging Transformers to Collect Robotic Task Demonstrations

Kavli Affiliate: Kevin Parker

| First 5 Authors: Henry M. Clever, Ankur Handa, Hammad Mazhar, Kevin Parker, Omer Shapira

| Summary:

Sharing autonomy between robots and human operators could facilitate data
collection of robotic task demonstrations to continuously improve learned
models. Yet, the means to communicate intent and reason about the future are
disparate between humans and robots. We present Assistive Tele-op, a virtual
reality (VR) system for collecting robot task demonstrations that displays an
autonomous trajectory forecast to communicate the robot’s intent. As the robot
moves, the user can switch between autonomous and manual control when desired.
This allows users to collect task demonstrations both with a high success rate
and with greater ease than manual teleoperation systems. Our system is powered
by transformers, which can provide a window of potential states and actions far
into the future, with almost no added computation time. A key insight is that
human intent can be injected at any location within the transformer sequence if
the user decides that the model-predicted actions are inappropriate. At every
time step, the user can (1) do nothing and allow autonomous operation to
continue while observing the robot’s future plan sequence, or (2) take over and
momentarily prescribe a different set of actions to nudge the model back on
track. We host the videos and other supplementary material at
https://sites.google.com/view/assistive-teleop.
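
The switching mechanism described above can be read as a simple shared-control loop. The sketch below is a hypothetical Python rendering of that loop, not the authors' implementation: `env`, `policy.forecast`, `get_user_override`, `render_forecast`, and the `HORIZON` value are all assumed names, and the transformer is treated as a black box that maps the trajectory so far to a window of predicted future actions.

```python
# Minimal, hypothetical sketch of the shared-control loop the summary
# describes. `env`, `policy`, `get_user_override`, and `render_forecast`
# are assumed interfaces for illustration only; they are not part of the
# authors' released system.

HORIZON = 30  # assumed length of the transformer's forecast window


def assistive_teleop_episode(env, policy, get_user_override, render_forecast):
    """Collect one task demonstration with switchable autonomy."""
    obs = env.reset()
    trajectory = [obs]
    while not env.done():
        # One forward pass of the transformer yields a whole window of
        # predicted future actions, so forecasting adds almost no latency.
        planned_actions = policy.forecast(trajectory, horizon=HORIZON)
        render_forecast(planned_actions)  # display the robot's intent in VR

        override = get_user_override()  # None while the user stays hands-off
        if override is None:
            # Option (1): let autonomous operation continue.
            action = planned_actions[0]
        else:
            # Option (2): inject human intent at this point in the sequence;
            # the correction enters the history the model conditions on,
            # nudging subsequent predictions back on track.
            action = override

        obs = env.step(action)
        trajectory.append((action, obs))
    return trajectory
```

Because the forecast comes from a single forward pass, displaying the full plan window costs essentially nothing beyond computing the next action, which is what makes the intent display practical.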

| Search Query: ArXiv Query: search_query=au:"Kevin Parker"&id_list=&start=0&max_results=3
