Kavli Affiliate: Xiang Zhang
| First 5 Authors: Yuxin Chen
| Summary:
Aligning robot behavior with human preferences is crucial for deploying
embodied AI agents in human-centered environments. A promising solution is
interactive imitation learning from human intervention, where a human expert
observes the policy’s execution and provides interventions as feedback.
However, existing methods often fail to exploit the prior policy to
facilitate learning, which limits sample efficiency. In this work, we
introduce MEReQ (Maximum-Entropy Residual-Q Inverse Reinforcement Learning),
designed for sample-efficient alignment from human intervention. Rather than
inferring the full reward function underlying human behavior, MEReQ infers a residual
reward function that captures the discrepancy between the human expert’s and
the prior policy’s underlying reward functions. It then employs Residual
Q-Learning (RQL) to align the policy with human preferences using this residual
reward function. Extensive evaluations on simulated and real-world tasks
demonstrate that MEReQ achieves sample-efficient policy alignment from human
intervention.
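
The summary names two coupled components: maximum-entropy IRL to infer a residual reward from intervention data, and Residual Q-Learning to re-solve the policy with that residual stacked on top of the prior. Below is a minimal tabular sketch of that loop. The chain MDP, the one-hot features `phi`, the synthetic "expert", and every hyperparameter are illustrative assumptions, not details from the paper; the key pieces are `residual_soft_q`, which updates a residual soft-Q function using only the prior policy (never the prior reward), and the feature-matching gradient step standing in for max-ent IRL.

```python
import numpy as np

S, A, gamma, alpha = 4, 2, 0.9, 1.0  # states, actions, discount, entropy temperature

# Toy chain MDP (illustrative, not from the paper): action 0 steps left,
# action 1 steps right, clipped at the ends.
P = np.zeros((S, A, S))
for s in range(S):
    P[s, 0, max(s - 1, 0)] = 1.0
    P[s, 1, min(s + 1, S - 1)] = 1.0

r_prior = np.zeros((S, A)); r_prior[:, 1] = 0.1    # prior task reward
phi = np.eye(S)[:, None, :].repeat(A, axis=1)      # one-hot state features, shape (S, A, S)

def soft_q_iteration(r, iters=300):
    """Max-entropy soft Q iteration: Q <- r + gamma * E[V], where
    V(s) = alpha * log sum_a exp(Q(s, a) / alpha)."""
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = alpha * np.log(np.exp(Q / alpha).sum(axis=1))
        Q = r + gamma * P @ V
    return Q

def policy(Q):
    """Boltzmann policy pi(a|s) proportional to exp(Q(s, a) / alpha)."""
    p = np.exp((Q - Q.max(axis=1, keepdims=True)) / alpha)
    return p / p.sum(axis=1, keepdims=True)

def residual_soft_q(pi_prior, r_res, iters=300):
    """Residual soft-Q update. Subtracting the prior's soft Bellman equation
    from the total one gives
      Q_res(s,a) = r_res(s,a)
                   + gamma * E_s'[ alpha * log sum_a' pi_prior(a'|s') exp(Q_res(s',a')/alpha) ],
    which depends on the prior *policy* only, not on its reward."""
    Q_res = np.zeros((S, A))
    for _ in range(iters):
        dV = alpha * np.log((pi_prior * np.exp(Q_res / alpha)).sum(axis=1))
        Q_res = r_res + gamma * P @ dV
    return Q_res

def feature_expectations(pi, horizon=200):
    """Discounted state-action feature expectations under pi from a
    uniform initial state distribution."""
    rho, mu, disc = np.full(S, 1.0 / S), np.zeros(phi.shape[-1]), 1.0
    for _ in range(horizon):
        mu += disc * np.einsum("s,sa,saf->f", rho, pi, phi)
        rho = np.einsum("s,sa,sax->x", rho, pi, P)
        disc *= gamma
    return mu

# A hypothetical human preference expressed as a residual on r_prior: avoid state 0.
w_true = np.array([-1.0, 0.0, 0.0, 0.5])
Q_prior = soft_q_iteration(r_prior)
pi_prior = policy(Q_prior)
mu_expert = feature_expectations(policy(soft_q_iteration(r_prior + phi @ w_true)))

# Max-ent IRL on the *residual* reward: ascend the likelihood gradient,
# i.e., match expert vs. learner feature expectations; each learner policy
# is obtained via the residual soft-Q solve, not re-learned from scratch.
w = np.zeros(S)
for _ in range(300):
    Q_res = residual_soft_q(pi_prior, phi @ w)
    pi = policy(Q_prior + Q_res)  # total policy proportional to exp((Q_prior + Q_res)/alpha)
    w += 0.02 * (mu_expert - feature_expectations(pi))

print("recovered residual weights:", np.round(w, 2))
```

In the paper's setting, the expert feature expectations would come from human intervention segments rather than a synthetic soft-optimal expert, and the tabular solves would be replaced by function approximation on continuous control tasks.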
| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=3