Adversarial Attacks on Reinforcement Learning Agents for Command and Control

Kavli Affiliate: John Richardson

| First 5 Authors: Ahaan Dabholkar, James Z. Hare, Mark Mittrick, John Richardson, Nicholas Waytowich

| Summary:

Given the recent success of deep reinforcement learning in training agents to
win complex games like StarCraft and Dota (Defense of the Ancients), there has
been a surge in research applying learning-based techniques to professional
wargaming, battlefield simulation, and modeling. Real-time strategy games and
simulators have become a valuable resource for operational planning and
military research. However, recent work has shown that such learning-based
approaches are highly susceptible to adversarial perturbations. In this paper,
we investigate the robustness of an agent trained for a Command and Control
(C2) task in an environment controlled by an active adversary. The C2 agent is
trained on custom StarCraft II maps using the state-of-the-art RL algorithms
A3C and PPO. We empirically show that an agent trained with these algorithms is
highly susceptible to noise injected by the adversary, and we investigate the
effects these perturbations have on the performance of the trained agent. Our
work highlights the urgent need to develop more robust training algorithms,
especially for critical arenas like the battlefield.
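The kind of fragility described above can be illustrated with a minimal, self-contained sketch. This is not the paper's actual setup: the policy here is a hypothetical fixed linear scorer standing in for a trained A3C/PPO network, and the adversary simply injects bounded uniform noise into the observation vector before the agent acts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained C2 policy: a fixed linear layer
# mapping an 84-dim state vector to 4 action logits (illustrative only,
# not the architecture used in the paper).
W = rng.normal(size=(4, 84))

def policy_action(obs):
    """Greedy action chosen by the stand-in policy."""
    return int(np.argmax(W @ obs))

def adversarial_perturb(obs, epsilon):
    """Bounded uniform noise injected into the observation, mimicking
    an adversary that corrupts the agent's view of the environment."""
    noise = rng.uniform(-epsilon, epsilon, size=obs.shape)
    return obs + noise

obs = rng.normal(size=84)
clean_action = policy_action(obs)

# Count how often a small perturbation flips the agent's decision.
trials = 100
flips = sum(
    policy_action(adversarial_perturb(obs, epsilon=0.5)) != clean_action
    for _ in range(trials)
)
print(f"action changed on {flips}/{trials} perturbed observations")
```

Even this toy adversary, with no knowledge of the policy weights, can flip the chosen action on a fraction of trials; gradient-aware attacks of the kind studied in the paper are far more effective.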
