Robust Network Slicing: Multi-Agent Policies, Adversarial Attacks, and Defensive Strategies

Kavli Affiliate: Feng Wang

| First 5 Authors: Feng Wang, M. Cenk Gursoy, Senem Velipasalar

| Summary:

In this paper, we present a multi-agent deep reinforcement learning (deep RL)
framework for network slicing in a dynamic environment with multiple base
stations and multiple users. In particular, we propose a novel deep RL
framework with multiple actors and a centralized critic (MACC), in which the
actors are implemented as pointer networks to accommodate the varying input
dimension. We evaluate the performance of the proposed deep RL algorithm via
simulations to demonstrate its effectiveness. Subsequently, we develop a
deep-RL-based jammer with limited prior information and a limited power budget.
The goal of the jammer is to minimize the transmission rates achieved with
network slicing and thus degrade the network slicing agents’ performance. We
design a jammer with both listening and jamming phases and address jamming
location optimization as well as jamming channel optimization via deep RL. We
evaluate the jammer at the optimized location, generating interference attacks
on the optimized set of channels by switching between the jamming and listening
phases. We show that the proposed jammer can significantly reduce the victims’
performance without direct feedback or prior knowledge of the network slicing
policies. Finally, we devise a Nash-equilibrium-supervised policy ensemble
mixed strategy profile for network slicing (as a defensive measure) and for
jamming. We evaluate the performance of the proposed policy ensemble algorithm
by applying it to the network slicing agents and the jammer agent in
simulations to show its effectiveness.
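To make the jammer's listen/jam alternation concrete, below is a minimal tabular sketch of an agent that observes per-channel activity during a listening phase and then attacks the busiest channels within a power budget. This is an illustrative assumption, not the paper's method: the paper's jammer uses deep RL and also optimizes its location, while here the exponential-moving-average update, the epsilon-greedy exploration rule, and the channel/budget parameters are all hypothetical.

```python
import random


class SketchJammer:
    """Illustrative jammer alternating listening and jamming phases.

    A tabular stand-in for the deep RL jammer described in the paper:
    it maintains running activity estimates per channel (listening)
    and jams the most active channels within its power budget (jamming).
    """

    def __init__(self, num_channels, budget, epsilon=0.1, seed=0):
        self.num_channels = num_channels
        self.budget = budget                   # max channels jammed per slot
        self.epsilon = epsilon                 # exploration probability
        self.activity = [0.0] * num_channels   # running activity estimates
        self.rng = random.Random(seed)

    def listen(self, observed_rates):
        # Listening phase: update per-channel activity estimates with an
        # exponential moving average of the observed transmission rates.
        for c, rate in enumerate(observed_rates):
            self.activity[c] = 0.9 * self.activity[c] + 0.1 * rate

    def jam(self):
        # Jamming phase: attack the busiest channels within the budget,
        # occasionally swapping one pick for a random channel (exploration).
        ranked = sorted(range(self.num_channels),
                        key=lambda c: self.activity[c], reverse=True)
        chosen = ranked[:self.budget]
        if self.rng.random() < self.epsilon:
            chosen[-1] = self.rng.randrange(self.num_channels)
        return chosen


# Usage: after one listening slot, the jammer targets the two busiest channels.
jammer = SketchJammer(num_channels=4, budget=2)
jammer.listen([1.0, 5.0, 0.5, 3.0])
targets = jammer.jam()
```

The alternation mirrors the structure described above: the agent never receives direct feedback from the slicing agents; it infers which channels carry the most traffic purely from what it overhears while listening.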

| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=3
