PRECISION: Decentralized Constrained Min-Max Learning with Low Communication and Sample Complexities

Kavli Affiliate: Jia Liu

| Summary:

Recently, min-max optimization problems have received increasing attention
due to their wide range of applications in machine learning (ML). However, most
existing min-max solution techniques are either single-machine or distributed
algorithms coordinated by a central server. In this paper, we focus on
decentralized min-max optimization for learning with domain constraints, where
multiple agents collectively solve a nonconvex-strongly-concave min-max saddle
point problem without coordination from any server. Decentralized min-max
optimization problems with domain constraints underpin many important ML
applications, including multi-agent ML fairness assurance and policy
evaluation in multi-agent reinforcement learning. We propose an algorithm
called PRECISION (proximal gradient-tracking and stochastic recursive variance
reduction) that enjoys a convergence rate of $O(1/T)$, where $T$ is the maximum
number of iterations. To further reduce sample complexity, we propose
PRECISION$^+$ with an adaptive batch size technique. We show that the fast
$O(1/T)$ convergence of PRECISION and PRECISION$^+$ to an $\epsilon$-stationary
point implies $O(\epsilon^{-2})$ communication complexity and
$O(m\sqrt{n}\epsilon^{-2})$ sample complexity, where $m$ is the number of
agents and $n$ is the size of the dataset at each agent. To our knowledge, this
is the first work that achieves $O(\epsilon^{-2})$ in both sample and
communication complexities in decentralized min-max learning with domain
constraints. Our experiments also corroborate the theoretical results.
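
To make the mechanics concrete, below is a minimal Python sketch of one round
combining the three ingredients the summary names: a proximal step for the
domain constraint, gradient tracking over a mixing matrix, and a stochastic
recursive variance-reduced gradient correction. Everything here is
illustrative rather than the paper's exact recursion: the box constraint, the
mixing matrix `W`, the oracle names `grad_x`/`grad_y`, and the helper
`precision_step` are all assumptions made for this sketch.

```python
import numpy as np

def prox_box(Z, lo=-1.0, hi=1.0):
    """Proximal operator of a box-constraint indicator, i.e. projection."""
    return np.clip(Z, lo, hi)

def precision_step(W, X, Y, U, V, grad_x, grad_y, eta_x, eta_y):
    """One illustrative decentralized round (hypothetical helper, not the
    paper's exact algorithm).

    W         : (m, m) doubly stochastic mixing matrix
    X, Y      : (m, d) primal / dual iterates, one row per agent
    U, V      : (m, d) gradient trackers (initialize with full local grads)
    grad_x/_y : stochastic gradient oracles mapping (X, Y) -> (m, d) arrays
    """
    # Mix with neighbors, then take a local proximal descent step on the
    # min side and a gradient ascent step on the strongly concave max side.
    X_new = prox_box(W @ X - eta_x * U)
    Y_new = W @ Y + eta_y * V
    # Gradient tracking with a SARAH/SPIDER-style recursive correction:
    # u_i <- sum_j w_ij * u_j + g_x(new iterate) - g_x(old iterate).
    U_new = W @ U + grad_x(X_new, Y_new) - grad_x(X, Y)
    V_new = W @ V + grad_y(X_new, Y_new) - grad_y(X, Y)
    return X_new, Y_new, U_new, V_new

# Toy run: m agents, local loss f_i(x, y) = x.y - 0.5*||y||^2 (concave in y).
rng = np.random.default_rng(0)
m, d = 4, 3
W = np.full((m, m), 1.0 / m)                # complete-graph averaging
X, Y = rng.standard_normal((m, d)), np.zeros((m, d))
gx = lambda X, Y: Y                         # df/dx for the toy loss
gy = lambda X, Y: X - Y                     # df/dy for the toy loss
U, V = gx(X, Y), gy(X, Y)                   # trackers start at full gradients
for _ in range(200):
    X, Y, U, V = precision_step(W, X, Y, U, V, gx, gy, 0.1, 0.1)
```

The gradient trackers let each agent follow the global gradient direction
using only neighbor communication, which is the mechanism behind the
$O(\epsilon^{-2})$ communication bound; per the summary, PRECISION$^+$
additionally adapts the minibatch size to further reduce the sample
complexity.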
