STIMULUS: Achieving Fast Convergence and Low Sample Complexity in Stochastic Multi-Objective Learning

Kavli Affiliate: Jia Liu

| First 5 Authors: Zhuqing Liu

| Summary:

Recently, multi-objective optimization (MOO) has gained attention for its
broad applications in machine learning, operations research, and engineering.
However, MOO algorithm design remains in its infancy, and many existing MOO
methods suffer from unsatisfactory convergence rates and sample complexity. To
address this challenge, in this paper we propose an algorithm called STIMULUS
(stochastic path-integrated multi-gradient recursive estimator), a new and
robust approach for solving MOO problems. Unlike traditional methods, STIMULUS
introduces a simple yet powerful recursive framework for updating stochastic
gradient estimates, which improves convergence performance with low sample
complexity. In addition, we introduce an enhanced version of STIMULUS, termed
STIMULUS-M, which incorporates a momentum term to further expedite
convergence. We establish $O(1/T)$ convergence rates for the proposed methods
in non-convex settings and $O(\exp(-\mu T))$ in strongly convex settings,
where $T$ is the total number of iteration rounds and $\mu$ is the strong
convexity parameter. Additionally, we achieve state-of-the-art sample
complexities of $O\left(n+\sqrt{n}\epsilon^{-1}\right)$ for non-convex
settings and $O\left(n+\sqrt{n}\ln(\mu/\epsilon)\right)$ for strongly convex
settings, where $n$ is the dataset size and $\epsilon>0$ is a desired
stationarity error. Moreover, to alleviate the periodic full-gradient
evaluation required by STIMULUS and STIMULUS-M, we further propose enhanced
versions with adaptive batching, called STIMULUS+ and STIMULUS-M+, and provide
their theoretical analysis. (A minimal sketch of the recursive multi-gradient
update appears below.)
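
To make the summary's "recursive framework for updating stochastic gradient
estimates" concrete, here is a minimal Python sketch of a SPIDER/SARAH-style
path-integrated estimator combined with a two-objective min-norm (MGDA-style)
weighting. The toy quadratic objectives, all variable names, and the
closed-form weighting step are illustrative assumptions on our part, not the
paper's reference implementation or exact pseudocode.

```python
# Sketch: path-integrated recursive multi-gradient estimator (illustrative,
# not the authors' code). Two finite-sum objectives over n samples each.
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 10                       # samples per objective, dimension
A = rng.normal(size=(2, n, d))       # f_i(x) = (1/n) * sum_j 0.5*||x - A[i,j]||^2

def sample_grad(i, x, idx):
    """Minibatch gradient of toy objective i at x over sample indices idx."""
    return x - A[i, idx].mean(axis=0)

def min_norm_weight(g1, g2):
    """Closed-form two-objective MGDA weight: min-norm point on [g1, g2]."""
    diff = g1 - g2
    denom = diff @ diff
    if denom < 1e-12:
        return 0.5
    return float(np.clip((g2 @ diff) / denom, 0.0, 1.0))

x = np.zeros(d)
x_prev = x.copy()
u = np.zeros((2, d))                 # recursive per-objective gradient estimates
eta, q, b, T = 0.2, 32, 16, 200      # step size, full-grad period, batch, rounds

for t in range(T):
    if t % q == 0:
        # Periodic full-gradient evaluation anchors the recursion.
        u = np.stack([sample_grad(i, x, np.arange(n)) for i in range(2)])
    else:
        idx = rng.choice(n, size=b, replace=False)
        # Path-integrated recursion: correct the previous estimate by the
        # minibatch gradient difference along the iterate path.
        u = np.stack([
            u[i] + sample_grad(i, x, idx) - sample_grad(i, x_prev, idx)
            for i in range(2)
        ])
    lam = min_norm_weight(u[0], u[1])
    direction = lam * u[0] + (1.0 - lam) * u[1]  # common descent direction
    x_prev = x.copy()
    x -= eta * direction

print("final common-descent direction norm:", np.linalg.norm(direction))
```

Under the same reading, STIMULUS-M would additionally keep a momentum buffer
(e.g., `m = beta * m + direction` and step along `m`), and STIMULUS+/
STIMULUS-M+ would replace the periodic full gradient at `t % q == 0` with an
adaptively growing minibatch, avoiding full passes over the data.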

| Search Query: ArXiv Query: search_query=au:"Jia Liu"&id_list=&start=0&max_results=3

Read More