Non-Asymptotic Analysis for Single-Loop (Natural) Actor-Critic with Compatible Function Approximation

Kavli Affiliate: Yi Zhou

| First 5 Authors: Yudan Wang, Yue Wang, Yi Zhou, Shaofeng Zou

| Summary:

Actor-critic (AC) is a powerful method for learning an optimal policy in
reinforcement learning, where the critic uses algorithms, e.g., temporal
difference (TD) learning with function approximation, to evaluate the current
policy and the actor updates the policy along an approximate gradient direction
using information from the critic. This paper provides the tightest
non-asymptotic convergence bounds for both the AC and natural AC (NAC)
algorithms. Specifically, existing studies show that AC converges to an
$\epsilon+\varepsilon_{\text{critic}}$ neighborhood of stationary points with
the best known sample complexity of $\mathcal{O}(\epsilon^{-2})$ (up to a log
factor), and NAC converges to an
$\epsilon+\varepsilon_{\text{critic}}+\sqrt{\varepsilon_{\text{actor}}}$
neighborhood of the global optimum with the best known sample complexity of
$\mathcal{O}(\epsilon^{-3})$, where $\varepsilon_{\text{critic}}$ is the
approximation error of the critic and $\varepsilon_{\text{actor}}$ is the
approximation error induced by the insufficient expressive power of the
parameterized policy class. This paper analyzes the convergence of both AC and
NAC algorithms with compatible function approximation. Our analysis eliminates
the term $\varepsilon_{\text{critic}}$ from the error bounds while still
achieving the best known sample complexities. Moreover, we focus on the
challenging single-loop setting with a single Markovian sample trajectory. Our
major technical novelty lies in analyzing the stochastic bias due to
policy-dependent and time-varying compatible function approximation in the
critic, and handling the non-ergodicity of the MDP due to the single Markovian
sample trajectory. Numerical results are also provided in the appendix.
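
Below is a minimal sketch, in Python with NumPy, of how a single-loop natural actor-critic update with compatible function approximation can look. Everything concrete here is an illustrative assumption rather than the paper's exact setup: the small random MDP, the tabular softmax policy, the step sizes, and the use of TD(0) for the critic. The structural points it mirrors are the compatible-feature choice $\varphi_\theta(s,a) = \nabla_\theta \log \pi_\theta(a|s)$, under which the critic's linear weight vector itself gives the natural gradient direction for the actor (the classical compatible function approximation result), and the single-loop structure in which both actor and critic are updated from one Markovian trajectory, one transition per iteration.

```python
# Minimal sketch of single-loop natural actor-critic (NAC) with compatible
# function approximation. The MDP, step sizes, and horizon are illustrative
# assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.95

# Hypothetical random MDP: transition kernel P[s, a] and reward R[s, a].
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
R = rng.uniform(0, 1, size=(nS, nA))

theta = np.zeros((nS, nA))   # softmax policy parameters
w = np.zeros(nS * nA)        # critic weights over compatible features

def policy(s):
    logits = theta[s] - theta[s].max()
    p = np.exp(logits)
    return p / p.sum()

def compatible_feature(s, a):
    # phi_theta(s, a) = grad_theta log pi_theta(a | s), flattened.
    grad = np.zeros((nS, nA))
    grad[s] = -policy(s)
    grad[s, a] += 1.0
    return grad.reshape(-1)

alpha_critic, alpha_actor = 0.05, 0.01
s = rng.integers(nS)
a = rng.choice(nA, p=policy(s))

# Single loop: one Markovian transition per iteration; the critic tracks the
# policy-dependent, time-varying compatible features while the actor moves.
for t in range(20000):
    s_next = rng.choice(nS, p=P[s, a])
    a_next = rng.choice(nA, p=policy(s_next))
    r = R[s, a]

    phi = compatible_feature(s, a)
    phi_next = compatible_feature(s_next, a_next)

    # Critic: TD(0) on the compatible linear approximation Q_w = w^T phi.
    td_error = r + gamma * w @ phi_next - w @ phi
    w += alpha_critic * td_error * phi

    # Actor (NAC): with compatible features, the natural gradient direction
    # is the critic weight vector w itself.
    theta += alpha_actor * w.reshape(nS, nA)

    s, a = s_next, a_next

print("greedy actions per state:", theta.argmax(axis=1))
```

The single-loop structure is what makes the critic's regression target policy-dependent and time-varying, which is precisely the stochastic bias (together with the non-ergodicity induced by the single Markovian trajectory) that the paper's analysis has to control.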

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3
