Kavli Affiliate: Ran Wang
| First 5 Authors: Mohamed Naveed Gul Mohamed, Suman Chakravorty, Raman Goyal, Ran Wang,
| Summary:
We consider the problem of nonlinear stochastic optimal control. This problem
is thought to be fundamentally intractable owing to Bellman’s infamous "curse
of dimensionality". We present a result showing that repeatedly solving an
open-loop deterministic problem from the current state, as in Model
Predictive Control (MPC), yields a feedback policy that is $O(\epsilon^4)$
close to the true globally optimal stochastic policy. Furthermore, empirical
results show that solving the Stochastic Dynamic Programming (DP) problem is
highly susceptible to noise, even when tractable, and in practice, the MPC-type
feedback law offers superior performance even for stochastic systems.
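The MPC-type feedback law described above can be illustrated with a minimal sketch: at every step, re-solve a deterministic open-loop problem from the current state, apply only the first control, and repeat under the true stochastic dynamics. The toy system, cost weights, and the scalar Riccati solver below are illustrative assumptions, not the paper's actual formulation.

```python
import random

def riccati_gain(horizon, q=1.0, r=0.1):
    # Scalar finite-horizon Riccati recursion for dynamics x_{t+1} = x_t + u_t
    # with stage cost q*x^2 + r*u^2 (an assumed toy problem). Backward pass
    # returns the first-step gain K_0 of the deterministic open-loop solution.
    P = q
    K = 0.0
    for _ in range(horizon):
        K = P / (r + P)
        P = q + P - P * P / (r + P)
    return K

def mpc_rollout(x0, steps, horizon, sigma, seed=0):
    # MPC-style loop: re-plan the deterministic open-loop problem from the
    # current state, apply only the first control, then observe process noise.
    rng = random.Random(seed)
    x = x0
    traj = [x]
    for _ in range(steps):
        K = riccati_gain(horizon)             # deterministic re-plan from x
        u = -K * x                            # first control of the open-loop plan
        x = x + u + sigma * rng.gauss(0, 1)   # true stochastic dynamics
        traj.append(x)
    return traj

traj = mpc_rollout(x0=5.0, steps=30, horizon=20, sigma=0.1)
```

For this linear-quadratic toy, re-planning recovers the standard LQR feedback, so the state settles near the origin up to the noise scale; the point of the sketch is only the replan-apply-observe structure, not the $O(\epsilon^4)$ optimality gap itself.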
| Search Query: ArXiv Query: search_query=au:"Ran Wang"&id_list=&start=0&max_results=10