Kavli Affiliate: Biao Huang
| First 5 Authors: Runze Lin, Junghui Chen, Biao Huang, Lei Xie, Hongye Su
| Summary:
In the era of Industry 4.0 and smart manufacturing, process systems
engineering must adapt to digital transformation. While reinforcement learning
offers a model-free approach to process control, its applications are limited
by their dependence on accurate digital twins and well-designed reward functions.
To address these limitations, this paper introduces a novel framework that
integrates inverse reinforcement learning (IRL) with multi-task learning for
data-driven, multi-mode control design. Using historical closed-loop data as
expert demonstrations, IRL extracts optimal reward functions and control
policies. A latent context variable is incorporated to distinguish operating
modes, enabling the training of mode-specific controllers. Case studies on a
continuous stirred tank reactor and a fed-batch bioreactor validate the
effectiveness of this framework in handling multi-mode data and training
adaptable controllers.
| Search Query: ArXiv Query: search_query=au:"Biao Huang"&id_list=&start=0&max_results=3
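
| Illustration: The summary above describes the framework only at a high level. The sketch below is not the authors' algorithm; it is a minimal, self-contained illustration of the general idea on synthetic data: infer a discrete latent context per closed-loop trajectory by clustering, compute expert feature expectations as a crude stand-in for the IRL reward-recovery step, and fit a mode-specific controller per inferred mode. All function names, the toy dynamics, and the clustering step are assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)

def make_demos(n_traj=30, horizon=50):
    """Synthetic 'expert' closed-loop data from two hidden operating modes."""
    demos = []
    for k in range(n_traj):
        mode = k % 2                       # true mode label, hidden from the learner
        setpoint = 1.0 if mode == 0 else -1.0
        gain = 0.8 if mode == 0 else 0.5   # mode-specific expert controller
        x, xs, us = 0.0, [], []
        for _ in range(horizon):
            u = gain * (setpoint - x)      # expert action
            x = 0.9 * x + 0.2 * u + 0.02 * rng.standard_normal()
            xs.append(x)
            us.append(u)
        demos.append((np.array(xs), np.array(us)))
    return demos

def infer_modes(demos, n_modes=2, iters=20):
    """Toy latent-context inference: 1-D k-means on each trajectory's mean state."""
    feats = np.array([[x.mean()] for x, _ in demos])
    centers = np.linspace(feats.min(), feats.max(), n_modes).reshape(-1, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(feats - centers.T), axis=1)
        centers = np.array([[feats[labels == m, 0].mean()] for m in range(n_modes)])
    return labels

def fit_mode_controllers(demos, labels, n_modes=2):
    """Per inferred mode: expert feature expectations plus a fitted affine policy."""
    controllers = {}
    for m in range(n_modes):
        X = np.concatenate([x for (x, u), l in zip(demos, labels) if l == m])
        U = np.concatenate([u for (x, u), l in zip(demos, labels) if l == m])
        # Expert feature expectations over [x, x^2]; full feature-matching IRL
        # would compare these against a learned policy's feature expectations
        # to recover reward weights. Here they are reported only for illustration.
        reward_w = np.array([X.mean(), (X ** 2).mean()])
        # Mode-specific affine policy u = a*x + b, fitted to the demonstrations.
        A = np.column_stack([X, np.ones_like(X)])
        policy, *_ = np.linalg.lstsq(A, U, rcond=None)
        controllers[m] = {"reward_w": reward_w, "policy": policy}
    return controllers

demos = make_demos()
labels = infer_modes(demos)
controllers = fit_mode_controllers(demos, labels)
for m, c in controllers.items():
    print(f"mode {m}: fitted policy (gain, offset) = {tuple(c['policy'].round(3))}")

Running the script recovers one controller per inferred mode (gains close to the two expert settings), which mirrors, in a very simplified way, how a latent context variable lets a single training procedure produce mode-specific controllers from mixed multi-mode data.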