Kavli Affiliate: Biao Huang
| First 5 Authors: Runze Lin, Yangyang Luo, Xialai Wu, Junghui Chen, Biao Huang
| Summary:
The Organic Rankine Cycle (ORC) is widely used in industrial waste heat
recovery due to its simple structure and easy maintenance. However, in the
context of smart manufacturing in the process industry, traditional
model-based optimization control methods cannot adapt to the ORC system's
varying operating conditions or to sudden changes in operating mode. Deep
reinforcement learning (DRL) offers significant advantages under uncertainty,
as it achieves control objectives directly by interacting with the
environment, without requiring an explicit model of the controlled plant.
Nevertheless, applying DRL directly to a physical ORC system poses
unacceptable safety risks, and its generalization performance under
model-plant mismatch is insufficient to meet ORC control requirements. This
paper therefore proposes a Sim2Real transfer learning-based DRL method for
ORC superheat control, aiming to provide a simple, feasible, and
user-friendly solution for the optimal control of energy systems.
Experimental results show that the proposed method greatly accelerates DRL
training on the ORC control problem and, through Sim2Real transfer, resolves
the agent's generalization problem across multiple operating conditions.
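| Illustrative Example (not from the paper): the Sim2Real recipe summarized
above can be sketched as pretraining a DRL agent on a cheap simulated ORC
superheat model and then fine-tuning the same agent on a mismatched "plant"
environment. Everything below is a hypothetical stand-in, assuming the
gymnasium and stable-baselines3 libraries: the ORCSuperheatEnv dynamics,
gains, time constants, and training budgets are illustrative assumptions,
not the authors' setup.

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class ORCSuperheatEnv(gym.Env):
    """Toy ORC superheat loop: first-order response of superheat (K) to the
    working-fluid pump speed, plus stochastic heat-source disturbances."""

    def __init__(self, gain=2.0, tau=20.0, setpoint=10.0, dt=1.0):
        super().__init__()
        self.gain, self.tau, self.setpoint, self.dt = gain, tau, setpoint, dt
        # Observation: [superheat, tracking error]; action: normalized pump speed.
        self.observation_space = spaces.Box(-50.0, 50.0, (2,), np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, (1,), np.float32)

    def _obs(self):
        return np.array([self.sh, self.sh - self.setpoint], dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.sh = float(self.np_random.uniform(5.0, 15.0))  # initial superheat, K
        self.t = 0
        return self._obs(), {}

    def step(self, action):
        u = float(np.clip(action[0], -1.0, 1.0))
        d = 0.1 * float(self.np_random.normal())  # heat-source fluctuation
        # Euler step of a first-order lag: tau * d(sh)/dt = -sh + 10 * gain * u
        self.sh += self.dt / self.tau * (-self.sh + 10.0 * self.gain * u) + d
        self.sh = float(np.clip(self.sh, -50.0, 50.0))
        self.t += 1
        reward = -abs(self.sh - self.setpoint)  # track the superheat setpoint
        return self._obs(), reward, False, self.t >= 200, {}


# 1) Pretrain cheaply and safely in simulation (nominal parameters).
sim_env = ORCSuperheatEnv(gain=2.0, tau=20.0)
agent = SAC("MlpPolicy", sim_env, verbose=0)
agent.learn(total_timesteps=20_000)

# 2) Sim2Real transfer: reuse the pretrained weights on a mismatched "plant"
#    (a different gain and time constant stand in for model-plant mismatch)
#    and fine-tune with far fewer interactions than training from scratch.
real_env = ORCSuperheatEnv(gain=2.6, tau=28.0)
agent.set_env(real_env)
agent.learn(total_timesteps=2_000, reset_num_timesteps=False)

Warm-starting the second phase (reset_num_timesteps=False) is what makes it
fine-tuning rather than retraining; on a real plant, that short second phase
would be the closely monitored commissioning run.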
| Search Query: ArXiv Query: search_query=au:"Biao Huang"&id_list=&start=0&max_results=3