Journal articles: 2 results found
1. Reinforcement Learning in Process Industries: Review and Perspective
Authors: Oguzhan Dogru, Junyao Xie, Om Prakash, Ranjith Chiplunkar, Jansen Soesanto, Hongtian Chen, Kirubakaran Velswamy, Fadi Ibrahim, Biao Huang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 2, pp. 283-300 (18 pages).
This survey paper provides a review and perspective on intermediate and advanced reinforcement learning (RL) techniques in process industries. It offers a holistic approach by covering all levels of the process control hierarchy. The survey paper presents a comprehensive overview of RL algorithms, including fundamental concepts like Markov decision processes and different approaches to RL, such as value-based, policy-based, and actor-critic methods, while also discussing the relationship between classical control and RL. It further reviews the wide-ranging applications of RL in process industries, such as soft sensors, low-level control, high-level control, distributed process control, fault detection and fault-tolerant control, optimization, planning, scheduling, and supply chain. The survey paper discusses the limitations and advantages, trends and new applications, and opportunities and future prospects for RL in process industries. Moreover, it highlights the need for a holistic approach in complex systems due to the growing importance of digitalization in the process industries.
Keywords: Process control; Process systems engineering; Reinforcement learning
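The value-based methods surveyed in this paper can be illustrated with a toy example. The sketch below is not from the paper; it is a minimal tabular Q-learning loop on a hypothetical setpoint-tracking task (discretized tank level, inflow adjustments), assuming a made-up transition and reward model for illustration only.

```python
# Minimal illustrative sketch (not from the paper): tabular Q-learning on a toy
# setpoint-tracking MDP. States are discretized tank levels; actions adjust inflow.
import numpy as np

n_levels, n_actions, setpoint = 11, 3, 5        # levels 0..10; actions map to -1, 0, +1
Q = np.zeros((n_levels, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(level, action):
    """Hypothetical dynamics: apply inflow change plus a small disturbance; reward penalizes deviation."""
    new_level = int(np.clip(level + (action - 1) + rng.integers(-1, 2), 0, n_levels - 1))
    reward = -abs(new_level - setpoint)
    return new_level, reward

for episode in range(500):
    level = int(rng.integers(n_levels))
    for _ in range(50):
        # Epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[level].argmax())
        nxt, r = step(level, a)
        # Standard Q-learning temporal-difference update
        Q[level, a] += alpha * (r + gamma * Q[nxt].max() - Q[level, a])
        level = nxt

print("Greedy inflow change per level:", Q.argmax(axis=1) - 1)  # -1: drain, 0: hold, +1: fill
```

After training, the greedy policy should steer each discretized level toward the setpoint; the same value-based template underlies the deeper RL variants the survey reviews.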
2. Actor-Critic Reinforcement Learning and Application in Developing Computer-Vision-Based Interface Tracking (Cited: 1)
Authors: Oguzhan Dogru, Kirubakaran Velswamy, Biao Huang. Engineering (SCIE, EI), 2021, Issue 9, pp. 1248-1261 (14 pages).
This paper synchronizes control theory with computer vision by formalizing object tracking as a sequential decision-making process. A reinforcement learning (RL) agent successfully tracks an interface between two liquids, which is often a critical variable to track in many chemical, petrochemical, metallurgical, and oil industries. This method utilizes less than 100 images for creating an environment, from which the agent generates its own data without the need for expert knowledge. Unlike supervised learning (SL) methods that rely on a huge number of parameters, this approach requires far fewer parameters, which naturally reduces its maintenance cost. Besides its frugal nature, the agent is robust to environmental uncertainties such as occlusion, intensity changes, and excessive noise. From a closed-loop control context, an interface location-based deviation is chosen as the optimization goal during training. The methodology showcases RL for real-time object-tracking applications in the oil sands industry. Along with a presentation of the interface tracking problem, this paper provides a detailed review of one of the most effective RL methodologies: the actor-critic policy.
Keywords: Interface tracking; Object tracking; Occlusion; Reinforcement learning; Uniform manifold approximation and projection
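The actor-critic idea reviewed in this paper can be sketched on a much simpler stand-in task. The code below is not the authors' implementation; it is a one-step actor-critic on a hypothetical 1-D tracking problem where the agent nudges its estimate of an interface position and the reward is the negative tracking deviation, echoing the "interface location-based deviation" objective described in the abstract.

```python
# Minimal illustrative sketch (not the paper's method): one-step actor-critic
# on a toy 1-D interface-tracking task with a drifting target position.
import numpy as np

rng = np.random.default_rng(1)
n_bins, n_actions = 21, 3                 # discretized deviation states; actions map to -1, 0, +1
theta = np.zeros((n_bins, n_actions))     # actor: softmax policy parameters
V = np.zeros(n_bins)                      # critic: state-value estimates
alpha_pi, alpha_v, gamma = 0.05, 0.1, 0.95

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def deviation_state(est, true_pos):
    """Encode the signed tracking error as a discrete state index."""
    return int(np.clip(est - true_pos + n_bins // 2, 0, n_bins - 1))

for episode in range(2000):
    true_pos, est = int(rng.integers(n_bins)), int(rng.integers(n_bins))
    for _ in range(30):
        s = deviation_state(est, true_pos)
        probs = softmax(theta[s])
        a = rng.choice(n_actions, p=probs)
        est = int(np.clip(est + (a - 1), 0, n_bins - 1))                       # agent moves its estimate
        true_pos = int(np.clip(true_pos + rng.integers(-1, 2), 0, n_bins - 1))  # interface drifts
        r = -abs(est - true_pos)                                               # negative deviation as reward
        s_next = deviation_state(est, true_pos)
        td_error = r + gamma * V[s_next] - V[s]        # critic's TD error, used as the advantage estimate
        V[s] += alpha_v * td_error                     # critic update
        grad = -probs; grad[a] += 1.0                  # gradient of log softmax policy at (s, a)
        theta[s] += alpha_pi * td_error * grad         # actor update (policy gradient)
```

The split of roles is the key point: the critic's TD error evaluates each move, and the actor's softmax policy is adjusted in the direction that makes well-evaluated moves more likely.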