This study proposes an active surge control method based on deep reinforcement learning to ensure the stability of compressors when adhering to the pressure rise command across the wide operating range of an aeroengine. Initially, the study establishes the compressor dynamic model with uncertainties, disturbances, and Close-Coupled Valve (CCV) actuator delay. Building upon this foundation, a Partially Observable Markov Decision Process (POMDP) is defined to facilitate active surge control. To address the issue of unobservability, a nonlinear state observer is designed using a finite-time high-order sliding mode. Furthermore, an Improved Soft Actor-Critic (ISAC) algorithm is developed, incorporating prioritized experience replay and adaptive temperature parameter techniques, to strike a balance between exploration and convergence during training. In addition, reasonable observation variables, error-segmented reward functions, and random initialization of model parameters are employed to enhance robustness and generalization capability. Finally, to assess the effectiveness of the proposed method, numerical simulations are conducted, and the method is compared with the fuzzy adaptive backstepping method and the Second-Order Sliding Mode Control (SOSMC) method. The simulation results demonstrate that the deep reinforcement learning-based controller outperforms the other methods in both tracking accuracy and robustness. Consequently, the proposed active surge controller can effectively ensure stable operation of compressors in the high-pressure-ratio, high-efficiency region.
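The abstract names prioritized experience replay as one of the two ISAC ingredients. As a minimal illustration of the standard technique (not the paper's implementation; function name, hyperparameters, and the toy TD-error values below are illustrative assumptions), transitions are sampled with probability proportional to a power of their absolute TD error, and importance-sampling weights correct the resulting bias:

```python
import numpy as np

def per_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6, rng=None):
    """Prioritized sampling: P(i) ∝ (|delta_i| + eps)^alpha, with
    importance weights w_i = (N * P(i))^(-beta), normalized so max(w) = 1."""
    rng = np.random.default_rng(rng)
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()           # sampling distribution
    idx = rng.choice(len(td_errors), size=batch_size, p=probs)
    weights = (len(td_errors) * probs[idx]) ** (-beta)
    weights /= weights.max()                        # keep weights in (0, 1]
    return idx, weights

# toy buffer of four transitions; the large-error one is sampled most often
td = np.array([0.1, 2.0, 0.5, 0.05])
idx, w = per_sample(td, batch_size=3, rng=0)
```

In practice alpha controls how aggressively high-error transitions are favored and beta is typically annealed toward 1 over training.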
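The other ISAC ingredient named in the abstract is the adaptive temperature parameter. A minimal sketch of the standard automatic-entropy-tuning update used in SAC follows (again a generic illustration, not the paper's code; the learning rate and sample values are assumptions): the temperature alpha is adjusted by gradient descent on J(alpha) = E[-alpha (log pi(a|s) + H_target)], working in log-space so alpha stays positive.

```python
import numpy as np

def update_temperature(log_alpha, log_probs, target_entropy, lr=3e-4):
    """One gradient step on J(alpha) = E[-alpha * (log pi + H_target)].
    Parameterizing by log_alpha guarantees alpha = exp(log_alpha) > 0."""
    alpha = np.exp(log_alpha)
    # dJ/d(log_alpha) = -alpha * mean(log_probs + target_entropy)
    grad = -alpha * np.mean(log_probs + target_entropy)
    return log_alpha - lr * grad

log_alpha = 0.0
# Current policy entropy (-mean log pi = 0.4) is below the target (1.0),
# so the update should raise alpha and push the policy to explore more.
log_probs = np.array([-0.5, -0.3])
new_log_alpha = update_temperature(log_alpha, log_probs, target_entropy=1.0)
```

This feedback raises alpha when the policy is less random than the entropy target and lowers it otherwise, which is the exploration/convergence balance the abstract refers to.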
Funding: co-supported by the National Natural Science Foundation of China (No. 51976089), the Science Center for Gas Turbine Project, China (No. P2023-B-V-001-001), and the China Scholarship Council (No. 202306830092).