Abstract: Led by four generations of leadership, from the late Prof. JIANG Sichang (academician, Chinese Academy of Engineering), Prof. YANG Weiyan (Honorary President, Division of Otolaryngology Head and Neck Surgery, Chinese Medical Association), and Prof. HAN Dongyi (President-elect, Division of Otolaryngology Head and Neck Surgery, Chinese Medical Association) to the current Prof. YANG Shiming (President, Division of Otolaryngologists,
Abstract: Introduction to HEC. Harbin Electric Machinery Company Limited (HEC) is a pivotal enterprise in China producing large electric machinery and ancillary control equipment. The hydro units made by HEC have
Funding: Supported by the Shanghai Science and Technology Committee (22511105500), the National Natural Science Foundation of China (62172299, 62032019), the Space Optoelectronic Measurement and Perception Laboratory, Beijing Institute of Control Engineering (LabSOMP-2023-03), and the Central Universities of China (2023-4-YB-05).
Abstract: Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management. However, because of the model's inherent uncertainty, rigorous validation is required before it is applied to real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the "black-box" nature of DRL makes model behavior difficult to test. We propose a novel performance-improvement framework based on probabilistic automata that proactively identifies and corrects critical vulnerabilities of DRL systems, so that the performance of DRL models in real tasks can be improved with minimal model modifications. First, a probabilistic automaton is constructed from the historical trajectories of the DRL system by abstracting states into probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) is used to identify the key PDMU-action pairs with the greatest influence on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under each key PDMU, we search for the new action with the greatest influence on favorable outcomes. Finally, the key PDMU, the undesirable action, and the new action are encapsulated as a monitor that guides the DRL system toward more favorable results through real-time monitoring and correction. Evaluations in two standard reinforcement learning environments and three real job-scheduling scenarios confirm the effectiveness of the method, providing practical assurance for deploying DRL models in real-world applications.
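The pipeline the abstract describes (abstract states into PDMUs, walk backwards from failures with a reverse BFS, then wrap the policy in a corrective monitor) can be sketched roughly as follows. The state-abstraction function, the "good"/"bad" outcome labels, and all function names here are illustrative assumptions, not the paper's actual implementation:

```python
from collections import defaultdict, deque

def build_automaton(trajectories, abstract):
    """Build a probabilistic automaton over abstracted states (PDMUs).
    Each trajectory is (state_action_seq, outcome), outcome in {"good", "bad"}."""
    trans = defaultdict(lambda: defaultdict(int))  # (pdmu, action) -> next node -> count
    stats = defaultdict(lambda: [0, 0])            # (pdmu, action) -> [bad, total]
    for seq, outcome in trajectories:
        for i, (state, action) in enumerate(seq):
            u = abstract(state)
            nxt = abstract(seq[i + 1][0]) if i + 1 < len(seq) else outcome
            trans[(u, action)][nxt] += 1
            stats[(u, action)][1] += 1
            if outcome == "bad":
                stats[(u, action)][0] += 1
    return trans, stats

def key_pairs(trans, stats, k=1):
    """Reverse BFS from the 'bad' terminal node: collect PDMU-action pairs that
    can reach failure, then rank them by empirical bad-outcome rate."""
    preds = defaultdict(set)  # node -> set of (pdmu, action) leading into it
    for (u, a), nxts in trans.items():
        for nxt in nxts:
            preds[nxt].add((u, a))
    reachable, seen, frontier = set(), set(), deque(["bad"])
    while frontier:
        node = frontier.popleft()
        for (u, a) in preds[node]:
            if (u, a) not in reachable:
                reachable.add((u, a))
                if u not in seen:
                    seen.add(u)
                    frontier.append(u)
    return sorted(reachable, key=lambda p: stats[p][0] / stats[p][1], reverse=True)[:k]

def make_monitor(key_pdmu, bad_action, new_action, abstract):
    """Wrap the policy: when the abstracted state hits the key PDMU and the
    policy proposes the undesirable action, substitute the corrective one."""
    def monitor(state, proposed_action):
        if abstract(state) == key_pdmu and proposed_action == bad_action:
            return new_action
        return proposed_action
    return monitor
```

As in the abstract, only state-action sequences and final outcomes are consumed; no access to the DRL model's internals is needed, which is what makes the "black-box" setting workable.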
Abstract: How to determine the weight values and how to determine the number of variables are two difficult questions for the unequal-weight moving average forecasting model. Based on the concepts of the weight contribution rate and the key neural node, this paper puts forward a new method by which the weight values and the number of variables can be determined. Reality-imitating experiments show that, by means of a neural network, the difficulties of the traditional prediction method can be overcome and predictive precision improved at the same time.
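For orientation, a minimal sketch of the forecasting model being discussed: an unequal-weight moving average, with the weights fitted from history. The least-squares fit below is a simple stand-in for the paper's neural-network weight determination (the weight-contribution-rate and key-neural-node concepts are not reproduced here), and the function names are assumptions:

```python
import numpy as np

def wma_forecast(series, weights):
    """One-step-ahead unequal-weight moving average forecast over the last
    len(weights) observations, most recent last. Weights are normalised."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    window = np.asarray(series[-len(w):], dtype=float)
    return float(window @ w)

def fit_weights(series, n_vars):
    """Determine the n_vars weights by least squares on one-step-ahead
    prediction error over the history -- a stand-in for the neural approach."""
    x = np.asarray(series, dtype=float)
    # sliding windows of n_vars consecutive values; target is the next value
    X = np.stack([x[i:i + n_vars] for i in range(len(x) - n_vars)])
    y = x[n_vars:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```

The number of variables (the window length n_vars) can then be chosen by comparing one-step prediction error across candidate lengths, which is the second question the abstract addresses.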