Funding: supported by the National Basic Research Program of China (973 Program) (No. 2009CB326203), the National Natural Science Foundation of China (No. 61004103), the National Research Foundation for the Doctoral Program of Higher Education of China (No. 20100111110005), the China Postdoctoral Science Foundation (No. 20090460742), the National Engineering Research Center of Special Display Technology (No. 2008HGXJ0350), the Natural Science Foundation of Anhui Province (No. 090412058, No. 070412035), the Natural Science Foundation of Anhui Province of China (No. 11040606Q44, No. 090412058), and the Specialized Research Fund for Doctoral Scholars of Hefei University of Technology (No. GDBJ2009-003, No. GDBJ2009-067).
Abstract: Suitable rescue path selection is critical for saving lives and reducing disaster losses, and it has been a key issue in the field of disaster response management. In this paper, we present a path selection algorithm based on Q-learning for disaster response applications. We treat a rescue team as an agent that operates in a dynamic and dangerous environment and must find a safe and short path in the least time. We first propose a path selection model for disaster response management and show that path selection under this model is a Markov decision process. Then, we introduce Q-learning and design strategies for action selection and for avoiding cyclic paths. Finally, experimental results show that our algorithm can find a safe and short path in a dynamic and dangerous environment, providing a concrete and useful reference for practical disaster response management.
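The abstract above describes tabular Q-learning over a hazardous environment, with strategies for action selection and for avoiding cyclic paths. The following is only a minimal sketch of that idea, not the paper's actual model: the grid layout, reward values, hyperparameters, and revisit-avoidance rule are illustrative assumptions.

```python
# Minimal tabular Q-learning sketch for rescue-path selection on a grid.
# Grid layout, rewards, and hyperparameters are assumptions for illustration.
import random

ROWS, COLS = 5, 5
START, GOAL = (0, 0), (4, 4)
DANGER = {(1, 2), (2, 2), (3, 1)}              # hazardous cells (assumed layout)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # illustrative hyperparameters
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(4)}

def move(state, a):
    """Clamped grid move for action index a."""
    r, c = state
    dr, dc = ACTIONS[a]
    return (max(0, min(ROWS - 1, r + dr)), max(0, min(COLS - 1, c + dc)))

def reward_of(nxt):
    if nxt == GOAL:
        return 100.0      # reached the rescue target
    if nxt in DANGER:
        return -100.0     # entered a dangerous cell
    return -1.0           # small step cost keeps the path short

def choose_action(state, visited):
    """Epsilon-greedy selection that prefers cells not yet visited in this episode."""
    fresh = [a for a in range(4) if move(state, a) not in visited]
    pool = fresh if fresh else list(range(4))  # fall back if every neighbour was visited
    if random.random() < EPSILON:
        return random.choice(pool)
    return max(pool, key=lambda a: Q[(state, a)])

for episode in range(2000):
    state, visited = START, {START}
    for _ in range(200):                       # cap episode length
        a = choose_action(state, visited)
        nxt = move(state, a)
        r = reward_of(nxt)
        terminal = nxt == GOAL or nxt in DANGER
        target = r if terminal else r + GAMMA * max(Q[(nxt, b)] for b in range(4))
        Q[(state, a)] += ALPHA * (target - Q[(state, a)])
        state = nxt
        visited.add(state)
        if terminal:
            break
```

Penalizing dangerous cells while charging a small cost per step pushes the learned policy toward paths that are both safe and short; discouraging revisits in the action-selection step is one simple way to suppress cycles.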
Abstract: This paper examines the role of transformational leadership in transforming an organization first into a knowledge-based organization, then into a learning organization, so that it ultimately becomes an innovative company. Important features of such a leader, including the ability to assist in developing and accommodating the implementation of knowledge management programs, learning organization concepts, and innovation protocols, are discussed. The paper demonstrates that shifting an organization to become knowledge-based, then a learning organization, and finally an innovative company can involve some unique attributes of transformational leadership. In that regard, the paper also demonstrates that organizations first need to create, capture, transfer, and mobilize knowledge before it can be used for learning and then for innovation. The paper presents a method for studying how successful innovation leaders find themselves acting in three roles: knowledge leader, learning leader, and innovation leader.
Funding: supported by the National Key R&D Program of China (2022YFE0100100).
Abstract: Trucks consume large amounts of energy. Hybrid technology achieves energy savings while maintaining a long driving range, making it an effective energy-saving technology for trucks. Recovering engine waste heat through the organic Rankine cycle further enhances engine efficiency and provides effective thermal management. However, such a powertrain greatly increases the complexity of the energy management system. To design an energy management system with high efficiency and robustness, this study proposes a deep reinforcement learning embedded rule-based energy management system, which optimizes the key parameters of the rule-based energy management system by embedding deep reinforcement learning into it. The scheme therefore combines the strong optimization capability of deep reinforcement learning with the excellent robustness of rule-based control. To verify the feasibility of the scheme, this study builds a system dynamics model and carries out a simulation study. Subsequently, a hybrid powertrain semi-physical experimental bench was constructed and a rapid control prototype experimental study was carried out. The simulation results show that the deep reinforcement learning embedded rule-based energy management system reduces energy consumption by 4.31% compared with the rule-based energy management system under the C-WTVC driving cycle. In addition, energy saving and safe operation are also achieved under other unfamiliar, untrained driving cycles. The rapid control prototype experiments show good agreement with the simulation, which demonstrates the potential for real-vehicle engineering applications and promotes the engineering application of deep reinforcement learning.
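To make the idea of embedding deep reinforcement learning inside a rule-based EMS concrete, the sketch below shows a rule-based power split whose key thresholds are supplied by a policy object standing in for the trained DRL actor. The rule structure, the parameter names (engine_on_kw, soc_low, soc_high), the demand trace, and the fuel/SOC proxies are illustrative assumptions, not the system described in the abstract.

```python
# Sketch: rule-based EMS whose key thresholds are provided by a DRL policy.
# All rules, names, and numbers here are placeholders for illustration.

def rule_based_split(p_demand_kw, soc, engine_on_kw, soc_low, soc_high):
    """Rule-based power split: engine covers high demand or charges at low SOC."""
    if soc < soc_low:                              # battery depleted: engine drives and charges
        p_engine = p_demand_kw + 10.0
    elif p_demand_kw > engine_on_kw and soc < soc_high:
        p_engine = p_demand_kw                     # high demand: engine supplies it
    else:
        p_engine = 0.0                             # low demand / high SOC: electric drive
    return p_engine, p_demand_kw - p_engine        # (engine power, motor power)

class ThresholdPolicy:
    """Stand-in for the DRL actor: maps the state to rule parameters.
    In the scheme described above, these outputs would come from a trained
    deep reinforcement learning network rather than this fixed mapping."""
    def act(self, soc, p_demand_kw):
        engine_on_kw = 30.0 + 20.0 * (soc - 0.5)   # illustrative mapping only
        return engine_on_kw, 0.3, 0.8              # engine_on_kw, soc_low, soc_high

policy = ThresholdPolicy()
soc, fuel = 0.6, 0.0
for p_demand in [12.0, 45.0, 80.0, 5.0, 60.0]:     # toy demand trace (kW)
    engine_on_kw, soc_low, soc_high = policy.act(soc, p_demand)
    p_eng, p_mot = rule_based_split(p_demand, soc, engine_on_kw, soc_low, soc_high)
    fuel += 0.08 * p_eng                           # crude fuel proxy
    soc -= 0.001 * p_mot                           # crude battery proxy
    print(f"demand={p_demand:5.1f} kW  engine={p_eng:5.1f}  motor={p_mot:6.1f}  soc={soc:.3f}")
```

The design point is that the rule layer keeps the final control authority, which is where the robustness comes from, while the learned policy only tunes the thresholds the rules act on.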
Funding: supported by the National Natural Science Foundation of China (Grant No. 51906173).
Abstract: Hybrid electric vehicles (HEVs) are acknowledged to be an effective way to improve the efficiency of internal combustion engines (ICEs) and reduce fuel consumption. Although the ICE in an HEV can maintain high efficiency during driving, its thermal efficiency is approximately 40%, and the rest of the fuel energy is discharged as various forms of waste heat. It is therefore important to recover the engine waste heat. Because of the excellent waste-heat-recovery performance of the organic Rankine cycle (ORC), an HEV integrated with an ORC (HEV-ORC) has been proposed. However, the addition of the ORC creates a stiff, multi-energy problem, greatly increasing the complexity of the energy management system (EMS). Considering the great potential of deep reinforcement learning (DRL) for solving complex control problems, this work proposes a DRL-based EMS for an HEV-ORC. The simulation results demonstrate that the DRL-based EMS saves 2% more fuel energy than the rule-based EMS because it provides higher average efficiencies for both the engine and the motor, as well as more stable ORC power and battery state. Furthermore, the battery always has sufficient capacity to store the ORC power. Consequently, DRL shows great potential for solving complex energy management problems.
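One way to see how a DRL agent would interface with such a plant is an environment in the usual reset/step form, exposing SOC, ORC power, and power demand as the state, engine power as the action, and a reward that trades off fuel use against SOC drift and ORC power fluctuation. The component models, limits, and reward weights below are rough placeholders under stated assumptions, not the HEV-ORC model from the paper.

```python
# Schematic HEV-ORC energy-management environment; all dynamics are placeholders.
import random

class HevOrcEnv:
    """State = (SOC, ORC power, power demand); action = engine power in kW."""

    def reset(self):
        self.soc, self.orc_kw, self.t = 0.6, 0.0, 0
        self.p_demand = self._demand()
        return (self.soc, self.orc_kw, self.p_demand)

    def _demand(self):
        return 20.0 + 30.0 * random.random()        # stand-in for a driving-cycle trace

    def step(self, engine_kw):
        prev_orc = self.orc_kw
        self.orc_kw = 0.05 * engine_kw              # placeholder waste-heat-recovery model
        batt_kw = self.p_demand - engine_kw - self.orc_kw
        self.soc -= 0.0005 * batt_kw                # crude battery model
        fuel = 0.07 * engine_kw                     # crude fuel-rate proxy
        # reward penalises fuel use, SOC drift from 0.6, and ORC power swings
        reward = -fuel - 5.0 * abs(self.soc - 0.6) - 0.1 * abs(self.orc_kw - prev_orc)
        self.t += 1
        self.p_demand = self._demand()
        return (self.soc, self.orc_kw, self.p_demand), reward, self.t >= 1800

env = HevOrcEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    engine_kw = min(obs[2], 40.0)   # placeholder policy; a trained DRL agent would act here
    obs, r, done = env.step(engine_kw)
    total += r
print(f"episode return: {total:.1f}")
```

In a full setup, the placeholder policy in the usage loop would be replaced by a trained DRL agent acting on the observed state at every step.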