Funding: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1099611).
Abstract: High concentrations of indoor CO₂ pose severe health risks to building occupants. Often, mechanical equipment is used to provide sufficient ventilation as a remedy to high indoor CO₂ concentrations. However, such equipment consumes large amounts of energy, substantially increasing building energy consumption. The issue therefore becomes an optimization problem: maintaining CO₂ levels below a certain threshold while using the minimum amount of energy possible. To that end, we propose an intelligent approach in which a supervised learning-based virtual sensor interacts with a deep reinforcement learning (DRL)-based controller to efficiently regulate indoor CO₂ with minimal energy use. The data used to train and test the DRL agent come from a 3-month field experiment conducted at a kindergarten equipped with a heat recovery ventilator. The results show that, unlike the manual control initially employed at the kindergarten, the DRL agent could always maintain the CO₂ concentrations below acceptable limits. Furthermore, a 58% reduction in the energy consumption of the ventilator under the DRL control compared to the manual control was estimated. The demonstrated approach illustrates the potential of leveraging Internet of Things and machine learning algorithms to create comfortable and healthy indoor environments with minimal energy requirements.
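Below is a minimal, hypothetical sketch of how a virtual CO₂ sensor could feed a reinforcement learning control loop of the kind described above. The sensor dynamics, occupancy schedule, reward weights, 1000 ppm threshold, and fan levels are all assumptions for illustration, and a tabular Q-learning agent is used as a lightweight stand-in for the paper's DRL agent; the paper's actual supervised virtual sensor and DRL architecture are not reproduced here.

```python
import random

# Hypothetical sketch: dynamics, schedules, and weights below are assumptions,
# not values from the paper. A tabular Q-learning agent stands in for the DRL agent.

THRESHOLD_PPM = 1000.0        # assumed CO2 threshold the controller must respect
ACTIONS = [0, 1, 2, 3]        # assumed fan levels: off / low / medium / high

def virtual_sensor(co2, occupancy, fan_level):
    """Toy one-step CO2 predictor (ppm): occupant generation minus ventilation removal.
    The paper's virtual sensor is a supervised model trained on field data instead."""
    generation = 40.0 * occupancy      # assumed ppm added per step when occupied
    removal = 120.0 * fan_level        # assumed ppm removed per step per fan level
    return max(420.0, co2 + generation - removal)

def reward(co2, fan_level):
    """Trade off CO2 violations against fan energy, mirroring the stated objective."""
    violation = 1.0 if co2 > THRESHOLD_PPM else 0.0
    return -violation - 0.1 * fan_level

q_table = {}                  # maps (state, action) -> estimated return

def discretize(co2):
    return int(co2 // 100)    # 100 ppm bins as a coarse state

def choose_action(state, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def train(episodes=500, steps_per_day=96, alpha=0.1, gamma=0.95):
    """Q-learning over simulated days; each step is one 15-minute control interval."""
    for _ in range(episodes):
        co2 = 420.0
        for t in range(steps_per_day):
            occupancy = 1.0 if 32 <= t < 72 else 0.0   # assumed occupied 08:00-18:00
            state = discretize(co2)
            action = choose_action(state)
            co2_next = virtual_sensor(co2, occupancy, action)
            r = reward(co2_next, action)
            next_state = discretize(co2_next)
            best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
            old = q_table.get((state, action), 0.0)
            q_table[(state, action)] = old + alpha * (r + gamma * best_next - old)
            co2 = co2_next

if __name__ == "__main__":
    train()
    print("Chosen fan level at 1100 ppm:", choose_action(discretize(1100.0), epsilon=0.0))
```

In the setting described in the abstract, a deep network would replace the lookup table and the learned virtual sensor would replace the toy mass-balance dynamics, but the interaction pattern, in which the sensor supplies predicted CO₂ states and the agent selects ventilation actions to balance air quality against energy, is the same.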