The shortage of available modelling data makes it difficult to guarantee the performance of data-driven building energy prediction (BEP) models for both newly built buildings and existing information-poor buildings. Both knowledge transfer learning (KTL) and data incremental learning (DIL) can address the data shortage issue of such buildings. For new-building scenarios with continuous data accumulation, the performance of BEP models has not been fully investigated with the dynamics of data accumulation taken into account. DIL, which can learn dynamic features from accumulated data, adapt to the developing trend of new-building time-series data, and extend a BEP model's knowledge, has rarely been studied. Previous studies have shown that the performance of KTL models trained with fixed data can be further improved in scenarios with dynamically changing data. Hence, this study proposes an improved cross-building transfer learning BEP strategy that is continuously updated in a coarse data incremental (CDI) manner. The hybrid KTL-DIL strategy (LSTM-DANN-CDI) uses a domain adversarial neural network (DANN) for KTL and long short-term memory (LSTM) as the baseline BEP model. A performance evaluation is conducted to systematically quantify the effectiveness and applicability of KTL and the improved KTL-DIL. Real-world data from 36 buildings of six types are adopted to evaluate the performance of KTL and KTL-DIL in data-driven BEP tasks, considering factors such as the model increment time interval and the available target and source building data volumes. Results indicate that, compared with LSTM, KTL (LSTM-DANN) and the proposed KTL-DIL (LSTM-DANN-CDI) can significantly improve BEP performance for new buildings with limited data. Compared with the pure KTL strategy LSTM-DANN, the improved KTL-DIL strategy LSTM-DANN-CDI achieves better prediction performance, with an average performance improvement ratio of 60%.
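The coarse data incremental (CDI) update schedule described above, i.e. periodically refitting the predictor as target-building data accumulates, can be sketched without the full LSTM-DANN stack. The sketch below is a minimal illustration only: it substitutes a closed-form ridge regression for the LSTM-DANN predictor and assumes a hypothetical increment of 168 hourly samples (one week); neither choice comes from the paper.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    # Closed-form ridge regression as a lightweight stand-in for the
    # LSTM-DANN predictor; the CDI schedule, not the model, is the point here.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cdi_updates(X_stream, y_stream, increment=168):
    """Refit whenever `increment` new target-building samples have accumulated.

    `increment=168` mimics a weekly update for hourly data (an illustrative
    value, not one reported in the paper).
    """
    models = []
    for t in range(increment, len(X_stream) + 1, increment):
        w = fit_ridge(X_stream[:t], y_stream[:t])  # train on all data so far
        models.append((t, w))
    return models

# Synthetic stand-in for an accumulating building-load stream.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.01 * rng.normal(size=1000)
models = cdi_updates(X, y, increment=200)
print(len(models))  # one refit per 200-sample increment
```

A coarser increment makes each update cheaper but lets the model lag behind the newest operating conditions; the paper's evaluation of the "model increment time interval" probes exactly this trade-off.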
Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks' great successes, their training process relies on a large amount of data prepared in advance, which is often impractical in real-world settings such as streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning is associated with the challenge of catastrophic forgetting: performance on previous tasks drastically degrades after learning a new task. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained in continual domains. Specifically, two components are applied: data translation based on transfer learning and knowledge distillation. The former translates a portion of new data to reconstruct the partial data distribution of the old domain; the latter uses an old model as a teacher to guide the new model. Experimental results on three datasets show that the combination of these two methods effectively alleviates catastrophic forgetting.
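The knowledge distillation component above, where an old model acts as teacher for the new model, is typically implemented as a loss on temperature-softened output distributions. The following is a minimal NumPy sketch of such a Hinton-style distillation term, not the paper's exact loss; the temperature T=2.0 and the KL(teacher || student) form are common illustrative choices.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened outputs, scaled by T^2 so its
    # gradient magnitude stays comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)

teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[0.1, 1.5, -0.5]])
loss_same = distillation_loss(teacher, teacher)   # ~0: student matches teacher
loss_diff = distillation_loss(student, teacher)   # > 0: student disagrees
print(loss_same, loss_diff)
```

In a continual-learning setup, this term is added to the ordinary classification loss on new-task data, penalizing drift of the new model's predictions away from the frozen old model's.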
The four-dimensional variational (4D-Var) data assimilation systems used in most operational and research centers use initial condition increments as control variables and adjust the initial increments to find optimal analysis solutions. This approach may sometimes create discontinuities in analysis fields and produce undesirable spin-ups and spin-downs. This study explores using incremental analysis updates (IAU) in 4D-Var to reduce these analysis discontinuities. IAU-based 4D-Var has almost the same mathematical formulation as conventional 4D-Var if the initial condition increments are replaced with time-integrated increments as control variables. The IAU technique was implemented in the NASA/GSFC 4D-Var prototype and compared against a control run without IAU. The results showed that the initial precipitation spikes were removed and that other discontinuities were also reduced, especially in the analysis of surface temperature.
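The contrast above can be written compactly. In the notation below (ours, not the paper's), a standard analysis adds the full increment δx_a to the state at the initial time, whereas IAU spreads it as a constant forcing over an update window of length τ:

```latex
% Conventional update: the whole increment is inserted at t_0,
% which is what produces the initial discontinuity.
x(t_0) \;\leftarrow\; x_b(t_0) + \delta x_a

% IAU: each model step of size \Delta t inside the window
% [t_0,\, t_0 + \tau] receives a proportional slice of the increment.
x_{k+1} \;=\; M_k(x_k) \;+\; \frac{\Delta t}{\tau}\,\delta x_a,
\qquad t_k \in [t_0,\; t_0 + \tau]
```

Summing the forcing over the window recovers the full increment δx_a, so the analysis information is preserved while the state is adjusted gradually; in the IAU-based 4D-Var of the abstract, it is these time-integrated increments, rather than the initial condition increments, that serve as the control variables.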
Funding: jointly supported by the Opening Fund of the Key Laboratory of Low-grade Energy Utilization Technologies and Systems of the Ministry of Education of China (Chongqing University) (LLEUTS-202305); the Opening Fund of the State Key Laboratory of Green Building in Western China (LSKF202316); the Open Foundation of the Anhui Province Key Laboratory of Intelligent Building and Building Energy-saving (IBES2022KF11); "The 14th Five-Year Plan" Hubei Provincial Advantaged Characteristic Disciplines (Groups) Project of Wuhan University of Science and Technology (2023D0504, 2023D0501); the National Natural Science Foundation of China (51906181); the 2021 Construction Technology Plan Project of Hubei Province (2021-83); and the Science and Technology Project of Guizhou Province: Integrated Support of Guizhou [2023] General 393.
Funding: supported by NOAA's Hurricane Forecast Improvement Project.