Funding: Supported by NOAA's Hurricane Forecast Improvement Project.
Abstract: The four-dimensional variational (4D-Var) data assimilation systems used in most operational and research centers take initial-condition increments as control variables and adjust them to find the optimal analysis solution. This approach can create discontinuities in the analysis fields and produce undesirable spin-ups and spin-downs. This study explores using incremental analysis updates (IAU) in 4D-Var to reduce these analysis discontinuities. IAU-based 4D-Var has almost the same mathematical formulation as conventional 4D-Var if the initial-condition increments are replaced with time-integrated increments as control variables. The IAU technique was implemented in the NASA/GSFC 4D-Var prototype and compared against a control run without IAU. The results show that the initial precipitation spikes were removed and that other discontinuities were also reduced, especially in the analysis of surface temperature.
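As a rough illustration of the IAU idea only (not the NASA/GSFC 4D-Var prototype; the toy dynamics, variable names, and step counts below are assumptions), the sketch contrasts inserting an analysis increment all at once at the initial time with spreading it over the assimilation window as a constant forcing:

```python
import numpy as np

def forecast_step(x, dt):
    # Placeholder dynamics; a real 4D-Var system would use the full forecast model.
    return x + dt * np.sin(x)

def run_window(x0, increment, n_steps, dt, use_iau):
    """Integrate over the assimilation window, inserting the analysis increment either
    all at once at the initial time (conventional) or spread evenly over the window (IAU)."""
    x = x0.copy()
    if not use_iau:
        x = x + increment                 # abrupt insertion at t = 0
    for _ in range(n_steps):
        if use_iau:
            x = x + increment / n_steps   # constant IAU forcing over the window
        x = forecast_step(x, dt)
    return x

x0 = np.array([0.5, 1.0])                 # hypothetical model state
dx = np.array([0.1, -0.2])                # hypothetical analysis increment
print(run_window(x0, dx, n_steps=24, dt=0.1, use_iau=False))
print(run_window(x0, dx, n_steps=24, dt=0.1, use_iau=True))
```

The gradual forcing avoids the abrupt jump at the start of the window, which is the kind of shock associated with spin-up artifacts such as the initial precipitation spikes mentioned above.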
Abstract: Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite their great successes, their training relies on a large amount of data prepared in advance, which is often impractical in real-world applications involving streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning faces the challenge of catastrophic forgetting: performance on previous tasks degrades drastically after a new task is learned. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained on a sequence of domains. Specifically, two components are applied: data translation based on transfer learning and knowledge distillation. The former translates a portion of the new data to reconstruct part of the data distribution of the old domain; the latter uses the old model as a teacher to guide the new model. Experimental results on three datasets show that combining these two methods effectively alleviates catastrophic forgetting.
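As a hedged illustration of the knowledge-distillation component only (the data-translation module is not reproduced, and the temperature, weighting, and names below are assumptions rather than the paper's settings), a standard teacher-guided objective in PyTorch might look like:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style knowledge distillation combined with cross-entropy on new-domain
    labels; only the teacher-guidance idea described in the abstract is illustrated."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)          # teacher's softened predictions
    log_soft_student = F.log_softmax(student_logits / T, dim=1)  # student's softened log-probs
    kd = F.kl_div(log_soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)                 # supervision from new labels
    return alpha * kd + (1.0 - alpha) * ce

# Hypothetical usage with random logits for a 10-class problem.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))
```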
Abstract: It is nontrivial to maintain discovered frequent query patterns in a real XML DBMS, because the transaction database of queries may be updated frequently, and such updates may not only invalidate some existing frequent query patterns but also generate new ones. In this paper, two incremental updating algorithms, FUX-QMiner and FUXQMiner, are proposed for the efficient maintenance of discovered frequent query patterns and the generation of new frequent query patterns when new XML queries are added to the database. Experimental results from our implementation show that the proposed algorithms perform well.
Keywords: XML; frequent query pattern; incremental algorithm; data mining
CLC number: TP 311
Foundation item: Supported by the Youthful Foundation for Scientific Research of the University of Shanghai for Science and Technology
Biography: PENG Dun-lu (1974-), male, Associate Professor, Ph.D.; research directions: data mining, Web services and their applications, peer-to-peer computing
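The sketch below is only a toy illustration of incremental maintenance: counts gathered from earlier transactions are retained so that an update needs to scan just the newly added queries. It treats each query pattern as an opaque identifier, whereas FUX-QMiner and FUXQMiner mine tree-structured XML query patterns; their actual algorithms are not reproduced here, and all names are hypothetical.

```python
from collections import Counter

class IncrementalPatternMiner:
    """Toy incremental frequent-pattern maintenance over a growing transaction database."""
    def __init__(self, min_support):
        self.min_support = min_support
        self.counts = Counter()
        self.n_transactions = 0

    def add_queries(self, new_queries):
        # Scan only the newly added queries; previously accumulated counts are reused.
        for q in new_queries:
            self.counts[q] += 1
            self.n_transactions += 1

    def frequent_patterns(self):
        threshold = self.min_support * self.n_transactions
        return {p for p, c in self.counts.items() if c >= threshold}

miner = IncrementalPatternMiner(min_support=0.3)
miner.add_queries(["//book/title", "//book/title", "//author"])
miner.add_queries(["//author", "//book/title"])   # incremental update
print(miner.frequent_patterns())
```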
Funding: Supported by the Fundamental Research Funds for the Central Universities (No. 2016083).
Abstract: A fault detection method based on incremental locally linear embedding (LLE) is presented to improve fault detection accuracy for satellites with telemetry data. Since the conventional LLE algorithm cannot handle incremental learning, an incremental LLE method is proposed to acquire the low-dimensional features embedded in the high-dimensional space. Telemetry data of Satellite TX-I are then analyzed, and fault detection is performed by analyzing the feature information extracted from the telemetry data with the statistical indexes T2 and squared prediction error (SPE). Simulation results verify the fault detection scheme.
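As a simplified, hedged sketch of the monitoring statistics only (an ordinary PCA projection stands in for the incremental LLE feature extraction, which is not reproduced; the data and dimensions are hypothetical), the T2 and SPE indexes could be computed roughly as follows:

```python
import numpy as np

def monitoring_statistics(X_train, x_new, n_components=3):
    """Illustrative Hotelling T2 and SPE statistics from a linear projection of telemetry."""
    mean = X_train.mean(axis=0)
    Xc = X_train - mean
    # Principal directions via SVD of the centered training data.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T                       # loading matrix (features x components)
    var = (s[:n_components] ** 2) / (len(X_train) - 1)
    xc = x_new - mean
    t = xc @ P                                    # scores of the new telemetry sample
    T2 = float(np.sum(t ** 2 / var))              # T2: variation inside the feature subspace
    residual = xc - P @ t                         # part not explained by the model
    spe = float(residual @ residual)              # SPE: squared reconstruction error
    return T2, spe

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                     # hypothetical telemetry training set
print(monitoring_statistics(X, rng.normal(size=8)))
```

A sample is flagged as a potential fault when either statistic exceeds its control limit estimated from normal-operation training data.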
Funding: Supported by the National High-Tech Research and Development Program of China (2006AA12A106).
Abstract: A new incremental clustering framework is presented, the basis of which is induction as inverted deduction. Induction is inherently risky because it is not truth-preserving. If clustering is viewed as an induction process, the key to building a valid clustering is to minimize the risk of clustering. From the viewpoint of modal logic, a clustering can be described by Kripke frames and Kripke models that are reflexive and symmetric, so its properties can be characterized syntactically by system B. The risk of clustering can thus be calculated through the deduction relation of system B, and a proximity induction theorem is described. Since the proposed framework imposes no additional restrictions on the clustering algorithm, it is a universal framework: an incremental clustering algorithm can easily be constructed from any given non-incremental clustering algorithm. The experiments show that the lower the a priori risk is, the more effective the framework is, demonstrating that it is generally valid.
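For readers less familiar with modal logic, the standard textbook correspondences behind the choice of system B are recalled below; this is general background only, not the paper's proximity induction theorem or risk calculus. Reflexive accessibility relations validate axiom T, symmetric ones validate axiom B, and system B is K + T + B.

```latex
\begin{align*}
\mathbf{K}:&\quad \Box(p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q) \\
\mathbf{T}:&\quad \Box p \rightarrow p && \text{(valid on a frame iff } R \text{ is reflexive)} \\
\mathbf{B}:&\quad p \rightarrow \Box\Diamond p && \text{(valid on a frame iff } R \text{ is symmetric)}
\end{align*}
```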
Funding: Jointly supported by the Opening Fund of the Key Laboratory of Low-grade Energy Utilization Technologies and Systems of the Ministry of Education of China (Chongqing University) (LLEUTS-202305), the Opening Fund of the State Key Laboratory of Green Building in Western China (LSKF202316), the Open Foundation of the Anhui Province Key Laboratory of Intelligent Building and Building Energy-saving (IBES2022KF11), the “The 14th Five-Year Plan” Hubei Provincial Advantaged Characteristic Disciplines (Groups) Project of Wuhan University of Science and Technology (2023D0504, 2023D0501), the National Natural Science Foundation of China (51906181), the 2021 Construction Technology Plan Project of Hubei Province (2021-83), and the Science and Technology Project of Guizhou Province: Integrated Support of Guizhou [2023] General 393.
Abstract: The shortage of available modelling data makes it difficult to guarantee the performance of data-driven building energy prediction (BEP) models for both newly built buildings and existing information-poor buildings. Both knowledge transfer learning (KTL) and data incremental learning (DIL) can address the data shortage issue of such buildings. For new-building scenarios with continuous data accumulation, the performance of BEP models has not been fully investigated with the data accumulation dynamics taken into account. DIL, which can learn dynamic features from accumulated data, adapt to the developing trend of new-building time-series data, and extend a BEP model's knowledge, has rarely been studied. Previous studies have shown that the performance of KTL models trained with fixed data can be further improved in scenarios with dynamically changing data. Hence, this study proposes an improved transfer learning cross-building BEP strategy that is continuously updated in a coarse data incremental (CDI) manner. The hybrid KTL-DIL strategy (LSTM-DANN-CDI) uses a domain adversarial neural network (DANN) for KTL and long short-term memory (LSTM) as the baseline BEP model. A performance evaluation is conducted to systematically quantify the effectiveness and applicability of KTL and the improved KTL-DIL. Real-world data from 36 buildings of six types are adopted to evaluate the performance of KTL and KTL-DIL in data-driven BEP tasks, considering factors such as the model increment time interval and the available target and source building data volumes. Compared with LSTM, the results indicate that KTL (LSTM-DANN) and the proposed KTL-DIL (LSTM-DANN-CDI) significantly improve BEP performance for new buildings with limited data. Compared with the pure KTL strategy LSTM-DANN, the improved KTL-DIL strategy LSTM-DANN-CDI delivers better prediction performance, with an average performance improvement ratio of 60%.
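As a hedged sketch of the coarse data-incremental idea only (the DANN domain-adversarial component and the paper's architecture, hyperparameters, and increment schedule are not reproduced; all names and sizes below are illustrative), a transferred LSTM predictor could be periodically fine-tuned on each newly accumulated block of target-building data:

```python
import torch
from torch import nn

class LSTMRegressor(nn.Module):
    """Minimal LSTM baseline for building-energy prediction (illustrative only)."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # predict from the last time step

def coarse_incremental_update(model, new_batches, lr=1e-3, epochs=5):
    """Coarse data-incremental step: whenever another block of target-building data has
    accumulated (e.g., a further week or month), fine-tune the transferred model on it."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for x, y in new_batches:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Hypothetical usage with random data: 16 windows of 24 hourly steps and 5 features.
model = LSTMRegressor(n_features=5)
batch = (torch.randn(16, 24, 5), torch.randn(16, 1))
model = coarse_incremental_update(model, [batch], epochs=1)
```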