The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process, which directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in a converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated from the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including the extreme learning machine, the back-propagation neural network, and the DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m^(3) is 96.67%; the determination coefficient (R^(2)) and root mean square error (RMSE) are 0.6984 and 150.03 m^(3), respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R^(2) and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
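The final two steps of the abstract's three-step method can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the fusion weight `w` and the function names are assumptions, since the abstract does not specify how the OBM and DNN outputs are integrated.

```python
def fuse_predictions(v_obm, v_dnn, w=0.5):
    """Blend the mechanism-model (OBM) and network-model (DNN) estimates of
    oxygen consumption volume (m^3). The weight w is a placeholder; the
    paper's actual integration scheme is not given in the abstract."""
    return w * v_obm + (1.0 - w) * v_dnn

def blowing_time(volume_m3, supply_intensity_m3_per_min):
    """Step 3 of the abstract: blowing time = oxygen consumption volume
    divided by the oxygen supply intensity of the heat."""
    return volume_m3 / supply_intensity_m3_per_min

v = fuse_predictions(9800.0, 10200.0)   # -> 10000.0 m^3
t = blowing_time(v, 650.0)              # about 15.4 min
```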
In the Industrial Internet of Things (IIoT), sensors generate time series data to reflect the working state. When systems are attacked, timely identification of outliers in the time series is critical to ensure security. Although many anomaly detection methods have been proposed, the temporal correlation of the time series from the same sensor and the state (spatial) correlation between different sensors are rarely considered simultaneously. Owing to the superior capability of the Transformer in learning time series features, this paper proposes a time series anomaly detection method based on a spatial-temporal network and an improved Transformer. Additionally, methods based on graph neural networks typically include a graph structure learning module and an anomaly detection module, which are interdependent. In the initial phase of training, however, neither module has reached an optimal state, so their performance may influence each other. This makes it hard for end-to-end training to effectively direct the learning trajectory of each module. The interdependence between the modules, coupled with the initial instability, may prevent the model from finding the optimal solution during training, resulting in unsatisfactory results. We therefore introduce an adaptive graph structure learning method to obtain the optimal model parameters and graph structure. Experiments on two publicly available datasets demonstrate that the proposed method achieves better anomaly detection results than other methods.
The composite time scale (CTS) provides an accurate and stable time-frequency reference for modern science and technology. A conventional CTS always features a centralized network topology, meaning the CTS is accompanied by a local master clock. This largely restricts the stability and reliability of the CTS. We simulate this restriction and analyze the influence of the master clock on the CTS. The results show that the CTS's long-term stability is positively related to that of the master clock, up to the region dominated by the frequency drift of the H-maser (averaging times longer than ~10^(5) s). To address this restriction, a real-time clock network is utilized. Based on the network, a real-time CTS referenced to a stable remote master clock is achieved. An experiment comparing two real-time CTSs, referenced to a local and a remote master clock respectively, reveals that under open-loop steering the stability of the CTS is improved by referencing a remote, more stable master clock instead of a local, less stable one. In this way, the CTS can be referenced to the most stable master clock within the network in real time, whether local or remote, making democratic polycentric timekeeping possible.
Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, causal convolution causes the receptive fields of outputs to concentrate on the earlier part of the input sequence, so recent input information is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to differencing. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND reduces the prediction mean squared error by 7.3% and saves runtime compared with state-of-the-art models and the vanilla TCN.
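The difference-then-compensate idea behind DCM can be sketched at its simplest: first-order differencing removes the level of the sequence (reducing distribution shift), and the compensation step restores the removed information by cumulatively summing from a known anchor. This is only an illustration of the underlying identity; the paper's DCM operates inside the network, and the function names here are invented for the sketch.

```python
import numpy as np

def difference(x):
    """'Difference' step: first-order differencing removes the level of
    the sequence, reducing distribution shift between and within series."""
    return np.diff(x)

def compensate(pred_diffs, anchor):
    """'Compensation' step: restore the information removed by differencing
    by cumulatively summing the (predicted) differences from an anchor."""
    return anchor + np.cumsum(pred_diffs)

x = np.array([10.0, 12.0, 15.0, 19.0])
d = difference(x)                 # -> [2., 3., 4.]
restored = compensate(d, x[0])    # -> [12., 15., 19.], i.e. x[1:]
```

Compensating exact differences reconstructs the original series, which is why no information is lost when the two steps are paired.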
Long-term time series forecasting stands as a crucial research domain within the realm of automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and necessitates substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory (LSTM), a variant of recurrent neural networks that harnesses memory cells and gating mechanisms to facilitate long-term time series prediction. The forecasting accuracy of particular neural networks and traditional models can degrade significantly when addressing long-term time-series tasks. Our research demonstrates that this approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model in time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we evaluated both ARIMA and the proposed method on simulated and real time-series data over both short and long horizons. Furthermore, our findings indicate its superiority over alternative network architectures, including fully connected neural networks, convolutional neural networks, and non-pooling convolutional neural networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting, and can be widely applied to various domains, particularly business and finance.
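The memory cells and gating mechanisms the abstract credits can be made concrete with one step of a standard LSTM cell in numpy. This is a generic textbook LSTM update, not the article's implementation; the weight layout (four stacked gate blocks) is one common convention.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell: forget/input/output gates plus a
    candidate memory, with weights stacked as four blocks of size H."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    f = sigmoid(z[0:H])          # forget gate: how much old memory to keep
    i = sigmoid(z[H:2*H])        # input gate: how much new memory to write
    o = sigmoid(z[2*H:3*H])      # output gate: how much memory to expose
    g = np.tanh(z[3*H:4*H])      # candidate memory content
    c_new = f * c + i * g        # memory cell update (enables long-term carry)
    h_new = o * np.tanh(c_new)   # hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 4
h, c = np.zeros(H), np.zeros(H)
W = rng.normal(size=(4*H, D))
U = rng.normal(size=(4*H, H))
b = np.zeros(4*H)
h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

The additive cell update `c_new = f * c + i * g` is what lets gradients flow over long horizons, which is the property the article relies on for long-term forecasting.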
The prediction of Multivariate Time Series (MTS) explores the interrelationships among variables at historical moments, extracts their relevant characteristics, and is widely used in finance, weather, complex industries, and other fields; it is also important for constructing digital twin systems. However, existing methods do not take full advantage of the potential properties of the variables, which results in poor prediction accuracy. In this paper, we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of the unknown spatial-temporal structure, we construct the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer. Specifically, we fuse the spatial-temporal graph based on the interrelationship of spatial graphs, and simultaneously construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods. Subsequently, to overcome the insufficient extraction of disordered correlation features, we construct the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module. The module forces the reordering of disordered temporal, spatial, and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial, and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance prediction accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
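Building an adaptive adjacency matrix from node embeddings, as the AFSTG layer does, is commonly realized in the literature as a softmax over rectified embedding similarities. The sketch below shows that common form under the assumption that the paper uses something similar; the exact formulation in AFSTGCN may differ.

```python
import numpy as np

def adaptive_adjacency(E1, E2):
    """Learnable adjacency from two node-embedding matrices (N x d):
    A = row-softmax(relu(E1 @ E2.T)), a common adaptive-graph construction
    in spatial-temporal GCNs. E1 and E2 would be trained end to end."""
    S = np.maximum(E1 @ E2.T, 0.0)                # relu similarity scores
    S = np.exp(S - S.max(axis=1, keepdims=True))  # numerically stable softmax
    return S / S.sum(axis=1, keepdims=True)       # rows sum to 1

rng = np.random.default_rng(1)
A = adaptive_adjacency(rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
```

Because each row is a normalized weight distribution, `A` can be used directly as the propagation matrix in a graph convolution `A @ X @ W`.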
Ocean temperature is an important physical variable in marine ecosystems, and ocean temperature prediction is an important research objective in ocean-related fields. Currently, one of the commonly used approaches to ocean temperature prediction is data-driven, but research on this approach is mostly limited to the sea surface, with few studies on the prediction of internal ocean temperature. Existing graph neural network-based methods usually use predefined graphs or learned static graphs, which cannot capture the dynamic associations among data. In this study, we propose a novel dynamic spatiotemporal graph neural network (DSTGN) to predict three-dimensional ocean temperature (3D-OT), which combines static graph learning and dynamic graph learning to automatically mine two unknown dependencies between sequences based on the original 3D-OT data without prior knowledge. Temporal and spatial dependencies in the time series are then captured using temporal and graph convolutions. We also integrated dynamic graph learning, static graph learning, graph convolution, and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series grid data. We conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis, with data covering the vertical variation of temperature from the sea surface to 1000 m below the surface. We compared five mainstream models commonly used for ocean temperature prediction, and the results show that the proposed method achieves the best prediction results at all prediction scales.
Tunnel boring machines (TBMs) have been widely utilised in tunnel construction due to their high efficiency and reliability. Accurately predicting TBM performance can improve project time management, cost control, and risk management. This study aims to use deep learning to develop real-time models for predicting the penetration rate (PR). The models are built using data from the Changsha metro project, and their performance is evaluated using unseen data from the Zhengzhou metro project. In one-step forecasting, the predicted penetration rate follows the trend of the measured penetration rate in both training and testing. The autoregressive integrated moving average (ARIMA) model is compared with the recurrent neural network (RNN) model. The results show that univariate models, which only consider the historical penetration rate itself, perform better than multivariate models that take into account multiple geological and operational parameters (GEO and OP). Next, an RNN variant combining the time series of the penetration rate with the last-step geological and operational parameters is developed, and it performs better than the other models. A sensitivity analysis shows that the penetration rate is the most important parameter, while other parameters have a smaller impact on time series forecasting. It is also found that smoothed data are easier to predict with high accuracy; nevertheless, over-simplified data can lose the real characteristics of the time series. In conclusion, the RNN variant can accurately predict the next-step penetration rate, and data smoothing is crucial in time series forecasting. This study provides practical guidance for TBM performance forecasting in practical engineering.
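A univariate one-step forecast of the kind the study compares against, plus the moving-average smoothing it discusses, can be sketched with a least-squares autoregression. This is a generic stand-in, not the study's ARIMA or RNN models; the function names are illustrative.

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) with intercept: predict the next value from
    the last p values (a minimal univariate one-step forecaster)."""
    n = len(series)
    X = np.column_stack([series[i:n - p + i] for i in range(p)])
    y = series[p:]
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def forecast_next(series, coef, p):
    """One-step-ahead forecast from the fitted coefficients."""
    return coef[0] + coef[1:] @ series[-p:]

def smooth(series, k=3):
    """Moving-average smoothing, which the study found eases prediction
    but can over-simplify the series if k is too large."""
    return np.convolve(series, np.ones(k) / k, mode="valid")

s = np.arange(10.0)                       # a perfectly linear PR series
pred = forecast_next(s, fit_ar(s, 2), 2)  # -> 10.0 (exact for a line)
```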
This paper presents a new method for finding the natural frequency set of a linear time-invariant network. The derivation and proof of a common equation are described. For the first time, this common equation correlates the natural frequencies of an nth-order network with the n-port parameters. The equation is simple and dual in form and clear in its physical meaning. The solution procedure is simplified and standardized, and it does not cause the loss of roots. The common equation should find wide use and lends itself to systematization.
As one of the most widespread renewable energy sources, wind energy is now an important part of the power system. Accurate and appropriate wind speed forecasting has an essential impact on wind energy utilisation. However, due to the stochastic and uncertain nature of wind energy, more accurate forecasting is necessary for its more stable and safer utilisation. This paper proposes a Legendre multiwavelet-based neural network model for non-linear wind speed prediction. It combines the excellent properties of Legendre multiwavelets with the self-learning capability of neural networks, and has rigorous mathematical theory support. It learns input-output data pairs and shares weights within divided subintervals, which can greatly reduce computing costs. We explore the effectiveness of Legendre multiwavelets as an activation function, and the model is successfully applied to wind speed prediction. In addition, the application of Legendre multiwavelet neural networks in a hybrid model in decomposition-reconstruction mode to wind speed prediction problems is also discussed. Numerical results on real data sets show that the proposed model is able to achieve optimal performance and high prediction accuracy. In particular, the model shows more stable performance in multi-step prediction, illustrating its superiority.
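The Legendre polynomial basis that underlies Legendre multiwavelets is cheap to evaluate via the standard three-term recurrence. The sketch below shows that recurrence only; how the paper assembles the polynomials into multiwavelet activations on subintervals is not specified in the abstract.

```python
import numpy as np

def legendre_basis(x, degree):
    """Evaluate Legendre polynomials P_0..P_degree on x in [-1, 1] using
    the recurrence (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    P = [np.ones_like(x), x]
    for n in range(1, degree):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return np.stack(P[:degree + 1])   # shape: (degree+1, len(x))

x = np.linspace(-1.0, 1.0, 5)
B = legendre_basis(x, 3)              # rows: P0, P1, P2, P3
```

Because the recurrence is linear in `x`, the whole basis costs O(degree · len(x)), which is consistent with the abstract's claim of low computing cost for such activations.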
With the vigorous development of the automobile industry, in-vehicle networks are constantly upgraded to meet the data transmission requirements of emerging applications. The main requirements are low latency and determinism, especially for autonomous driving. Time-sensitive networking (TSN), based on Ethernet, offers a possible solution to these requirements. Previous surveys usually investigated TSN from a general perspective, covering various application fields. In this paper, we focus on the application of TSN to in-vehicle networks. We discuss all related TSN standards specified by the IEEE 802.1 working group to date, and we overview and analyze recent literature on various aspects of TSN for automotive applications, including synchronization, resource reservation, scheduling, determinism, software, and hardware. Application scenarios of TSN for in-vehicle networks are analyzed one by one. Since TSN for in-vehicle networks is still at an early stage, this paper also gives insights on open issues, future research directions, and possible solutions.
Time series forecasting and analysis are widely used in many fields and application scenarios. Time series historical data reflects change patterns and trends, which can serve applications and decision-making in each scenario to a certain extent. In this paper, we select the time series prediction problem in the atmospheric environment scenario for applied research. For data support, we obtained data on nearly 3500 vehicles in some cities in China from the Runwoda Research Institute, focusing on the major pollutant emission data of non-road mobile machinery and high-emission vehicles in Beijing and Bozhou, Anhui Province, to build the dataset and conduct time series prediction experiments. This paper proposes a P-gLSTNet model and uses the Autoregressive Integrated Moving Average (ARIMA) model, long short-term memory (LSTM), and Prophet to predict and compare emissions in a future period. The experiments are validated on four public data sets and one self-collected data set, and the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) are selected as the evaluation metrics. The experimental results show that the proposed P-gLSTNet fusion model yields smaller prediction errors, outperforms the baseline methods, and is more suitable for the prediction of time-series data in this scenario.
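The three evaluation metrics named above have standard definitions, sketched here for reference (MAPE assumes no zero targets):

```python
import numpy as np

def mae(y, yhat):
    """Mean absolute error."""
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    """Root mean square error: penalises large errors more than MAE."""
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat):
    """Mean absolute percentage error, in percent; undefined if y has zeros."""
    return np.mean(np.abs((y - yhat) / y)) * 100.0

y = np.array([100.0, 200.0, 400.0])
yhat = np.array([110.0, 190.0, 400.0])
# mae -> 6.67, rmse -> 8.16, mape -> 5.0
```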
A deep-learning-based framework is proposed to predict the impedance response and underlying electrochemical behavior of the reversible protonic ceramic cell (PCC) across a wide variety of operating conditions. Electrochemical impedance spectra (EIS) of PCCs were first acquired under a variety of operating conditions to provide a dataset containing 36 sets of EIS spectra for the model. An artificial neural network (ANN) was then trained to model the relationship between the cell operating condition and the EIS response. Finally, ANN model-predicted EIS spectra were analyzed by the distribution of relaxation times (DRT) and compared to DRT spectra obtained from the experimental EIS data, enabling an assessment of the accumulated errors of the predicted EIS data versus the predicted DRT. We show that in certain cases, although the R^(2) of the predicted EIS curve may be >0.98, the R^(2) of the predicted DRT may be as low as ~0.3. This can lead to an inaccurate ANN prediction of the underlying time-resolved electrochemical response, even though the apparent accuracy as evaluated from the EIS prediction may seem acceptable. After adjusting the parameters of the ANN framework, the average R^(2) of the DRTs derived from the predicted EIS improved to 0.9667. Thus, we demonstrate that a properly tuned ANN model can be used as an effective tool to predict not only the EIS but also the DRT of complex electrochemical systems.
Noise and time delay are inevitable in real-world networks. In this article, the framework of the master stability function is generalized to stochastic complex networks with time-delayed coupling. The focus is on the effects of noise, time delay, and their interactions on network synchronization. It is found that when there is time-delayed coupling in the network and noise diffuses through all state variables of the nodes, appropriately increasing the noise intensity can effectively improve the network synchronizability; otherwise, noise can be either beneficial or harmful. For stochastic networks, large time delays lead to desynchronization. These findings provide valuable references for designing optimal complex networks in practical applications.
Time series classification (TSC) has attracted much attention in time series data mining and has been applied in various fields. With the success of deep learning (DL) in computer vision, people have started to use deep learning to tackle TSC tasks. Quantum neural networks (QNN) have recently demonstrated their superiority over traditional machine learning in areas such as image processing and natural language processing, but research using quantum neural networks to handle TSC tasks has not received enough attention. Therefore, we propose a learning framework based on multiple imaging and a hybrid QNN (MIHQNN) for TSC tasks. We investigate the possibility of converting 1D time series to 2D images and classifying the converted images using a hybrid QNN. We explore the differences between MIHQNN based on single time series imaging and MIHQNN based on the fusion of multiple imaging methods. Four quantum circuits were also selected and designed to study their impact on TSC tasks. We tested our method on several standard datasets and achieved significant results compared with several current TSC methods, demonstrating the effectiveness of MIHQNN. This research highlights the potential of applying quantum computing to TSC and provides theoretical and experimental background for future research.
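One widely used way to convert a 1D series into a 2D image is the Gramian Angular Summation Field; the abstract does not name its imaging methods, so this is offered only as a representative example of the technique class.

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1D series as a 2D image: rescale to [-1, 1], map each value
    to an angle phi = arccos(x), and form G_ij = cos(phi_i + phi_j)."""
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))               # angular encoding
    return np.cos(phi[:, None] + phi[None, :])           # (n, n) image

img = gramian_angular_field(np.sin(np.linspace(0.0, np.pi, 16)))
```

The resulting symmetric image preserves temporal dependencies along its diagonal bands, which is what lets 2D classifiers (here, a hybrid QNN) exploit them.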
At present, the interpretation of regional economic development (RED) has changed from a simple evaluation of economic growth to a focus on economic growth together with the optimization of economic structure, the improvement of economic relations, and institutional innovation. This article takes the RED trend as the research object and constructs an RED index for theoretical analysis. The paper then uses an attention mechanism based on digital twins and a time series network model to verify the actual data. Finally, the regional economy is predicted according to the theoretical model. The specific research work mainly includes the following aspects: 1) introducing the state of research on time series networks and economic forecasting at home and abroad; 2) introducing the basic principles and structures of long short-term memory (LSTM) and convolutional neural networks (CNN), constructing an improved CNN-LSTM model combined with the attention mechanism, and building a regional economic prediction index system; 3) selecting the best parameters of the model through experiments and using the trained model for simulation-based prediction. The results show that the attention-based CNN-LSTM model proposed in this paper achieves high accuracy in predicting regional economies.
Time synchronization is one of the base techniques in wireless sensor networks (WSNs). This paper proposes a novel time synchronization protocol: a robust consensus-based algorithm that tolerates transmission delay and packet loss. It first compensates for transmission delay and packet loss, and then estimates clock skew and clock offset in two steps. Simulation and experiment results show that the proposed protocol can keep the synchronization error below 2 μs in a grid network of 10 nodes or a random network of 90 nodes. Moreover, the synchronization accuracy of the proposed protocol remains constant when the WSN operates for up to a month.
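The building block behind delay compensation in most WSN synchronization schemes is the classic two-way timestamp exchange (as in TPSN/NTP). The sketch below shows that single-exchange step under a symmetric-link assumption; it is background for the delay-compensation idea, not the paper's consensus protocol.

```python
def offset_and_delay(t1, t2, t3, t4):
    """Two-way exchange: node A sends at t1 (A's clock), B receives at t2
    and replies at t3 (B's clock), A receives at t4. Assuming a symmetric
    link, B's clock offset and the one-way delay follow directly."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: B's clock runs 5 units ahead of A's; one-way delay is 2 units.
off, d = offset_and_delay(t1=100, t2=107, t3=110, t4=107)
# off -> 5.0, d -> 2.0
```

Repeating such exchanges over time gives the (offset, timestamp) pairs from which clock skew can then be estimated, e.g. by a line fit, which matches the two-step skew/offset estimation the abstract describes.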
Landslides are destructive natural disasters that cause catastrophic damage and loss of life worldwide. Accurately predicting landslide displacement enables effective early warning and risk management. However, the limited availability of on-site measurement data has been a substantial obstacle in developing data-driven models, such as state-of-the-art machine learning (ML) models. To address these challenges, this study proposes a data augmentation framework that uses generative adversarial networks (GANs), a recent advance in generative artificial intelligence (AI), to improve the accuracy of landslide displacement prediction. The framework provides effective data augmentation for limited datasets. A recurrent GAN model, RGAN-LS, is proposed, specifically designed to generate realistic synthetic multivariate time series that mimic the characteristics of real landslide on-site measurement data. A customized moment-matching loss is incorporated in addition to the adversarial loss during the training of RGAN-LS to capture the temporal dynamics and correlations of real time series data. The synthetic data generated by RGAN-LS are then used to enhance the training of long short-term memory (LSTM) networks and particle swarm optimization-support vector machine (PSO-SVM) models for landslide displacement prediction tasks. Results on two landslides in the Three Gorges Reservoir (TGR) region show a significant improvement in LSTM model prediction performance when trained on augmented data. For instance, in the case of the Baishuihe landslide, the average root mean square error (RMSE) improves by 16.11% and the mean absolute error (MAE) by 17.59%. More importantly, the model's responsiveness during mutational stages is enhanced for early warning purposes. However, the static PSO-SVM model sees only marginal gains compared with recurrent models such as LSTM. Further analysis indicates that an optimal synthetic-to-real data ratio (50% in the illustrated cases) maximizes the improvements. This also demonstrates the robustness and effectiveness of supplementing training data for dynamic models to obtain better results. By using this powerful generative AI approach, RGAN-LS can generate high-fidelity synthetic landslide data. This is critical for improving the performance of advanced ML models in predicting landslide displacement, particularly when training data are limited. Additionally, this approach has the potential to expand the use of generative AI in geohazard risk management and other research areas.
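A moment-matching loss of the kind added to the adversarial loss can be sketched as a penalty on the gap between the per-time-step mean and variance of real and synthetic batches. The exact moments and weighting used in RGAN-LS are not given in the abstract, so this is a generic illustration.

```python
import numpy as np

def moment_matching_loss(real, synthetic):
    """Penalise mismatch between the first and second moments of real and
    synthetic batches, computed per time step (arrays: batch x time)."""
    mean_gap = np.mean((real.mean(axis=0) - synthetic.mean(axis=0)) ** 2)
    var_gap = np.mean((real.var(axis=0) - synthetic.var(axis=0)) ** 2)
    return mean_gap + var_gap

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(256, 30))   # real measurement batch
fake = rng.normal(0.5, 1.0, size=(256, 30))   # biased generator output
# moment_matching_loss(real, real) -> 0.0; loss against fake is clearly larger
```

In GAN training this term would be added to the generator's adversarial loss, pulling the generator's batch statistics toward those of the real measurement series.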
In recent times, real-time wireless networks have found applicability in several practical applications such as smart cities, healthcare, surveillance, and environmental monitoring. At the same time, proper localization of nodes in real-time wireless networks helps to improve their overall functioning. This study presents an Improved Metaheuristics-based Energy Efficient Clustering with Node Localization (IM-EECNL) approach for real-time wireless networks. The proposed IM-EECNL technique involves two major processes, namely node localization and clustering. Firstly, a Chaotic Water Strider Algorithm-based Node Localization (CWSANL) technique is used to determine the unknown positions of the nodes. Secondly, an Oppositional Archimedes Optimization Algorithm-based Clustering (OAOAC) technique is applied to accomplish energy efficiency in the network. The OAOAC technique derives a fitness function comprising residual energy, distance to cluster heads (CHs), distance to the base station (BS), and load. The performance of the IM-EECNL technique is validated with respect to localization and energy efficiency, and a wide-ranging comparative analysis highlights its improved performance over recent approaches, with a maximum packet delivery ratio (PDR) of 0.985.
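A fitness function over the four factors named in the abstract could take the weighted form sketched below. The weights, signs, and function name are all assumptions for illustration; the abstract does not give the OAOAC formulation.

```python
def cluster_fitness(residual_energy, d_to_ch, d_to_bs, load,
                    weights=(0.4, 0.2, 0.2, 0.2)):
    """Illustrative weighted fitness for cluster-head selection: reward
    residual energy, penalise distance to the cluster head, distance to
    the base station, and load. Higher is better."""
    a, b, c, d = weights
    return a * residual_energy - b * d_to_ch - c * d_to_bs - d * load

# A node with more energy, shorter distances, and less load scores higher.
good = cluster_fitness(0.9, 10.0, 50.0, 0.2)
bad = cluster_fitness(0.3, 40.0, 90.0, 0.8)
```

In a metaheuristic such as the Archimedes optimization algorithm, this scalar would be the objective each candidate clustering is scored against.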
Funding: financially supported by the National Natural Science Foundation of China (Nos. 51974023 and 52374321) and the State Key Laboratory of Advanced Metallurgy, University of Science and Technology Beijing, China (No. 41620007).
Fund: This work is partly supported by the National Key Research and Development Program of China (Grant No. 2020YFB1805403), the National Natural Science Foundation of China (Grant No. 62032002), and the 111 Project (Grant No. B21049).
Abstract: In the Industrial Internet of Things (IIoT), sensors generate time series data that reflect the working state. When the systems are attacked, timely identification of outliers in the time series is critical to ensure security. Although many anomaly detection methods have been proposed, the temporal correlation of the time series over the same sensor and the state (spatial) correlation between different sensors are rarely considered simultaneously. Owing to the superior capability of the Transformer in learning time series features, this paper proposes a time series anomaly detection method based on a spatial-temporal network and an improved Transformer. Additionally, methods based on graph neural networks typically include a graph structure learning module and an anomaly detection module, which are interdependent. However, in the initial phase of training, since neither module has reached an optimal state, their performance may influence each other. This makes it hard for end-to-end training to effectively direct the learning trajectory of each module. This interdependence between the modules, coupled with the initial instability, may prevent the model from finding the optimal solution during training, resulting in unsatisfactory results. We therefore introduce an adaptive graph structure learning method to obtain the optimal model parameters and graph structure. Experiments on two publicly available datasets demonstrate that the proposed method attains higher anomaly detection results than other methods.
Fund: Supported in part by the National Natural Science Foundation of China (Grant No. 61971259), the National Key R&D Program of China (Grant No. 2021YFA1402102), and the Tsinghua University Initiative Scientific Research Program.
Abstract: The composite time scale (CTS) provides an accurate and stable time-frequency reference for modern science and technology. A conventional CTS always features a centralized network topology, which means that the CTS is accompanied by a local master clock. This largely restricts the stability and reliability of the CTS. We simulate this restriction and analyze the influence of the master clock on the CTS. It proves that the CTS's long-term stability is positively related to that of the master clock, up to the region dominated by the frequency drift of the H-maser (averaging time longer than ~10^5 s). Aiming at this restriction, a real-time clock network is utilized. Based on the network, a real-time CTS referenced to a stable remote master clock is achieved. An experiment comparing two real-time CTSs, referenced to a local and a remote master clock respectively, reveals that under open-loop steering, the stability of the CTS is improved by referencing a remote and more stable master clock instead of a local and less stable one. In this way, with the proposed scheme, the CTS can be referenced to the most stable master clock within the network in real time, whether local or remote, making democratic polycentric timekeeping possible.
Fund: Supported by the National Key Research and Development Program of China (No. 2018YFB2101300), the National Natural Science Foundation of China (Grant No. 61871186), and the Dean's Fund of the Engineering Research Center of Software/Hardware Co-Design Technology and Application, Ministry of Education (East China Normal University).
Abstract: Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, causal convolution leads to the receptive fields of outputs being concentrated in the earlier part of the input sequence, so recent input information is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to the difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND can reduce prediction mean squared error by 7.3% and save runtime compared with state-of-the-art models and the vanilla TCN.
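The difference-and-compensation idea described above can be illustrated in miniature: difference the input to reduce distribution shift, forecast on the differences, then compensate by restoring the absolute level. The naive "persist the last difference" forecaster below is a stand-in assumption for the actual convolutional network:

```python
# Minimal, illustrative sketch of a difference-and-compensation scheme; the
# trivial last-difference forecaster is an assumed placeholder for the model.

def difference(seq):
    """First-order differencing: d[i] = seq[i+1] - seq[i]."""
    return [b - a for a, b in zip(seq, seq[1:])]

def forecast_with_compensation(seq, steps=3):
    d = difference(seq)
    last_diff = d[-1]            # stand-in model: persist the last difference
    level = seq[-1]
    out = []
    for _ in range(steps):
        level += last_diff       # compensate: add the level back to each step
        out.append(level)
    return out

print(forecast_with_compensation([1, 2, 4, 7], steps=3))  # [10, 13, 16]
```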
Abstract: Long-term time series forecasting stands as a crucial research domain within automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and necessitates substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory (LSTM), a variant of recurrent neural networks that harnesses memory cells and gating mechanisms to facilitate long-term time series prediction. The forecasting accuracy of particular neural networks and traditional models can degrade significantly when addressing long-term time-series tasks. Our research demonstrates that the proposed approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model in time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we have evaluated both ARIMA and the proposed method on simulated and real time-series data over both short and long terms. Furthermore, our findings indicate its superiority over alternative network architectures, including fully connected neural networks, convolutional neural networks, and non-pooling convolutional neural networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting and can be widely applied to various domains, particularly business and finance.
基金supported by the China Scholarship Council and the CERNET Innovation Project under grant No.20170111.
Abstract: Prediction for Multivariate Time Series (MTS) explores the interrelationships among variables at historical moments, extracts their relevant characteristics, and is widely used in finance, weather, complex industries, and other fields; it is also important for constructing digital twin systems. However, existing methods do not take full advantage of the potential properties of variables, which results in poor prediction accuracy. In this paper, we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of the unknown spatial-temporal structure, we construct the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer. Specifically, we fuse the spatial-temporal graph based on the interrelationship of spatial graphs. Simultaneously, we construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods. Subsequently, to overcome the insufficient extraction of disordered correlation features, we construct the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module. The module forces the reordering of disordered temporal, spatial, and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial, and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance prediction accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
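Building an adaptive adjacency matrix from node embeddings, as the abstract describes, is commonly done by forming pairwise embedding similarities and normalizing them row-wise. A hedged sketch of one such construction (A = softmax(relu(E1 E2^T))); the tiny fixed embeddings stand in for learned parameters, and this is not claimed to be AFSTGCN's exact formula:

```python
import math

# Illustrative sketch: an adaptive adjacency matrix from two sets of node
# embeddings, row-normalized with softmax. Embedding values are assumptions.

def adaptive_adjacency(e1, e2):
    n = len(e1)
    # relu of pairwise dot products between source and target embeddings
    scores = [[max(0.0, sum(a * b for a, b in zip(e1[i], e2[j])))
               for j in range(n)] for i in range(n)]
    adj = []
    for row in scores:                      # row-wise softmax normalization
        exps = [math.exp(s) for s in row]
        z = sum(exps)
        adj.append([v / z for v in exps])
    return adj

A = adaptive_adjacency([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
# each row of A sums to 1; self-similarity dominates for identity embeddings
```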
Fund: The National Key R&D Program of China under contract No. 2021YFC3101603.
Abstract: Ocean temperature is an important physical variable in marine ecosystems, and ocean temperature prediction is an important research objective in ocean-related fields. Currently, one of the commonly used approaches to ocean temperature prediction is data-driven, but research on this approach is mostly limited to the sea surface, with few studies on the prediction of internal ocean temperature. Existing graph neural network-based methods usually use predefined graphs or learned static graphs, which cannot capture the dynamic associations among data. In this study, we propose a novel dynamic spatiotemporal graph neural network (DSTGN) to predict three-dimensional ocean temperature (3D-OT), which combines static graph learning and dynamic graph learning to automatically mine two unknown dependencies between sequences based on the original 3D-OT data without prior knowledge. Temporal and spatial dependencies in the time series are then captured using temporal and graph convolutions. We also integrate dynamic graph learning, static graph learning, graph convolution, and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series grid data. We conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis, with data covering the vertical variation of temperature from the sea surface to 1000 m below. We compared five mainstream models commonly used for ocean temperature prediction, and the results show that our method achieves the best prediction results at all prediction scales.
Abstract: Tunnel boring machines (TBMs) have been widely utilised in tunnel construction due to their high efficiency and reliability. Accurately predicting TBM performance can improve project time management, cost control, and risk management. This study aims to use deep learning to develop real-time models for predicting the penetration rate (PR). The models are built using data from the Changsha metro project, and their performance is evaluated using unseen data from the Zhengzhou metro project. In one-step forecasting, the predicted penetration rate follows the trend of the measured penetration rate in both training and testing. The autoregressive integrated moving average (ARIMA) model is compared with the recurrent neural network (RNN) model. The results show that univariate models, which only consider the historical penetration rate itself, perform better than multivariate models that take into account multiple geological and operational parameters (GEO and OP). Next, an RNN variant combining the time series of the penetration rate with the last-step geological and operational parameters is developed, and it performs better than the other models. A sensitivity analysis shows that the penetration rate is the most important parameter, while other parameters have a smaller impact on time series forecasting. It is also found that smoothed data are easier to predict with high accuracy; nevertheless, over-simplified data can lose the real characteristics of the time series. In conclusion, the RNN variant can accurately predict the next-step penetration rate, and data smoothing is crucial in time series forecasting. This study provides practical guidance for TBM performance forecasting in practical engineering.
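The data-smoothing step the abstract highlights is typically a simple moving average over the raw penetration-rate series before forecasting. A minimal sketch; the window size and sample values are assumptions, not the study's actual preprocessing parameters:

```python
# Illustrative sketch: moving-average smoothing of a penetration-rate series.
# Window size and the sample data are assumed for demonstration only.

def moving_average(seq, window=3):
    """Trailing moving average; output has len(seq) - window + 1 points."""
    return [sum(seq[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(seq))]

pr = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]   # raw penetration rates (assumed)
sm = moving_average(pr, window=3)            # [11.0, 12.0, 12.0, 13.0]
```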
Abstract: This paper presents a new method for finding the natural frequency set of a linear time-invariant network. The derivation and proof of a common equation are described. For the first time, this common equation correlates the natural frequencies of an nth-order network with its n port parameters. The equation is simple and dual in form and clear in its physical meaning. The procedure for finding the solution is simplified and standardized, and it will not cause the loss of roots. The common equation should find wide use and lends itself to systematization.
基金funded by Fundamental and Advanced Research Project of Chongqing CSTC of China(No.cstc2019jcyj‐msxmX0386 and No.cstc2020jcyj‐msxmX0232)National Statistical Science Research Project(No.2020LY100).
Abstract: As one of the most widespread renewable energy sources, wind energy is now an important part of the power system. Accurate and appropriate wind speed forecasting has an essential impact on wind energy utilisation. However, due to the stochastic and uncertain nature of wind energy, more accurate forecasting is necessary for its more stable and safer utilisation. This paper proposes a Legendre multi-wavelet-based neural network model for non-linear wind speed prediction. It combines the excellent properties of Legendre multi-wavelets with the self-learning capability of neural networks, and it has rigorous mathematical theory support. It learns input-output data pairs and shares weights within divided subintervals, which can greatly reduce computing costs. We explore the effectiveness of Legendre multi-wavelets as an activation function. Meanwhile, the model is successfully applied to wind speed prediction. In addition, the application of Legendre multi-wavelet neural networks in a hybrid model in decomposition-reconstruction mode for wind speed prediction problems is also discussed. Numerical results on real data sets show that the proposed model is able to achieve optimal performance and high prediction accuracy. In particular, the model shows more stable performance in multi-step prediction, illustrating its superiority.
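The Legendre polynomials underlying the multi-wavelet basis above can be evaluated with the standard three-term recurrence (k+1)P_{k+1}(x) = (2k+1)x P_k(x) - k P_{k-1}(x). A small sketch of that recurrence; this is general mathematics, not the paper's network code:

```python
# Sketch: evaluating Legendre polynomials P_n(x) on [-1, 1] via the standard
# three-term recurrence, a building block of Legendre multi-wavelet bases.

def legendre(n, x):
    if n == 0:
        return 1.0
    p0, p1 = 1.0, x                     # P_0 and P_1
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

print(legendre(2, 0.5))  # P_2(0.5) = (3*0.25 - 1)/2 = -0.125
```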
Abstract: With the vigorous development of the automobile industry, in-vehicle networks are constantly being upgraded to meet the data transmission requirements of emerging applications. The main transmission requirements are low latency and certainty, especially for autonomous driving. Time-sensitive networking (TSN) based on Ethernet offers a possible solution to these requirements. Previous surveys usually investigated TSN from a general perspective, covering various application fields. In this paper, we focus on the application of TSN to in-vehicle networks. We discuss all related TSN standards specified by the IEEE 802.1 working group to date. We further overview and analyze recent literature on various aspects of TSN for automotive applications, including synchronization, resource reservation, scheduling, certainty, software, and hardware. Application scenarios of TSN for in-vehicle networks are analyzed one by one. Since TSN for in-vehicle networks is still at a very early stage, this paper also gives insights on open issues, future research directions, and possible solutions.
Fund: The Beijing Chaoyang District Collaborative Innovation Project (No. CYXT2013), the Beijing Municipal Science and Technology Key R&D Program - Capital Blue Sky Action Cultivation Project (Z19110900910000), "Research and Demonstration of High Emission Vehicle Monitoring Equipment System Based on Sensor Integration Technology" (Z19110000911003). This work was supported by the Academic Research Projects of Beijing Union University (No. ZK80202103).
Abstract: Time series forecasting and analysis are widely used in many fields and application scenarios. Time series historical data reflects change patterns and trends, which can serve applications and decision-making in each scenario to a certain extent. In this paper, we select the time series prediction problem in the atmospheric environment scenario for applied research. In terms of data support, we obtain data on nearly 3500 vehicles in several Chinese cities from the Runwoda Research Institute, focusing on major pollutant emission data of non-road mobile machinery and high-emission vehicles in Beijing and Bozhou, Anhui Province, to build the dataset and conduct time series prediction experiments. This paper proposes a P-gLSTNet model and uses the Autoregressive Integrated Moving Average (ARIMA) model, long short-term memory (LSTM), and Prophet to predict and compare emissions in the future period. The experiments are validated on four public data sets and one self-collected data set, and the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) are selected as the evaluation metrics. The experimental results show that the proposed P-gLSTNet fusion model predicts with less error, outperforms the backbone methods, and is more suitable for the prediction of time-series data in this scenario.
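The three evaluation metrics named above have standard definitions, sketched minimally here with illustrative values:

```python
import math

# Reference implementations of the evaluation metrics named in the abstract:
# MAE, RMSE, and MAPE. The sample values below are illustrative only.

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    # percentage error; assumes no zero values in y
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

y, yhat = [100.0, 200.0], [110.0, 190.0]
print(mae(y, yhat))   # 10.0
print(rmse(y, yhat))  # 10.0
print(mape(y, yhat))  # 7.5
```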
Fund: Funding from the National Natural Science Foundation of China (12172104, 52102226), the Shenzhen Science and Technology Innovation Commission, China (JCYJ20200109113439837), and the Stable Supporting Fund of Shenzhen, China (GXWD2020123015542700320200728114835006).
Abstract: A deep-learning-based framework is proposed to predict the impedance response and underlying electrochemical behavior of the reversible protonic ceramic cell (PCC) across a wide variety of operating conditions. Electrochemical impedance spectra (EIS) of PCCs were first acquired under a variety of operating conditions to provide a dataset containing 36 sets of EIS spectra for the model. An artificial neural network (ANN) was then trained to model the relationship between the cell operating condition and the EIS response. Finally, ANN model-predicted EIS spectra were analyzed by the distribution of relaxation times (DRT) and compared to DRT spectra obtained from the experimental EIS data, enabling an assessment of the accumulated errors of the predicted EIS data vs. the predicted DRT. We show that in certain cases, although the R^(2) of the predicted EIS curve may be >0.98, the R^(2) of the predicted DRT may be as low as ~0.3. This can lead to an inaccurate ANN prediction of the underlying time-resolved electrochemical response, although the apparent accuracy as evaluated from the EIS prediction may seem acceptable. After adjustment of the parameters of the ANN framework, the average R^(2) of the DRTs derived from the predicted EIS can be improved to 0.9667. Thus, we demonstrate that a properly tuned ANN model can be used as an effective tool to predict not only the EIS but also the DRT of complex electrochemical systems.
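The R^(2) values quoted above follow the standard coefficient-of-determination formula, R^2 = 1 - SS_res/SS_tot; a minimal sketch with illustrative data:

```python
# Minimal sketch of the coefficient of determination R^2 used to compare
# predicted and measured curves; the data values are illustrative only.

def r2(y, yhat):
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))  # residual sum of squares
    ss_tot = sum((a - ybar) ** 2 for a in y)             # total sum of squares
    return 1.0 - ss_res / ss_tot

print(r2([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 1.0 (perfect fit)
```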
Fund: Project supported in part by the National Natural Science Foundation of China (Grant No. 61973064), the Natural Science Foundation of Hebei Province of China (Grant Nos. F2019501126 and F2022501024), the Natural Science Foundation of Liaoning Province, China (Grant No. 2020-KF11-03), and the Fund from the Hong Kong Research Grants Council (Grant No. CityU11206320).
Abstract: Noise and time delay are inevitable in real-world networks. In this article, the framework of the master stability function is generalized to stochastic complex networks with time-delayed coupling. The focus is on the effects of noise, time delay, and their inner interactions on network synchronization. It is found that when there is time-delayed coupling in the network and noise diffuses through all state variables of the nodes, appropriately increasing the noise intensity can effectively improve the network synchronizability; otherwise, noise can be either beneficial or harmful. For stochastic networks, large time delays lead to desynchronization. These findings provide valuable references for designing optimal complex networks in practical applications.
Fund: Project supported by the National Natural Science Foundation of China (Grant Nos. 61772295 and 61572270), the PhD Foundation of Chongqing Normal University (Grant No. 19XLB003), and the Chongqing Technology Foresight and Institutional Innovation Project (Grant No. cstc2021jsyjyzysbAX0011).
Abstract: Time series classification (TSC) has attracted a lot of attention in time series data mining and has been applied in various fields. With the success of deep learning (DL) in computer vision, people have started to use deep learning to tackle TSC tasks. Quantum neural networks (QNNs) have recently demonstrated their superiority over traditional machine learning in areas such as image processing and natural language processing, but research using quantum neural networks to handle TSC tasks has not received enough attention. Therefore, we propose a learning framework based on multiple imaging and a hybrid QNN (MIHQNN) for TSC tasks. We investigate the possibility of converting 1D time series to 2D images and classifying the converted images using a hybrid QNN. We explore the differences between MIHQNN based on single time series imaging and MIHQNN based on the fusion of multiple time series imagings. Four quantum circuits were also selected and designed to study the impact of quantum circuits on TSC tasks. We tested our method on several standard datasets and achieved significant results compared to several current TSC methods, demonstrating the effectiveness of MIHQNN. This research highlights the potential of applying quantum computing to TSC and provides a theoretical and experimental background for future research.
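One widely used encoding for turning a 1D series into a 2D image, of the kind the multiple-imaging step above relies on, is the Gramian Angular Summation Field (GASF). A hedged sketch (the abstract does not specify its exact imaging methods, so GASF is an assumed example; the series must not be constant for the rescaling to work):

```python
import math

# Illustrative sketch of the Gramian Angular Summation Field (GASF) encoding:
# rescale the series to [-1, 1], map to polar angles, and form cos(phi_i + phi_j).
# This is an assumed example of time-series imaging, not MIHQNN's exact method.

def gasf(series):
    lo, hi = min(series), max(series)
    x = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]  # rescale to [-1, 1]
    phi = [math.acos(v) for v in x]                          # polar angles
    return [[math.cos(a + b) for b in phi] for a in phi]

img = gasf([0.0, 0.5, 1.0])   # a 3x3 image with values in [-1, 1]
```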
Abstract: At present, the interpretation of regional economic development (RED) has shifted from a simple evaluation of economic growth to a focus on economic growth together with the optimization of economic structure, the improvement of economic relations, and institutional innovation. This article takes the RED trend as the research object and constructs a RED index for theoretical analysis. It then uses an attention mechanism based on digital twins and a time series network model to verify actual data. Finally, the regional economy is predicted according to the theoretical model. The specific research work mainly includes the following aspects: 1) introducing the status of domestic and international research on time series networks and economic forecasting; 2) introducing the basic principles and structures of long short-term memory (LSTM) and convolutional neural networks (CNN), constructing an improved CNN-LSTM model combined with the attention mechanism, and building a regional economic prediction index system; 3) selecting the best parameters of the model through experiments and using the trained model for simulation-based prediction. The results show that the CNN-LSTM model based on the attention mechanism proposed in this paper has high accuracy in predicting regional economies.
Abstract: Time synchronization is one of the fundamental techniques in wireless sensor networks (WSNs). This paper proposes a novel time synchronization protocol, a robust consensus-based algorithm that operates in the presence of transmission delay and packet loss. It first compensates for transmission delay and packet loss, and then estimates clock skew and clock offset in two steps. Simulation and experiment results show that the proposed protocol can keep the synchronization error below 2 μs in a grid network of 10 nodes or a random network of 90 nodes. Moreover, the synchronization accuracy of the proposed protocol remains constant when the WSN operates for up to a month.
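The second step above, estimating clock skew and offset, can be illustrated by fitting a line remote = skew * local + offset to paired timestamps by ordinary least squares. This is a hedged stand-in: the protocol's actual two-step consensus estimator with delay compensation is not reproduced here, and all timestamp values are assumptions:

```python
# Illustrative sketch: least-squares estimation of relative clock skew and
# offset from paired timestamps. A stand-in for the protocol's estimator.

def fit_clock(local, remote):
    n = len(local)
    mx = sum(local) / n
    my = sum(remote) / n
    sxx = sum((x - mx) ** 2 for x in local)
    sxy = sum((x - mx) * (y - my) for x, y in zip(local, remote))
    skew = sxy / sxx
    offset = my - skew * mx
    return skew, offset

# assumed data: remote clock runs at 1.001x speed with a 5 s offset
local = [0.0, 10.0, 20.0, 30.0]
remote = [5.0, 15.01, 25.02, 35.03]
skew, offset = fit_clock(local, remote)   # ~1.001, ~5.0
```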
基金supported by the Natural Science Foundation of Jiangsu Province(Grant No.BK20220421)the State Key Program of the National Natural Science Foundation of China(Grant No.42230702)the National Natural Science Foundation of China(Grant No.82302352).
Abstract: Landslides are destructive natural disasters that cause catastrophic damage and loss of life worldwide. Accurately predicting landslide displacement enables effective early warning and risk management. However, the limited availability of on-site measurement data has been a substantial obstacle in developing data-driven models, such as state-of-the-art machine learning (ML) models. To address these challenges, this study proposes a data augmentation framework that uses generative adversarial networks (GANs), a recent advance in generative artificial intelligence (AI), to improve the accuracy of landslide displacement prediction. The framework provides effective data augmentation for limited datasets. A recurrent GAN model, RGAN-LS, is proposed, specifically designed to generate realistic synthetic multivariate time series that mimic the characteristics of real landslide on-site measurement data. A customized moment-matching loss is incorporated in addition to the adversarial loss during the training of RGAN-LS to capture the temporal dynamics and correlations in real time series data. The synthetic data generated by RGAN-LS are then used to enhance the training of long short-term memory (LSTM) networks and particle swarm optimization-support vector machine (PSO-SVM) models for landslide displacement prediction. Results on two landslides in the Three Gorges Reservoir (TGR) region show a significant improvement in LSTM model prediction performance when trained on augmented data. For instance, in the case of the Baishuihe landslide, the average root mean square error (RMSE) improves by 16.11% and the mean absolute error (MAE) by 17.59%. More importantly, the model's responsiveness during mutational stages is enhanced for early warning purposes. However, the results show that the static PSO-SVM model sees only marginal gains compared to recurrent models such as LSTM. Further analysis indicates that an optimal synthetic-to-real data ratio (50% in the illustrated cases) maximizes the improvements. This demonstrates the robustness and effectiveness of supplementing training data for dynamic models to obtain better results. By using this powerful generative AI approach, RGAN-LS can generate high-fidelity synthetic landslide data. This is critical for improving the performance of advanced ML models in predicting landslide displacement, particularly when training data are limited. Additionally, this approach has the potential to expand the use of generative AI in geohazard risk management and other research areas.
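The synthetic-to-real ratio discussed above amounts to appending a fixed fraction of generated samples to the real training set. A minimal sketch, where the sample lists stand in for real measurements and GAN outputs (the generator itself is not reproduced):

```python
# Illustrative sketch: mixing synthetic samples into a real training set at a
# target synthetic-to-real ratio. The sample lists below are assumed data; a
# placeholder list stands in for the RGAN-LS generator's output.

def augment(real, synthetic, ratio=0.5):
    """Append int(ratio * len(real)) synthetic samples to the real set."""
    k = int(ratio * len(real))
    return real + synthetic[:k]

real = [[1.0], [2.0], [3.0], [4.0]]
synthetic = [[1.5], [2.5], [3.5]]
train = augment(real, synthetic, ratio=0.5)  # 4 real + 2 synthetic samples
```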
Fund: Supported by the Ulsan Metropolitan City-ETRI joint cooperation project [21AS1600, Development of intelligent technology for key industries: autonomous human-mobile-space autonomous collaboration intelligence technology].
Abstract: In recent times, real-time wireless networks have found applicability in several practical applications such as smart cities, healthcare, surveillance, and environmental monitoring. At the same time, proper localization of nodes in real-time wireless networks helps to improve the overall functioning of the networks. This study presents an Improved Metaheuristics based Energy Efficient Clustering with Node Localization (IM-EECNL) approach for real-time wireless networks. The proposed IM-EECNL technique involves two major processes, namely node localization and clustering. First, a Chaotic Water Strider Algorithm based Node Localization (CWSANL) technique is used to determine the unknown positions of the nodes. Second, an Oppositional Archimedes Optimization Algorithm based Clustering (OAOAC) technique is applied to achieve energy efficiency in the network. Besides, the OAOAC technique derives a fitness function comprising residual energy, distance to cluster heads (CHs), distance to the base station (BS), and load. The performance validation of the IM-EECNL technique is carried out with respect to localization and energy efficiency. A wide-ranging comparative analysis highlighted the improved performance of the IM-EECNL approach over recent approaches, with a maximum packet delivery ratio (PDR) of 0.985.