Journal Articles
713 articles found
Rainfall prediction by using ANFIS times series technique in South Tangerang, Indonesia
1
Authors: Wayan Suparta, Azizan Abu Samah. 《Geodesy and Geodynamics》, 2020, Issue 6, pp. 411-417 (7 pages)
Excessive rainfall is one of the triggers of the flooding phenomenon, especially in the tropics in flat or concave areas. Some critical points in the South Tangerang region, which is currently one of the most rapidly developing cities, cannot be ignored in the flooding problem. Floods disturb human activities, cause loss of life and property, and in turn affect economic activity in an area. This paper aimed to predict rainfall by exploring the application of artificial intelligence techniques such as ANFIS (Adaptive Neuro-Fuzzy Inference System). The proposed technique combines the learning ability of neural networks with the transparent linguistic representations of fuzzy systems. ANFIS models with various input structures and membership functions were built, trained, and tested to evaluate their capability. Analysis of six years of monthly rainfall data in South Tangerang City, Banten found that rainfall prediction based on the ANFIS time series is promising, with 80% of the test data well predicted.
Keywords: ANFIS, time series, rainfall, South Tangerang
Times Series Prediction to Basis of a Neural Network Conceived by a Real Genetic Algorithm
2
Authors: Raihane Mechgoug, Nourddine Golea, Abdelmalik Taleb-Ahmed. 《Computer Technology and Application》, 2011, Issue 3, pp. 219-226 (8 pages)
Neural networks and genetic algorithms are complementary technologies in the design of adaptive intelligent systems. A neural network learns from scratch by adjusting the interconnections between layers, while genetic algorithms are a popular computing framework that uses principles from natural population genetics to evolve solutions to problems. Various forecasting methods have been developed on the basis of neural networks, but their accuracy has been a matter of concern. In neural network methods, the forecasted values depend on the choice of the neural predictor structure, the number of inputs, and the lag. To remedy these problems, this paper investigates the applicability of automatically designing a neural predictor with a real-coded genetic algorithm to predict the future values of a time series. The prediction method is tested on meteorological time series, namely daily and weekly mean temperatures in Melbourne, Australia, 1980-1990.
Keywords: prediction, time series, artificial neural network, genetic algorithm
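Below is a minimal, illustrative sketch of the idea described in this abstract: a small real-coded genetic algorithm evolves the structure of a neural predictor (lag count and hidden units) and scores each candidate by its validation error on a univariate series. The synthetic series, population size, and the use of scikit-learn's MLPRegressor are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Stand-in for a daily mean-temperature series (the paper uses Melbourne data)
series = np.sin(np.linspace(0, 60, 1500)) + 0.1 * rng.standard_normal(1500)

def make_lagged(y, n_lags):
    """Build a lag matrix X and one-step-ahead targets t from a 1D series."""
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    return X, y[n_lags:]

def fitness(genes):
    """Validation MSE of an MLP predictor whose structure is encoded by the genes."""
    n_lags, hidden = int(genes[0]), int(genes[1])
    X, t = make_lagged(series, n_lags)
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=500, random_state=0)
    model.fit(X[:split], t[:split])
    return mean_squared_error(t[split:], model.predict(X[split:]))  # lower is better

# Real-coded population: [n_lags in 2..20, hidden units in 2..30]
pop = rng.uniform([2, 2], [20, 30], size=(8, 2))
for _ in range(5):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[:4]]                    # truncation selection
    children = parents + rng.normal(0, 1.0, parents.shape)   # Gaussian mutation
    pop = np.vstack([parents, np.clip(children, [2, 2], [20, 30])])

best = pop[np.argmin([fitness(g) for g in pop])]
print("selected lags, hidden units:", best.astype(int))
```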
Times Series Applied to Study Vitamin D Seasonality in Argentina
3
Authors: José Bavio, Carina Fernández, Patricia Fernández, Beatriz Marrón. 《Applied Mathematics》, 2021, Issue 7, pp. 546-555 (10 pages)
In this study, we analyze how vitamin D (VD) serum levels vary with latitude and throughout the seasons of the year within a population sample over three years, taking into account that VD is mainly photosynthesized in the skin from sun exposure. Vitamin D levels were measured in 80,763 patients during 2013, 2014, and 2015. To accomplish the objectives, we first perform inference tests such as a two-way analysis of variance (ANOVA) followed by post-hoc tests. Secondly, we apply time series techniques, including cross-correlation calculations. Less than 10% of the sample had healthy VD levels, which should be a major public health concern. The effect of the interaction between the two factors, zone and season, was confirmed by ANOVA, and the mean values that differ significantly were determined by post-hoc tests. Furthermore, we find that mean serum VD levels, measured as 25-hydroxy-VD, follow a seasonal lag pattern of 9 weeks, a delay of the minimum and maximum values after the respective equinoxes and daily sunlight duration. The present study provides reliable population estimates, since one of its strengths is its huge sample size. We have quantitatively characterized the seasonality of serum vitamin D levels in Argentina, and the seasonal lag pattern has been determined for the study region.
Keywords: vitamin D status, population study, time series, correlation, interaction effects
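A compact sketch of the two analysis steps mentioned in the abstract, assuming Python with pandas and statsmodels: a two-way ANOVA with a zone x season interaction, and a cross-correlation scan used to read off the seasonal lag (the synthetic weekly series below has a 9-week delay built in, matching the lag reported in the abstract). Column names and the toy data are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "vd": rng.normal(25, 8, n),                                   # 25-OH-D values (toy)
    "zone": rng.choice(["north", "center", "south"], n),
    "season": rng.choice(["summer", "autumn", "winter", "spring"], n),
})

# Two-way ANOVA with interaction term (zone x season)
model = ols("vd ~ C(zone) * C(season)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Cross-correlation between two weekly series; the argmax gives the lag in weeks
weeks = np.arange(156)                                            # three years of weeks
sunlight = 12 + 3 * np.sin(2 * np.pi * weeks / 52)
vd_weekly = 25 + 6 * np.sin(2 * np.pi * (weeks - 9) / 52)         # ~9-week delay built in
lags = np.arange(-26, 27)
xcorr = [np.corrcoef(np.roll(sunlight, k), vd_weekly)[0, 1] for k in lags]
print("estimated lag (weeks):", lags[int(np.argmax(xcorr))])
```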
Defect Detection Model Using Time Series Data Augmentation and Transformation (Cited by 1)
4
Authors: Gyu-Il Kim, Hyun Yoo, Han-Jin Cho, Kyungyong Chung. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 2, pp. 1713-1730 (18 pages)
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time series data into images for analysis have been studied. This paper proposes a defect detection model that uses time series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. The data augmentation method is the addition of noise: Gaussian noise with a noise level of 0.002 is added to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time series data into images. This enables the identification of patterns in time series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, and the detected anomaly areas are represented as heat maps. By applying an anomaly map to the original image, it is possible to capture the areas where anomalies occur. The performance evaluation shows that both the F1-score and accuracy are high when time series data are converted to images. Additionally, when processed as images rather than as raw time series, there was a significant reduction in both the size of the data and the training time. The proposed method can provide an important springboard for research in the field of anomaly detection using time series data, and it helps solve problems such as analyzing complex patterns in data in a lightweight way.
Keywords: defect detection, time series, deep learning, data augmentation, data transformation
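The two preprocessing steps named in the abstract can be sketched as follows, assuming the third-party pyts package for the Markov Transition Field; the series length, image size, and bin count are illustrative assumptions (only the 0.002 noise level is taken from the abstract).

```python
import numpy as np
from pyts.image import MarkovTransitionField

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 128))               # 16 example series of length 128

# Data augmentation: add small Gaussian noise to each series
X_aug = X + rng.normal(0.0, 0.002, size=X.shape)

# Transform every (original and augmented) series into a 2D MTF image
mtf = MarkovTransitionField(image_size=32, n_bins=8)
images = mtf.fit_transform(np.vstack([X, X_aug]))  # shape: (32, 32, 32)
print(images.shape)
```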
Improved Responses with Multitaper Spectral Analysis for Magnetotelluric Time Series Data Processing: Examples from Field Data
5
Authors: Matthew J. Comeau, Rafael Rigaud, Johanna Plett, Michael Becken, Alexey Kuvshinov. 《Acta Geologica Sinica (English Edition)》 (SCIE, CAS, CSCD), 2024, Issue S01, pp. 14-17 (4 pages)
In order to attain good-quality transfer function estimates from magnetotelluric field data (i.e., smooth behavior and small uncertainties across all frequencies), we compare time series data processing with and without a multitaper approach for spectral estimation. There are several common ways to increase the reliability of Fourier spectral estimation from experimental (noisy) data; for example, subdividing the experimental time series into segments, tapering these segments (using a single taper), performing the Fourier transform of the individual segments, and averaging the resulting spectra.
Keywords: magnetotellurics, electrical resistivity, time series, processing, Fourier analysis, multitaper
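A minimal sketch of multitaper spectral estimation for one data segment, assuming SciPy: several DPSS tapers are applied to the same segment and the resulting eigenspectra are averaged, in contrast with the single-taper-per-segment scheme described above. The sampling rate, time-bandwidth product, and taper count are assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

fs = 100.0                                    # sampling rate (Hz), assumed
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 5.0 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

NW, K = 4.0, 7                                # time-bandwidth product, number of tapers
tapers = dpss(x.size, NW, Kmax=K)             # shape (K, N): K orthogonal DPSS tapers

# Average the eigenspectra of the K tapered copies of the same segment
spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
S = spectra.mean(axis=0)
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
print("peak frequency (Hz):", freqs[np.argmax(S[1:]) + 1])   # ~5 Hz for this test signal
```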
Periodic signal extraction of GNSS height time series based on adaptive singular spectrum analysis
6
Authors: Chenfeng Li, Peibing Yang, Tengxu Zhang, Jiachun Guo. 《Geodesy and Geodynamics》 (EI, CSCD), 2024, Issue 1, pp. 50-60 (11 pages)
Singular spectrum analysis is widely used in geodetic time series analysis. However, when extracting time-varying periodic signals from a large number of Global Navigation Satellite System (GNSS) time series, the selection of an appropriate embedding window size and principal components makes this method cumbersome and inefficient. To improve the efficiency and accuracy of singular spectrum analysis, this paper proposes an adaptive singular spectrum analysis method that combines spectrum analysis with a new trace matrix. The running time and correlation analysis indicate that the proposed method can adaptively set the embedding window size to extract the time-varying periodic signals from GNSS time series, and the extraction efficiency for a single time series is six times that of standard singular spectrum analysis. The method is also accurate and more suitable for time-varying periodic signal analysis of global GNSS sites.
Keywords: GNSS, time series, singular spectrum analysis, trace matrix, periodic signal
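For reference, a bare-bones singular spectrum analysis of a synthetic daily height series: embed into a trajectory matrix, take the SVD, and reconstruct the annual component by diagonal averaging. The fixed window size below is an assumption; the paper's contribution is choosing the window and components adaptively.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 730                                            # ~2 years of daily heights (toy data)
t = np.arange(n)
y = 5 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 1.5, n)

L = 180                                            # embedding window size (assumed)
K = n - L + 1
traj = np.column_stack([y[i:i + L] for i in range(K)])   # L x K trajectory matrix

U, s, Vt = np.linalg.svd(traj, full_matrices=False)

def reconstruct(components):
    """Sum selected elementary matrices and diagonal-average back to a series."""
    M = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    out, counts = np.zeros(n), np.zeros(n)
    for i in range(L):
        for j in range(K):
            out[i + j] += M[i, j]
            counts[i + j] += 1
    return out / counts

annual = reconstruct([0, 1])                       # leading pair ~ annual periodic signal
truth = 5 * np.sin(2 * np.pi * t / 365.25)
print("reconstruction RMS error:", np.sqrt(np.mean((annual - truth) ** 2)))
```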
Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models
7
Authors: Weisi Chen, Walayat Hussain, Francesco Cauteruccio, Xu Zhang. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 4, pp. 187-224 (38 pages)
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have produced mediocre results, deep learning has largely contributed to the elevation of prediction performance. Currently, an up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models like convolutional neural networks (CNN), which are capable of extracting spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, the attention mechanism (AM) and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models like CNN-LSTM and CNN-LSTM-AM have in general been reported to be superior in performance to standalone models like the CNN-only model. Some remaining challenges are discussed, including non-friendliness for finance domain experts, delayed prediction, domain knowledge negligence, lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Keywords: financial time series prediction, convolutional neural network, long short-term memory, deep learning, attention mechanism, finance
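A minimal sketch of the hybrid CNN-LSTM architecture that this review treats as a representative hybrid model, assuming TensorFlow/Keras: Conv1D layers extract local patterns from a window of past observations and an LSTM captures temporal dependencies before a dense regression head. Window length, feature count, and layer sizes are assumptions, and the attention mechanism discussed in the review is omitted for brevity.

```python
import numpy as np
import tensorflow as tf

window, n_features = 30, 5                     # 30 past days, 5 indicators (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_features)),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),                  # next-step return/price regression
])
model.compile(optimizer="adam", loss="mse")

# Toy data just to demonstrate the expected shapes
X = np.random.rand(256, window, n_features).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).shape)   # (3, 1)
```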
An Innovative Deep Architecture for Flight Safety Risk Assessment Based on Time Series Data
8
Authors: Hong Sun, Fangquan Yang, Peiwen Zhang, Yang Jiao, Yunxiang Zhao. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 3, pp. 2549-2569 (21 pages)
With the growing integration of aviation safety and artificial intelligence, research combining risk assessment with artificial intelligence is particularly important in the field of risk management, but finding an efficient and accurate risk assessment algorithm has become a challenge for the civil aviation industry. Therefore, an improved risk assessment algorithm (PS-AE-LSTM), based on a long short-term memory network (LSTM) with an autoencoder (AE), is proposed, since the various supervised deep learning algorithms used in flight safety cannot adequately address the problem of the quality of risk level labels. Firstly, based on the normal distribution characteristics of flight data, a probability severity (PS) model is established to enhance the quality of the risk assessment labels. Secondly, an autoencoder is introduced to reconstruct the flight parameter data to improve the data quality. Finally, utilizing the time-series nature of flight data, a long short-term memory network is used to classify the risk level and improve the accuracy of risk assessment. A risk assessment experiment was then conducted on a fleet landing-phase dataset using the PS-AE-LSTM algorithm to assess the risk level associated with aircraft hard landing events. The results show that the proposed algorithm achieves an accuracy of 86.45% compared with seven baseline models and has excellent risk assessment capability.
Keywords: safety engineering, risk assessment, time series data, autoencoder, LSTM
Time series prediction of reservoir bank landslide failure probability considering the spatial variability of soil properties
9
Authors: Luqi Wang, Lin Wang, Wengang Zhang, Xuanyu Meng, Songlin Liu, Chun Zhu. 《Journal of Rock Mechanics and Geotechnical Engineering》 (SCIE, CSCD), 2024, Issue 10, pp. 3951-3960 (10 pages)
Historically, landslides have been the primary type of geological disaster worldwide. Generally, the stability of reservoir banks is primarily affected by rainfall and reservoir water level fluctuations. Moreover, the stability of reservoir banks changes with the long-term dynamics of external disaster-causing factors. Thus, assessing the time-varying reliability of reservoir landslides remains a challenge. In this paper, a machine learning (ML) based approach is proposed to analyze the long-term reliability of reservoir bank landslides in spatially variable soils through time series prediction. This study systematically investigated the prediction performance of three ML algorithms, i.e., multilayer perceptron (MLP), convolutional neural network (CNN), and long short-term memory (LSTM). Additionally, the effects of the data quantity and data ratio on the predictive power of the deep learning models are considered. The results show that all three ML models can accurately depict the changes in the time-varying failure probability of reservoir landslides. The CNN model outperforms both the MLP and LSTM models in predicting the failure probability. Furthermore, selecting the right data ratio can improve the prediction accuracy of the failure probability obtained by the ML models.
Keywords: machine learning (ML), reservoir bank landslide, spatial variability, time series prediction, failure probability
TSCND: Temporal Subsequence-Based Convolutional Network with Difference for Time Series Forecasting
10
Authors: Haoran Huang, Weiting Chen, Zheming Fan. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 3, pp. 3665-3681 (17 pages)
Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, the causal convolution concentrates the receptive fields of outputs in the earlier part of the input sequence, so recent input information is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to the difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND can reduce the prediction mean squared error by 7.3% and save runtime, compared with state-of-the-art models and the vanilla TCN.
Keywords: difference, data prediction, time series, temporal convolutional network, dilated convolution
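The two ingredients described above can be illustrated with a short Keras sketch: a dilated causal Conv1D stack (the standard TCN building block) operating on a differenced window, followed by a compensation step that adds the last observed level back to the predicted difference. Shapes, filter counts, and dilation rates are assumptions; this is not the authors' SDC/DCM implementation.

```python
import numpy as np
import tensorflow as tf

window = 64
inputs = tf.keras.layers.Input(shape=(window - 1, 1))        # differenced series
x = inputs
for d in (1, 2, 4, 8):                                        # growing receptive field
    x = tf.keras.layers.Conv1D(16, kernel_size=3, dilation_rate=d,
                               padding="causal", activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
delta_pred = tf.keras.layers.Dense(1)(x)                      # predicted next difference
model = tf.keras.Model(inputs, delta_pred)
model.compile(optimizer="adam", loss="mse")

# Difference + compensation around the network (untrained here; wiring only)
series = np.cumsum(np.random.randn(1000)).astype("float32")
win = series[-window:]
diff = np.diff(win).reshape(1, window - 1, 1)                 # difference operation
next_value = win[-1] + model.predict(diff, verbose=0)[0, 0]   # compensation step
print("forecast:", next_value)
```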
AFSTGCN: Prediction for multivariate time series using an adaptive fused spatial-temporal graph convolutional network
11
Authors: Yuteng Xiao, Kaijian Xia, Hongsheng Yin, Yu-Dong Zhang, Zhenjiang Qian, Zhaoyang Liu, Yuehan Liang, Xiaodan Li. 《Digital Communications and Networks》 (SCIE, CSCD), 2024, Issue 2, pp. 292-303 (12 pages)
The prediction of multivariate time series (MTS) explores the interrelationships among variables at historical moments, extracts their relevant characteristics, and is widely used in finance, weather, complex industries and other fields. It is also important for constructing digital twin systems. However, existing methods do not take full advantage of the potential properties of the variables, which results in poor prediction accuracy. In this paper, we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of the unknown spatial-temporal structure, we construct the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer. Specifically, we fuse the spatial-temporal graph based on the interrelationship of spatial graphs. Simultaneously, we construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods. Subsequently, to overcome the insufficient extraction of disordered correlation features, we construct the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module. The module forces the reordering of disordered temporal, spatial and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance prediction accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
Keywords: adaptive adjacency matrix, digital twin, graph convolutional network, multivariate time series prediction, spatial-temporal graph
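One common way to realize the adaptive adjacency idea mentioned above is to derive the matrix from learnable node embeddings, sketched below with PyTorch; the exact construction in AFSTGCN may differ, and the sizes here are assumptions.

```python
import torch
import torch.nn.functional as F

n_nodes, emb_dim = 10, 16
E1 = torch.nn.Parameter(torch.randn(n_nodes, emb_dim))    # source node embeddings
E2 = torch.nn.Parameter(torch.randn(n_nodes, emb_dim))    # target node embeddings

# Adjacency learned end-to-end: similarity -> non-negativity -> row normalization
A = F.softmax(F.relu(E1 @ E2.T), dim=1)                   # (n_nodes, n_nodes)

# One graph-convolution step over node features X: aggregate neighbors, then mix
X = torch.randn(n_nodes, 32)                              # per-node feature vectors
W = torch.nn.Parameter(torch.randn(32, 32) * 0.1)
H = torch.relu(A @ X @ W)
print(A.shape, H.shape)
```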
A Time Series Short-Term Prediction Method Based on Multi-Granularity Event Matching and Alignment
12
Authors: Haibo Li, Yongbo Yu, Zhenbo Zhao, Xiaokang Tang. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 1, pp. 653-676 (24 pages)
Accurate forecasting of time series is crucial across various domains. Many prediction tasks rely on effectively segmenting, matching, and aligning time series data. For instance, even for time series with the same granularity, segmenting them into events of different granularities can effectively mitigate the impact of varying time scales on prediction accuracy. However, these events of varying granularity frequently intersect with each other and may have unequal durations, and even minor differences can result in significant errors when matching time series with future trends. Besides, directly using matched but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy. Therefore, this paper proposes a short-term forecasting method for time series based on multi-granularity events, MGE-SP (multi-granularity event-based short-term prediction). First, a methodological framework for MGE-SP is established to guide the implementation steps. The framework consists of three key steps: multi-granularity event matching based on the LTF (latest time first) strategy, multi-granularity event alignment using piecewise aggregate approximation based on the compression ratio, and a short-term prediction model based on XGBoost. Data from a nationwide online car-hailing service in China ensures the method's reliability. The average RMSE (root mean square error) and MAE (mean absolute error) of the proposed method are 3.204 and 2.360, lower than the respective values of 4.056 and 3.101 obtained using the ARIMA (autoregressive integrated moving average) method, as well as the values of 4.278 and 2.994 obtained using the k-means-SVR (support vector regression) method. Another experiment is conducted on stock data from a public data set. The proposed method achieved an average RMSE and MAE of 0.836 and 0.696, lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method, as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.
Keywords: time series, short-term prediction, multi-granularity event, alignment, event matching
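The alignment step based on piecewise aggregate approximation can be sketched in a few lines of NumPy: events of unequal duration are averaged down to a common number of segments so they can be compared or fed to a downstream predictor such as XGBoost. The segment count and toy events are assumptions.

```python
import numpy as np

def paa(series, n_segments):
    """Average a 1D series into n_segments equal-width pieces (piecewise aggregate approximation)."""
    series = np.asarray(series, dtype=float)
    edges = np.linspace(0, len(series), n_segments + 1).astype(int)
    return np.array([series[edges[i]:edges[i + 1]].mean() for i in range(n_segments)])

event_a = np.sin(np.linspace(0, np.pi, 37))     # two events with unequal durations
event_b = np.sin(np.linspace(0, np.pi, 55))

aligned = np.vstack([paa(event_a, 8), paa(event_b, 8)])
print(aligned.shape)                            # (2, 8): now directly comparable
```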
Advancing Autoencoder Architectures for Enhanced Anomaly Detection in Multivariate Industrial Time Series
13
Authors: Byeongcheon Lee, Sangmin Kim, Muazzam Maqsood, Jihoon Moon, Seungmin Rho. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 10, pp. 1275-1300 (26 pages)
In the context of rapid digitization in industrial environments, how effective are advanced unsupervised learning models, particularly hybrid autoencoder models, at detecting anomalies in industrial control system (ICS) datasets? This study is crucial because it addresses the challenge of identifying rare and complex anomalous patterns in the vast amounts of time series data generated by Internet of Things (IoT) devices, which can significantly improve the reliability and safety of these systems. In this paper, we propose a hybrid autoencoder model, called ConvBiLSTM-AE, which combines a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) to more effectively learn complex temporal data patterns for anomaly detection. On the hardware-in-the-loop-based extended industrial control system dataset, the ConvBiLSTM-AE model demonstrated remarkable anomaly detection performance, achieving F1 scores of 0.78 and 0.41 for the first and second datasets, respectively. The results suggest that hybrid autoencoder models are not only viable, but potentially superior alternatives for unsupervised anomaly detection in complex industrial systems, offering a promising approach to improving their reliability and safety.
Keywords: advanced anomaly detection, autoencoder innovations, unsupervised learning, industrial security, multivariate time series analysis
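A minimal Keras sketch in the spirit of the hybrid autoencoder above: a Conv1D front end and a bidirectional LSTM encoder/decoder reconstruct multivariate windows, and the per-window reconstruction error serves as the anomaly score. Window size, sensor count, and layer widths are assumptions, not the paper's exact ConvBiLSTM-AE configuration.

```python
import numpy as np
import tensorflow as tf

window, n_sensors = 60, 8
inp = tf.keras.layers.Input(shape=(window, n_sensors))
x = tf.keras.layers.Conv1D(32, 5, padding="same", activation="relu")(inp)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16))(x)                 # encoder
x = tf.keras.layers.RepeatVector(window)(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16, return_sequences=True))(x)
out = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_sensors))(x)     # decoder
ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")

X = np.random.rand(128, window, n_sensors).astype("float32")   # toy "normal" windows
ae.fit(X, X, epochs=2, verbose=0)
errors = np.mean((ae.predict(X, verbose=0) - X) ** 2, axis=(1, 2))   # anomaly scores
print("mean reconstruction error:", errors.mean())
```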
Unsupervised Time Series Segmentation: A Survey on Recent Advances
14
Authors: Chengyu Wang, Xionglve Li, Tongqing Zhou, Zhiping Cai. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 8, pp. 2657-2673 (17 pages)
Time series segmentation has attracted increasing interest in recent years; it aims to divide a time series into segments, each reflecting a state of the monitored objects. Although there have been many surveys on time series segmentation, most of them focus on change point detection (CPD) methods and overlook the advances in boundary detection (BD) and state detection (SD) methods. In this paper, we categorize time series segmentation methods into CPD, BD, and SD methods, with a specific focus on recent advances in BD and SD. Within the scope of BD and SD, we subdivide the methods based on their underlying models/techniques and focus on the milestones that have shaped the development trajectory of each category. In conclusion, we found that: (1) existing methods fail to provide sufficient support for online operation, with only a few supporting online deployment; (2) most existing methods require the specification of parameters, which hinders their ability to work adaptively; (3) existing SD methods do not attach importance to the accurate detection of boundary points in evaluation, which may lead to limitations in boundary point detection. We highlight the ability to work online and adaptively as important attributes of segmentation methods, and boundary detection accuracy as a neglected metric for SD methods.
Keywords: time series segmentation, time series state detection, boundary detection, change point detection
A Time Series Intrusion Detection Method Based on SSAE, TCN and Bi-LSTM
15
Authors: Zhenxiang He, Xunxi Wang, Chunwei Li. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 1, pp. 845-871 (27 pages)
In the fast-evolving landscape of digital networks, the incidence of network intrusions has escalated alarmingly. At the same time, the crucial role of time series data in intrusion detection remains largely underappreciated, with most systems failing to capture the time-bound nuances of network traffic. This leads to compromised detection accuracy and overlooked temporal patterns. Addressing this gap, we introduce a novel SSAE-TCN-BiLSTM (STL) model that integrates time series analysis, significantly enhancing detection capabilities. Our approach reduces feature dimensionality with a stacked sparse autoencoder (SSAE) and extracts temporally relevant features through a temporal convolutional network (TCN) and a bidirectional long short-term memory network (Bi-LSTM). By meticulously adjusting the time steps, we underscore the significance of temporal data in bolstering detection accuracy. On the UNSW-NB15 dataset, our model achieved an F1-score of 99.49%, accuracy of 99.43%, precision of 99.38%, recall of 99.60%, and an inference time of 4.24 s. For the CICIDS2017 dataset, we recorded an F1-score of 99.53%, accuracy of 99.62%, precision of 99.27%, recall of 99.79%, and an inference time of 5.72 s. These findings confirm not only the STL model's superior performance but also its operational efficiency, underpinning its significance in real-world cybersecurity scenarios where rapid response is paramount. Our contribution represents a significant advance in cybersecurity, proposing a model that excels in accuracy and in adaptability to the dynamic nature of network traffic, setting a new benchmark for intrusion detection systems.
Keywords: network intrusion detection, bidirectional long short-term memory network, time series, stacked sparse autoencoder, temporal convolutional network, time steps
Multivariate Time Series Anomaly Detection Based on Spatial-Temporal Network and Transformer in Industrial Internet of Things
16
Authors: Mengmeng Zhao, Haipeng Peng, Lixiang Li, Yeqing Ren. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 8, pp. 2815-2837 (23 pages)
In the Industrial Internet of Things (IIoT), sensors generate time series data that reflect the working state. When the systems are attacked, timely identification of outliers in the time series is critical to ensure security. Although many anomaly detection methods have been proposed, the temporal correlation of the time series from the same sensor and the state (spatial) correlation between different sensors are rarely considered simultaneously. Owing to the superior capability of the Transformer in learning time series features, this paper proposes a time series anomaly detection method based on a spatial-temporal network and an improved Transformer. Additionally, methods based on graph neural networks typically include a graph structure learning module and an anomaly detection module, which are interdependent. However, in the initial phase of training, since neither module has reached an optimal state, their performance may influence each other. This interdependence, coupled with the initial instability, makes it hard for an end-to-end training approach to effectively direct the learning trajectory of each module, so the model may struggle to find the optimal solution during training, resulting in unsatisfactory results. We therefore introduce an adaptive graph structure learning method to obtain the optimal model parameters and graph structure. Experiments on two publicly available datasets demonstrate that the proposed method attains better anomaly detection results than other methods.
Keywords: multivariate time series, anomaly detection, spatial-temporal network, Transformer
Prediction of three-dimensional ocean temperature in the South China Sea based on time series gridded data and a dynamic spatiotemporal graph neural network
17
Authors: Feng Nan, Zhuolin Li, Jie Yu, Suixiang Shi, Xinrong Wu, Lingyu Xu. 《Acta Oceanologica Sinica》 (SCIE, CAS, CSCD), 2024, Issue 7, pp. 26-39 (14 pages)
Ocean temperature is an important physical variable in marine ecosystems, and ocean temperature prediction is an important research objective in ocean-related fields. Currently, one of the commonly used approaches to ocean temperature prediction is data-driven, but research on this approach is mostly limited to the sea surface, with few studies on the prediction of internal ocean temperature. Existing graph neural network-based methods usually use predefined graphs or learned static graphs, which cannot capture the dynamic associations among the data. In this study, we propose a novel dynamic spatiotemporal graph neural network (DSTGN) to predict three-dimensional ocean temperature (3D-OT), which combines static graph learning and dynamic graph learning to automatically mine two unknown dependencies between sequences based on the original 3D-OT data without prior knowledge. Temporal and spatial dependencies in the time series are then captured using temporal and graph convolutions. We also integrate dynamic graph learning, static graph learning, graph convolution, and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series gridded data. We conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis, with data covering the vertical variation of temperature from the sea surface to 1000 m below the sea surface. We compared five mainstream models that are commonly used for ocean temperature prediction, and the results show that the proposed method achieves the best prediction results at all prediction scales.
Keywords: dynamic associations, three-dimensional ocean temperature prediction, graph neural network, time series gridded data
Automated Machine Learning Algorithm Using Recurrent Neural Network to Perform Long-Term Time Series Forecasting
18
Authors: Ying Su, Morgan C. Wang, Shuai Liu. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 3, pp. 3529-3549 (21 pages)
Long-term time series forecasting stands as a crucial research domain within the realm of automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and necessitates substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging long short-term memory, a variant of recurrent neural networks, harnessing memory cells and gating mechanisms to facilitate long-term time series prediction, since the forecasting accuracy of particular neural networks and traditional models can degrade significantly on long-term tasks. Our research demonstrates that this approach outperforms the traditional autoregressive integrated moving average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model for time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we evaluated both ARIMA and the proposed method on simulated time series data and real data over short and long horizons. Furthermore, our findings indicate its superiority over alternative network architectures, including fully connected neural networks, convolutional neural networks, and non-pooling convolutional neural networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting, and it can be widely applied to various domains, particularly in business and finance.
Keywords: automated machine learning, autoregressive integrated moving average, neural networks, time series analysis
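For context, the ARIMA baseline that the article compares its LSTM-based AutoML approach against can be run in a few lines with statsmodels; the simulated series, the (p, d, q) order, and the 20-step horizon below are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(300)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)  # trend + seasonality + noise

train, test = series[:280], series[280:]
fit = ARIMA(train, order=(2, 1, 2)).fit()        # assumed order; normally chosen via diagnostics
forecast = fit.forecast(steps=len(test))         # multi-step-ahead forecast

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print("ARIMA RMSE over a 20-step horizon:", round(float(rmse), 3))
```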
Cross-Dimension Attentive Feature Fusion Network for Unsupervised Time-Series Anomaly Detection
19
Authors: Rui Wang, Yao Zhou, Guangchun Luo, Peng Chen, Dezhong Peng. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, Issue 6, pp. 3011-3027 (17 pages)
Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
Keywords: time series anomaly detection, unsupervised feature learning, feature fusion
CNN-LSTM based incremental attention mechanism enabled phase-space reconstruction for chaotic time series prediction
20
Authors: Xiao-Qian Lu, Jun Tian, Qiang Liao, Zheng-Wu Xu, Lu Gan. 《Journal of Electronic Science and Technology》 (EI, CAS, CSCD), 2024, Issue 2, pp. 77-90 (14 pages)
To improve the prediction accuracy of chaotic time series and reconstruct a more reasonable phase space structure for the prediction network, we propose a convolutional neural network-long short-term memory (CNN-LSTM) prediction model based on an incremental attention mechanism. Firstly, a traversal search is conducted through the traversal layer over the finite parameters of the phase space. Then, an incremental attention layer is utilized for parameter judgment based on the dimension weight criteria (DWC). The phase space parameters that best meet the DWC are selected and fed into the input layer. Finally, the constructed CNN-LSTM network extracts spatio-temporal features and provides the final prediction results. The model is verified using the Logistic, Lorenz, and sunspot chaotic time series, and its performance is compared along two dimensions: prediction accuracy and network phase space structure. Additionally, the CNN-LSTM network based on incremental attention is compared with long short-term memory (LSTM), convolutional neural network (CNN), recurrent neural network (RNN), and support vector regression (SVR) models in terms of prediction accuracy. The experimental results indicate that the proposed composite network model possesses enhanced capability in extracting temporal features and achieves higher prediction accuracy. The phase space parameter estimation algorithm is also compared with the Cao, false nearest neighbor, and C-C methods, three typical methods for determining chaotic phase space parameters. The experiments reveal that the parameter estimation algorithm based on the incremental attention mechanism yields better prediction accuracy than the traditional phase space reconstruction methods across five networks, including CNN-LSTM, LSTM, CNN, RNN, and SVR.
Keywords: chaotic time series, incremental attention mechanism, phase-space reconstruction
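The phase-space reconstruction step that the incremental attention layer selects parameters for is, at its core, classical delay-coordinate embedding; a minimal NumPy sketch on the Logistic map follows, with the delay and embedding dimension fixed by assumption rather than chosen by the paper's dimension weight criteria.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Return the (N - (m-1)*tau) x m matrix of delay vectors of series x."""
    n_vectors = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n_vectors] for i in range(m)])

# Logistic map as a toy chaotic series
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])

phase_space = delay_embed(x, m=3, tau=2)      # assumed embedding dimension and delay
targets = x[(3 - 1) * 2 + 1:]                 # value following each delay vector
X_in, y_out = phase_space[:-1], targets       # aligned inputs/targets for a predictor
print(X_in.shape, y_out.shape)                # (1995, 3) (1995,)
```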