The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process, which directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in a converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated from the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including an extreme learning machine, a back-propagation neural network, and a DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m³ is 96.67%; the determination coefficient (R²) and root mean square error (RMSE) are 0.6984 and 150.03 m³, respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R² and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
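The final step of the hybrid model divides the predicted oxygen consumption volume by each heat's oxygen supply rate. A minimal sketch of that arithmetic (the function name, units, and example figures are ours, not the paper's):

```python
def blowing_time_min(oxygen_volume_m3: float, supply_rate_m3_per_min: float) -> float:
    """Blowing time = predicted oxygen consumption volume / oxygen supply rate."""
    if supply_rate_m3_per_min <= 0:
        raise ValueError("supply rate must be positive")
    return oxygen_volume_m3 / supply_rate_m3_per_min

# e.g. a heat predicted to consume 9000 m^3 of oxygen at 600 m^3/min
print(blowing_time_min(9000.0, 600.0))  # -> 15.0 (minutes)
```

In the paper, oxygen supply intensity is a per-heat operating parameter; here it is treated simply as a constant volumetric rate for illustration.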
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have achieved mediocre results, deep learning has largely contributed to the elevation of prediction performance. An up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components, and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models such as convolutional neural networks (CNN), which are capable of extracting spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, attention mechanisms (AM), and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to the relevant elements of a generalized framework comprising input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models such as CNN-LSTM and CNN-LSTM-AM have generally been reported as superior in performance to standalone models such as the CNN-only model. Some remaining challenges are discussed, including non-friendliness for finance domain experts, delayed prediction, domain knowledge negligence, lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare, and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
With the rapid development of mechanical equipment, the mechanical health monitoring field has entered the era of big data. However, the method of manual feature extraction has the disadvantages of low efficiency and poor accuracy when handling big data. In this study, the research object was the asynchronous motor in a drivetrain diagnostics simulator system. The vibration signals of motors with different faults were collected. The raw signal was pretreated using the short-time Fourier transform (STFT) to obtain the corresponding time-frequency map. Then, the features of the time-frequency map were adaptively extracted using a convolutional neural network (CNN). The effects of the pretreatment method and the network hyperparameters on diagnostic accuracy were investigated experimentally. The experimental results showed that the influence of the preprocessing method is small and that the batch size is the main factor affecting accuracy and training efficiency. Feature visualization showed that, in the case of big data, the extracted CNN features can represent complex mapping relationships between signal and health status, and can also overcome the prior-knowledge and engineering-experience requirements of feature extraction used by traditional diagnosis methods. This paper proposes a new method, based on STFT and CNN, which can complete motor fault diagnosis tasks more intelligently and accurately.
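The preprocessing step — turning a raw vibration signal into a time-frequency map for the CNN — can be sketched with a plain-NumPy short-time Fourier transform. The window length, hop size, and test signal below are illustrative choices, not the paper's settings:

```python
import numpy as np

def stft_magnitude(signal, frame_len=256, hop=128):
    """Magnitude STFT: Hann-windowed frames -> real FFT -> |.|.
    Returns an array of shape (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 1024
t = np.arange(4 * fs) / fs
sig = np.sin(2 * np.pi * 50 * t)   # a 50 Hz vibration component
tf_map = stft_magnitude(sig)       # the time-frequency map fed to the CNN
print(tf_map.shape)
```

The resulting 2-D magnitude map is what a CNN would consume as an image-like input.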
Multivariate time-series forecasting (MTSF) plays an important role in diverse real-world applications. To achieve better accuracy in MTSF, the time-series patterns in each variable and the interrelationship patterns between variables should be considered together. Recently, graph neural networks (GNNs) have gained much attention, as they can learn both kinds of patterns using a graph. For accurate forecasting with a GNN, a well-defined graph is required. However, existing GNNs have limitations in reflecting the spectral similarity and time delay between nodes, and they consider all nodes with the same weight when constructing the graph. In this paper, we propose a novel graph construction method that addresses the aforementioned limitations. We first calculate a Fourier-transform-based spectral similarity and then update this similarity to reflect the time delay. Then, we weight each node according to its number of edge connections to obtain the final graph, which is used to train the GNN model. Through experiments on various datasets, we demonstrate that the proposed method enhances the performance of GNN-based MTSF models, and the proposed forecasting model achieves up to 18.1% predictive performance improvement over the state-of-the-art model.
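The first step — a Fourier-transform-based spectral similarity between variables — can be sketched as follows. This is a simplified illustration: it computes cosine similarity of FFT magnitude spectra and keeps each node's strongest edges, but omits the paper's time-delay adjustment and its exact node-weighting scheme:

```python
import numpy as np

def spectral_similarity_graph(X, top_k=2):
    """X: (n_vars, T). Cosine similarity of FFT magnitude spectra;
    keep each node's top_k strongest outgoing edges (illustrative only)."""
    spec = np.abs(np.fft.rfft(X, axis=1))
    spec = spec / (np.linalg.norm(spec, axis=1, keepdims=True) + 1e-12)
    S = spec @ spec.T
    np.fill_diagonal(S, 0.0)
    A = np.zeros_like(S)
    for i in range(len(S)):
        for j in np.argsort(S[i])[-top_k:]:
            A[i, j] = S[i, j]
    return A

T = 512
t = np.arange(T)
x1 = np.sin(2 * np.pi * 5 * t / T)
x2 = np.sin(2 * np.pi * 5 * t / T + 1.0)  # same spectrum, time-shifted
x3 = np.sin(2 * np.pi * 20 * t / T)       # different frequency
A = spectral_similarity_graph(np.stack([x1, x2, x3]))
print(A[0, 1] > A[0, 2])  # the shifted copy is spectrally closer
```

Because magnitude spectra ignore phase, the time-shifted series x2 remains highly similar to x1 — which is exactly why the paper then adjusts the similarity to account for the time delay.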
Sensors produce a large amount of multivariate time series data to record the states of Internet of Things (IoT) systems. Multivariate time series timestamp anomaly detection (TSAD) can identify the timestamps of attacks and malfunctions. However, it is necessary to determine which sensor or indicator is abnormal to facilitate a more detailed diagnosis, a process referred to as fine-grained anomaly detection (FGAD). Although FGAD can be addressed by extending TSAD methods, existing works do not provide a quantitative evaluation, and the performance is unknown. Therefore, to tackle the FGAD problem, this paper first verifies that TSAD methods achieve low performance when applied directly to the FGAD task because of the excessive fusion of features and the neglect of dynamic changes in the relationships between indicators. Accordingly, this paper proposes a multivariate time series fine-grained anomaly detection (MFGAD) framework. To avoid excessive fusion of features, MFGAD constructs two sub-models to independently identify the abnormal timestamps and abnormal indicators instead of using a single model, and then combines the two kinds of abnormal results to detect fine-grained anomalies. Based on this framework, an algorithm combining a Graph Attention Neural Network (GAT) and an Attention Convolutional Long Short-Term Memory (A-ConvLSTM) is proposed, in which the GAT learns temporal features of multiple indicators to detect abnormal timestamps and the A-ConvLSTM captures the dynamic relationships between indicators to identify abnormal indicators. Extensive simulations on a real-world dataset demonstrate that the proposed algorithm achieves a higher F1 score and hit rate than extensions of existing TSAD methods, owing to the two independent sub-models for timestamp and indicator detection.
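The combination step of such a two-sub-model framework can be sketched as follows. The thresholds and the simple AND-style combination rule are our simplification; the paper's actual fusion of the two sub-models' outputs may differ:

```python
import numpy as np

def combine_fine_grained(ts_scores, ind_scores, ts_thr=0.5, ind_thr=0.5):
    """ts_scores: (T,) timestamp anomaly scores from one sub-model;
    ind_scores: (T, D) per-indicator scores from the other sub-model.
    Flag (t, d) as a fine-grained anomaly only when both the timestamp
    and the indicator are abnormal."""
    ts_flag = (ts_scores > ts_thr)[:, None]  # (T, 1), broadcast over indicators
    ind_flag = ind_scores > ind_thr          # (T, D)
    return ts_flag & ind_flag

ts = np.array([0.1, 0.9, 0.2])                      # only t=1 is abnormal
ind = np.array([[0.8, 0.1], [0.7, 0.2], [0.9, 0.9]])
print(combine_fine_grained(ts, ind))
```

Keeping the two score matrices separate until this final step is what prevents the "excessive fusion of features" the paper identifies in direct TSAD extensions.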
Although previous studies have made clear leaps in learning latent dynamics from high-dimensional representations, the accuracy and inference time of long-term model prediction still need to be improved. In this study, a deep convolutional network based on the Koopman operator (CKNet) is proposed to model non-linear systems with pixel-level measurements for long-term prediction. CKNet adopts an autoencoder network architecture, consisting of an encoder that generates latent states and a linear dynamical model (i.e., the Koopman operator) that evolves in the latent state space spanned by the encoder. The decoder is used to recover images from latent states. According to a multi-step-ahead prediction loss function, the system matrices approximating the Koopman operator are trained synchronously with the autoencoder in a mini-batch manner. In this way, gradients can be transmitted synchronously to both the system matrices and the autoencoder, helping the encoder self-adaptively tune the latent state space during training, and the resulting model is time-invariant in the latent space. Therefore, the proposed CKNet has the advantages of short inference time and high accuracy for long-term prediction. Experiments are performed on OpenAI Gym and MuJoCo environments, including two and four non-linear forced dynamical systems with continuous action spaces. The experimental results show that CKNet has strong long-term prediction capabilities with sufficient precision.
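The core idea — a time-invariant linear operator that evolves latent states, so long-horizon prediction is just repeated matrix multiplication — can be sketched on a synthetic latent trajectory. CKNet additionally learns the encoder/decoder from pixels and trains everything jointly with a multi-step loss; here we only fit the linear operator by least squares (a DMD-style stand-in):

```python
import numpy as np

A_true = np.array([[0.9, 0.1],
                   [-0.1, 0.9]])      # a stable linear latent dynamics

# synthetic latent trajectory z_{t+1} = A_true z_t
Z = [np.array([1.0, 0.0])]
for _ in range(60):
    Z.append(A_true @ Z[-1])
Z = np.array(Z)

# least-squares fit of the operator from state pairs (z_t, z_{t+1})
A_fit, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
A_fit = A_fit.T

# long-term prediction: repeatedly apply the time-invariant operator
z = Z[0].copy()
for _ in range(30):
    z = A_fit @ z
print(np.allclose(z, Z[30], atol=1e-6))
```

Because the operator is time-invariant, a 30-step rollout costs only 30 small matrix-vector products — the source of CKNet's short inference time.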
In this paper, a modeling algorithm is introduced and discussed that transforms the adaptive fuzzy inference neural network into an on-line, real-time algorithm, combines it with a conventional system identification method, and applies them to the separate identification of nonlinear multi-variable systems.
Leg amputations are common in accidents and diseases. Present active bionic legs use electromyography (EMG) signals in the lower limbs (just before the location of the amputation) to generate active control signals. Active control with EMGs greatly limits the potential of these bionic legs because most accidents and diseases cause severe damage to the tissues/muscles from which EMG signals originate. As an alternative, the present research attempted to use an upper limb swing pattern to control an active bionic leg. A deep neural network (DNN) model is implemented to recognize the patterns in the upper limb swing, and it is used to translate these signals into the active control input of a bionic leg. The proposed approach can generate a full gait cycle within 1082 milliseconds, which is comparable to the normal 1070-millisecond gait cycle of a person without any disability.
Considering the recent developments in deep learning, it has become increasingly important to verify which methods are valid for the prediction of multivariate time-series data. In this study, we propose a novel method of time-series prediction employing multiple deep learners combined with a Bayesian network, where the training data are divided into clusters using K-means clustering. The optimal number of clusters for K-means is determined using the Bayesian information criterion. A separate deep learner is trained for each cluster. We used three types of deep learners: deep neural network (DNN), recurrent neural network (RNN), and long short-term memory (LSTM). A naive Bayes classifier is used to determine which deep learner is in charge of predicting a particular time series. Our proposed method is applied to a set of financial time-series data, the Nikkei Average Stock price, to assess the accuracy of the predictions made. Compared with the conventional method of employing a single deep learner on all the data, our proposed method demonstrates improved F-value and accuracy.
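The routing idea — partition the training data with K-means so each cluster gets its own learner — can be sketched with a tiny NumPy 2-means. The BIC-based choice of the cluster count and the naive Bayes router are omitted, and the data here are synthetic:

```python
import numpy as np

def kmeans2(X, iters=20):
    """Tiny 2-means with deterministic init: the first point, then the
    point farthest from it."""
    centers = np.array([X[0], X[np.argmax(((X - X[0]) ** 2).sum(axis=1))]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

rng = np.random.default_rng(1)
# two regimes of window features; each cluster would train its own deep learner
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
labels = kmeans2(X)
print(set(labels[:20].tolist()), set(labels[20:].tolist()))
```

At prediction time, a classifier (naive Bayes in the paper) assigns a new window to one of these clusters, and the corresponding learner makes the forecast.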
There are many techniques using sensors and wearable devices for detecting and monitoring patients with Parkinson's disease (PD). A recent development is the utilization of human interaction with computer keyboards for analyzing and identifying motor signs in the early stages of the disease. Current designs for the classification of time series of computer-key hold durations recorded from healthy control and PD subjects require considerably long time series. To avoid discomfort to participants in performing long physical tasks for data recording, this paper introduces the use of fuzzy recurrence plots of very short time series as input data for machine training and classification with long short-term memory (LSTM) neural networks. As an original approach that both significantly increases the feature dimensions and captures the deterministic dynamical-systems properties of very short time series for information processing by an LSTM layer architecture, fuzzy recurrence plots provide promising results and outperform the direct input of the time series for the classification of healthy control and early PD subjects.
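The transformation from a short scalar time series to a high-dimensional image-like input can be sketched as a graded recurrence matrix over time-delay embeddings. Note this Gaussian-similarity matrix is a simplification: the paper's fuzzy recurrence plots are built from fuzzy c-means memberships, not from the kernel used here:

```python
import numpy as np

def soft_recurrence_plot(x, dim=3, tau=1, sigma=1.0):
    """Time-delay embed x, then return a graded similarity matrix
    R[i, j] = exp(-||v_i - v_j||^2 / sigma). A stand-in for the paper's
    fuzzy recurrence plot (which uses fuzzy c-means memberships)."""
    n = len(x) - (dim - 1) * tau
    V = np.stack([x[i : i + n] for i in range(0, dim * tau, tau)], axis=1)
    d2 = ((V[:, None] - V[None]) ** 2).sum(-1)
    return np.exp(-d2 / sigma)

x = np.sin(np.linspace(0, 4 * np.pi, 40))   # a very short series
R = soft_recurrence_plot(x)
print(R.shape, bool(np.allclose(R, R.T)), float(R[0, 0]))
```

A 40-sample series becomes a 38x38 matrix — the dimensionality increase that lets an LSTM (or CNN) layer operate on rich structure even for very short recordings.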
Water is a vital resource. It supports a multitude of industries, civilizations, and agriculture. However, climatic conditions impact water availability, particularly in desert areas where the temperature is high and rain is scarce. Therefore, it is crucial to forecast water demand in order to supply sectors on both regular and emergency days. This study aims to develop an accurate model to forecast daily water demand under the impact of climatic conditions. This forecasting task is a multivariate time series problem because it uses both the historical data of water demand and climatic conditions to forecast the future. Focusing on data collected for Jeddah city, Saudi Arabia, in the period between 2004 and 2018, we develop a hybrid approach that uses Artificial Neural Networks (ANN) for forecasting and the Particle Swarm Optimization (PSO) algorithm for tuning the ANN's hyperparameters. Based on the Root Mean Square Error (RMSE) metric, results show that PSO-ANN is an accurate model for multivariate time series forecasting. Also, the first day is the most difficult day to predict (highest error rate), while the second day is the easiest (lowest error rate). Finally, correlation analysis shows that the dew point is the climatic factor most affecting water demand.
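The PSO component can be sketched as a minimal swarm minimizing an objective. In the paper that objective would be the ANN's validation RMSE as a function of its hyperparameters; here a simple quadratic stands in, and all coefficients (inertia 0.7, cognitive/social weights 1.5) are common textbook defaults, not the paper's settings:

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm: inertia + cognitive + social velocity update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# stand-in for "validation RMSE as a function of two hyperparameters"
best, best_val = pso(lambda p: ((p - np.array([1.0, 2.0])) ** 2).sum())
print(best_val < 1e-4)
```

Each particle position would encode a hyperparameter vector (e.g., hidden units, learning rate), and evaluating `f` would mean training the ANN and measuring RMSE on a validation set.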
Stack Overflow provides a platform for developers to seek suitable solutions by asking questions and receiving answers on various topics. However, many questions are not answered quickly enough. Since questioners are eager to know the specific time interval within which a question will be answered, it becomes an important task for Stack Overflow to provide feedback on the expected answer time of a question. To address this issue, we propose a model for predicting the answer time of questions, named the Predicting Answer Time (PAT) model, which consists of two parts: a feature acquisition and fusion model, and a deep neural network model. The framework uses a variety of features mined from questions in Stack Overflow, including the question description, question title, question tags, the creation time of the question, and other temporal features. These features are fused and fed into the deep neural network to predict the answer time of the question. As a case study, post data from Stack Overflow are used to assess the model. We use traditional regression algorithms as baselines, such as Linear Regression, K-Nearest Neighbors Regression, Support Vector Regression, Multilayer Perceptron Regression, and Random Forest Regression. Experimental results show that the PAT model can predict the answer time of questions more accurately than traditional regression algorithms, reducing the error of the predicted answer time by nearly 10 hours.
Multivariate time series with missing values are common in a wide range of applications, including energy data. Existing imputation methods often fail to focus on the temporal dynamics and the cross-dimensional correlation simultaneously. In this paper, we propose a two-step method based on an attention model to impute missing values in multivariate energy time series. First, the underlying distribution of the missing values in the data is learned. This information is then used to train an attention-based imputation model. By learning the distribution prior to the imputation process, the model can respond flexibly to the specific characteristics of the underlying data. The developed model is applied to European energy data obtained from the European Network of Transmission System Operators for Electricity. Using different evaluation metrics and benchmarks, the conducted experiments show that the proposed model is preferable to the benchmarks and is able to accurately impute missing values.
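The attention mechanism's role in imputation can be illustrated with a drastically simplified NumPy sketch: each missing entry is filled with a softmax-weighted average of the same dimension at observed timesteps, with weights given by timestep similarity on the dimensions observed at both. The learned distribution model and trained attention parameters of the paper are replaced here by a fixed similarity kernel:

```python
import numpy as np

def attention_impute(X, mask, temp=1.0):
    """X: (T, D) with arbitrary values at missing entries; mask: (T, D), 1=observed.
    Fill each missing (t, d) by attending over timesteps where d is observed."""
    T, D = X.shape
    out = X.copy()
    for t in range(T):
        for d in range(D):
            if mask[t, d]:
                continue
            obs_t = np.where(mask[:, d] == 1)[0]
            scores = []
            for s in obs_t:
                both = (mask[t] == 1) & (mask[s] == 1)  # dims observed at both steps
                diff = X[t, both] - X[s, both]
                scores.append(-np.sum(diff ** 2) / temp)
            scores = np.array(scores)
            w = np.exp(scores - scores.max())
            w /= w.sum()                                 # softmax attention weights
            out[t, d] = w @ X[obs_t, d]
    return out

X = np.array([[1.0, 10.0], [2.0, 0.0], [3.0, 30.0]])
mask = np.array([[1, 1], [1, 0], [1, 1]])
print(attention_impute(X, mask)[1, 1])  # equidistant neighbours -> 20.0
```

Timestep 1 is equally similar to timesteps 0 and 2 on the observed dimension, so the missing value is the even mixture of 10 and 30.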
Time series classification is related to many different domains, such as health informatics, finance, and bioinformatics. Due to its broad applications, researchers have developed many algorithms for this kind of task, e.g., multivariate time series classification. Among the classification algorithms, k-nearest neighbor (k-NN) classification (particularly 1-NN) combined with dynamic time warping (DTW) achieves state-of-the-art performance. The deficiency is that when the data set grows large, the time consumption of 1-NN with DTW becomes very expensive. In contrast to 1-NN with DTW, feature-based classification methods are more efficient but less effective, since their performance usually depends on the quality of hand-crafted features. In this paper, we aim to improve the performance of traditional feature-based approaches through feature learning techniques. Specifically, we propose a novel deep learning framework, multi-channels deep convolutional neural networks (MC-DCNN), for multivariate time series classification. This model first learns features from individual univariate time series in each channel and combines the information from all channels as the feature representation at the final layer. Then, the learnt features are fed into a multilayer perceptron (MLP) for classification. Finally, extensive experiments on real-world data sets show that our model is not only more efficient than the state of the art but also competitive in accuracy. This study implies that feature learning is worth investigating for the problem of time series classification.
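The DTW distance that the 1-NN baseline relies on is a classic O(nm) dynamic program, which is exactly why it becomes expensive on large data sets. A minimal sketch with a squared local cost (cost function and test series are illustrative):

```python
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two 1-D series,
    with squared difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, shifted in time
print(dtw(a, b))          # 0.0 -- warping absorbs the time shift
print(dtw(a, a + 1.0) > 0.0)
```

Every 1-NN query must run this DP against every training series, so query time grows linearly with the training set size and quadratically with series length — the scaling problem MC-DCNN's learned features avoid.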
Funding (converter oxygen blowing time study): financially supported by the National Natural Science Foundation of China (Nos. 51974023 and 52374321) and by the State Key Laboratory of Advanced Metallurgy, University of Science and Technology Beijing, China (No. 41620007).
Funding (financial time series prediction review): funded by the Natural Science Foundation of Fujian Province, China (Grant No. 2022J05291) and Xiamen Scientific Research Funding for Overseas Chinese Scholars.
Funding (motor fault diagnosis study): supported by the National Natural Science Foundation of China (Grant Nos. 51405241, 51505234, 51575283).
Funding (GNN-based MTSF study): supported by the Energy Cloud R&D Program (grant number: 2019M3F2A1073184) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT.
Funding (MFGAD study): supported in part by the National Natural Science Foundation of China under Grant 62272062; the Researchers Supporting Project (No. RSP2023R102), King Saud University, Riyadh, Saudi Arabia; the Open Research Fund of the Hunan Provincial Key Laboratory of Network Investigational Technology under Grant 2018WLZC003; the National Science Foundation of Hunan Province under Grant 2020JJ2029; the Hunan Provincial Key Research and Development Program under Grant 2022GK2019; the Science Fund for Creative Research Groups of Hunan Province under Grant 2020JJ1006; the Scientific Research Fund of the Hunan Provincial Transportation Department under Grant 202143; and the Open Fund of the Key Laboratory of Safety Control of Bridge Engineering, Ministry of Education (Changsha University of Science and Technology) under Grant 21KB07.
Funding (CKNet study): National Natural Science Foundation of China (Grant Nos. 61825305, 62003361, U21A20518) and the China Postdoctoral Science Foundation (Grant No. 47680).
Abstract: This paper introduces and discusses a modeling algorithm developed by transferring the adaptive fuzzy inference neural network into an on-line, real-time algorithm, combining it with a conventional system identification method, and applying the result to the separate identification of nonlinear multi-variable systems.
Abstract: Leg amputations are common in accidents and diseases. Present active bionic legs use electromyography (EMG) signals in the lower limbs (just before the location of the amputation) to generate active control signals. Active control with EMG greatly limits the potential of these bionic legs because most accidents and diseases cause severe damage to the tissues/muscles from which the EMG signals originate. As an alternative, the present research attempted to use an upper limb swing pattern to control an active bionic leg. A deep neural network (DNN) model is implemented to recognize the patterns in the upper limb swing and translate these signals into the active control input of a bionic leg. The proposed approach can generate a full gait cycle within 1082 milliseconds, comparable to the normal 1070-millisecond gait cycle of a person without any disability.
Abstract: Considering recent developments in deep learning, it has become increasingly important to verify which methods are valid for the prediction of multivariate time-series data. In this study, we propose a novel method of time-series prediction employing multiple deep learners combined with a Bayesian network, where the training data are divided into clusters using K-means clustering. The optimal number of clusters for K-means is determined using the Bayesian information criterion. A separate deep learner is trained for each cluster. We used three types of deep learners: deep neural network (DNN), recurrent neural network (RNN), and long short-term memory (LSTM). A naive Bayes classifier is used to determine which deep learner is in charge of predicting a particular time series. The proposed method is applied to a set of financial time-series data, the Nikkei Average Stock price, to assess the accuracy of the predictions made. Compared with the conventional approach of employing a single deep learner on all the data, the proposed method demonstrates improved F-value and accuracy.
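The clustering step can be sketched in plain Python: a 1-D K-means whose number of clusters is chosen by a BIC-style criterion (here penalising roughly three parameters per cluster). This is an illustrative simplification; the study clusters multivariate training data and then assigns one deep learner per cluster, which is omitted here.

```python
import math
import random

# Illustrative sketch of "K-means with the cluster count chosen by BIC".
# Deliberately 1-D and tiny; the deep learners trained per cluster are
# out of scope for this sketch.

def kmeans_1d(xs, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: (x - centers[i]) ** 2)].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    rss = sum(min((x - c) ** 2 for c in centers) for x in xs)
    return centers, rss

def best_k_by_bic(xs, k_max=4):
    n = len(xs)
    best = None
    for k in range(1, k_max + 1):
        _, rss = kmeans_1d(xs, k)
        # ~3 free parameters per cluster (mean, spread, weight).
        bic = n * math.log(rss / n + 1e-12) + 3 * k * math.log(n)
        if best is None or bic < best[1]:
            best = (k, bic)
    return best[0]

# Two well-separated groups -> the criterion should prefer k = 2.
data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
print(best_k_by_bic(data))  # 2
```

Once the clusters are fixed, one learner per cluster is trained and a classifier routes each new series to the learner responsible for its cluster, as described above.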
Abstract: There are many techniques using sensors and wearable devices for detecting and monitoring patients with Parkinson's disease (PD). A recent development is the utilization of human interaction with computer keyboards for analyzing and identifying motor signs in the early stages of the disease. Current designs for the classification of time series of computer-key hold durations recorded from healthy control and PD subjects require considerably long time series. In an attempt to avoid discomfort to participants performing long physical tasks for data recording, this paper introduces the use of fuzzy recurrence plots of very short time series as input data for machine training and classification with long short-term memory (LSTM) neural networks. As an original approach that both significantly increases the feature dimensionality and provides the deterministic dynamical-systems properties of very short time series for the information processing carried out by an LSTM layer architecture, fuzzy recurrence plots yield promising results and outperform the direct input of the raw time series for the classification of healthy control and early PD subjects.
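A recurrence plot turns a short 1-D series into a 2-D, image-like array of pairwise similarities; the fuzzy variant replaces the hard 0/1 threshold with graded memberships in [0, 1]. The Gaussian similarity below is a simplified stand-in for the fuzzy c-means memberships used in the actual method, but it shows how even a very short series yields a high-dimensional 2-D input for an LSTM.

```python
import math

# Simplified fuzzy-recurrence-plot sketch: pairwise Gaussian similarity in
# place of a binary threshold. A length-n series becomes an n x n matrix,
# greatly increasing the feature dimensionality of very short recordings.

def fuzzy_recurrence_plot(series, sigma=1.0):
    n = len(series)
    return [[math.exp(-((series[i] - series[j]) ** 2) / (2 * sigma ** 2))
             for j in range(n)] for i in range(n)]

frp = fuzzy_recurrence_plot([0.0, 1.0, 0.1])
print(frp[0][0])            # 1.0 (a state always recurs with itself)
print(round(frp[0][2], 3))  # 0.995 (close states -> membership near 1)
```

The matrix is symmetric with a unit diagonal, so nearby key-hold durations show up as bright off-diagonal entries that the downstream classifier can exploit.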
Abstract: Water is a vital resource. It supports a multitude of industries, civilizations, and agriculture. However, climatic conditions impact water availability, particularly in desert areas where the temperature is high and rain is scarce. It is therefore crucial to forecast water demand so that it can be provided to sectors on both regular and emergency days. This study aims to develop an accurate model to forecast daily water demand under the impact of climatic conditions. This forecasting task is a multivariate time series problem because it uses both the historical data of water demand and climatic conditions to forecast the future. Focusing on data collected for Jeddah city, Saudi Arabia, between 2004 and 2018, we develop a hybrid approach that uses Artificial Neural Networks (ANN) for forecasting and the Particle Swarm Optimization algorithm (PSO) for tuning the ANN's hyperparameters. Based on the Root Mean Square Error (RMSE) metric, results show that PSO-ANN is an accurate model for multivariate time series forecasting. Also, the first day is the most difficult day to predict (highest error rate), while the second day is the easiest (lowest error rate). Finally, correlation analysis shows that the dew point is the climatic factor that most affects water demand.
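A minimal PSO loop illustrates the tuning step. In the study, each particle's fitness would be the validation RMSE of an ANN trained with that particle's hyperparameters; training real ANNs is out of scope for a sketch, so a simple quadratic stands in for the RMSE surface, and the inertia and acceleration coefficients are typical textbook values, not the ones used in the paper.

```python
import random

# Illustrative particle swarm optimisation over one hyperparameter.
def pso_minimize(fitness, lo, hi, n_particles=10, iters=50, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)                    # best position seen per particle
    gbest = min(pbest, key=fitness)      # best position seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]                       # inertia
                      + 1.5 * r1 * (pbest[i] - pos[i])   # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))     # social pull
            pos[i] += vel[i]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=fitness)
    return gbest

def rmse(x):
    # Stand-in for "validation RMSE as a function of a hyperparameter",
    # with its minimum at x = 3.0.
    return (x - 3.0) ** 2 + 0.5

best = pso_minimize(rmse, 0.0, 10.0)
print(best)  # converges near 3.0
```

PSO needs no gradients of the fitness, which is what makes it a convenient wrapper around an expensive, non-differentiable train-and-validate loop.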
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61902050, 61602077 and 61672122; the China Postdoctoral Science Foundation under Grant No. 2020M670736; the Fundamental Research Funds for the Central Universities of China under Grant Nos. 3132019355 and 2020cxxmss14; and the High Education Science and Technology Planning Program of Shandong Provincial Education Department of China under Grant Nos. J18KA340 and J18KA385.
Abstract: Stack Overflow provides a platform for developers to seek suitable solutions by asking questions and receiving answers on various topics. However, many questions are not answered quickly enough. Since questioners are eager to know the specific time interval within which a question can be answered, it becomes an important task for Stack Overflow to feed back an expected answer time for each question. To address this issue, we propose a model for predicting the answer time of questions, named the Predicting Answer Time (PAT) model, which consists of two parts: a feature acquisition and fusion model, and a deep neural network model. The framework uses a variety of features mined from questions in Stack Overflow, including the question description, question title, question tags, the creation time of the question, and other temporal features. These features are fused and fed into the deep neural network to predict the answer time of the question. As a case study, post data from Stack Overflow are used to assess the model. We use traditional regression algorithms as baselines, including Linear Regression, K-Nearest Neighbors Regression, Support Vector Regression, Multilayer Perceptron Regression, and Random Forest Regression. Experimental results show that the PAT model predicts the answer time of questions more accurately than the traditional regression algorithms, reducing the error of the predicted answer time by nearly 10 hours.
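A hypothetical sketch of the feature-acquisition step: one post is turned into a flat numeric vector that a downstream regressor (the DNN in the PAT model) could consume. The function name and the specific feature choices are assumptions for illustration, not the paper's actual feature set.

```python
from datetime import datetime

# Hypothetical feature extraction for one Stack Overflow question:
# text-length features plus temporal features from the creation time.
def question_features(title, body, tags, created_at):
    return [
        len(title.split()),     # title length in words
        len(body.split()),      # description length in words
        len(tags),              # number of tags
        created_at.hour,        # hour of day (answer activity varies by hour)
        created_at.weekday(),   # day of week, Monday = 0 (weekend effects)
    ]

feats = question_features(
    title="How to parse ISO dates in Python?",
    body="I have strings like 2021-03-05 and want datetime objects.",
    tags=["python", "datetime"],
    created_at=datetime(2021, 3, 5, 14, 30),
)
print(feats)  # [7, 9, 2, 14, 4]
```

Fusing heterogeneous signals like these into one vector is what lets a single DNN weigh textual and temporal evidence jointly when predicting the answer time.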
Abstract: Multivariate time series with missing values are common in a wide range of applications, including energy data. Existing imputation methods often fail to focus on the temporal dynamics and the cross-dimensional correlation simultaneously. In this paper, we propose a two-step method based on an attention model to impute missing values in multivariate energy time series. First, the underlying distribution of the missing values in the data is learned. This information is then used to train an attention-based imputation model. By learning the distribution prior to the imputation process, the model can respond flexibly to the specific characteristics of the underlying data. The developed model is applied to European energy data obtained from the European Network of Transmission System Operators for Electricity. Using different evaluation metrics and benchmarks, the conducted experiments show that the proposed model is preferable to the benchmarks and is able to accurately impute missing values.
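The two-step structure can be caricatured in a few lines: step 1 learns a per-series statistic from the observed values (a crude stand-in for learning the distribution of the missing values), and step 2 fills each gap, here by averaging the nearest observed neighbours, with the learned statistic as a fallback at the edges. The paper's attention model is of course far richer than this.

```python
# Toy two-step imputation for one series, with None marking missing values.
def impute(series):
    obs = [x for x in series if x is not None]
    mean = sum(obs) / len(obs)          # step 1: learned statistic
    out = list(series)
    for i, x in enumerate(out):         # step 2: fill each gap
        if x is not None:
            continue
        left = next((out[j] for j in range(i - 1, -1, -1)
                     if out[j] is not None), None)
        right = next((out[j] for j in range(i + 1, len(out))
                      if out[j] is not None), None)
        if left is not None and right is not None:
            out[i] = (left + right) / 2     # between two observations
        elif left is not None or right is not None:
            out[i] = left if left is not None else right  # at an edge
        else:
            out[i] = mean                   # no observations nearby
    return out

print(impute([1.0, None, 3.0, None]))  # [1.0, 2.0, 3.0, 3.0]
```

An attention model generalises the "nearest neighbours" here to a learned weighting over all timestamps and dimensions, which is how it captures temporal dynamics and cross-dimensional correlation at once.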
Abstract: Time series classification is relevant to many different domains, such as health informatics, finance, and bioinformatics. Due to its broad applications, researchers have developed many algorithms for this kind of task, e.g., multivariate time series classification. Among the classification algorithms, k-nearest neighbor (k-NN) classification (particularly 1-NN) combined with dynamic time warping (DTW) achieves state-of-the-art performance. The deficiency is that when the data set grows large, the time consumption of 1-NN with DTW becomes very expensive. In contrast to 1-NN with DTW, feature-based classification methods are more efficient but less effective, since their performance usually depends on the quality of hand-crafted features. In this paper, we aim to improve the performance of traditional feature-based approaches through feature learning techniques. Specifically, we propose a novel deep learning framework, multi-channels deep convolutional neural networks (MC-DCNN), for multivariate time series classification. This model first learns features from individual univariate time series in each channel, and combines information from all channels as the feature representation at the final layer. Then, the learnt features are fed into a multilayer perceptron (MLP) for classification. Finally, extensive experiments on real-world data sets show that our model is not only more efficient than the state of the art but also competitive in accuracy. This study implies that feature learning is worth investigating for the problem of time series classification.
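The per-channel design of MC-DCNN can be sketched as follows: each channel (univariate series) gets its own 1-D convolution, and the resulting feature maps are concatenated into one vector for the downstream MLP (omitted here). The kernels below are fixed for illustration; in the real model they are learned.

```python
# Per-channel 1-D convolution followed by concatenation, in the spirit of
# MC-DCNN's "learn per channel, combine at the final layer" design.

def conv1d(series, kernel):
    """Valid (no-padding) 1-D convolution of a series with a kernel."""
    k = len(kernel)
    return [sum(series[i + j] * kernel[j] for j in range(k))
            for i in range(len(series) - k + 1)]

def multi_channel_features(channels, kernels):
    feats = []
    for series, kernel in zip(channels, kernels):  # one filter per channel
        feats.extend(conv1d(series, kernel))       # concatenate at the end
    return feats

channels = [[1.0, 2.0, 3.0, 4.0],   # channel 1
            [0.0, 1.0, 0.0, 1.0]]   # channel 2
kernels = [[0.5, 0.5],              # moving-average filter
           [1.0, -1.0]]             # difference filter
print(multi_channel_features(channels, kernels))
# [1.5, 2.5, 3.5, -1.0, 1.0, -1.0]
```

Learning the filters per channel, rather than hand-crafting features, is precisely the feature-learning step the abstract argues is worth investigating.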