Journal Articles
371 articles found
1. Hybrid model for BOF oxygen blowing time prediction based on oxygen balance mechanism and deep neural network
Authors: Xin Shao, Qing Liu, Zicheng Xin, Jiangshan Zhang, Tao Zhou, Shaoshuai Li. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CSCD), 2024, No. 1, pp. 106-117.
The amount of oxygen blown into the converter is one of the key parameters for controlling the converter blowing process, and it directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in the converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated from the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including an extreme learning machine, a back-propagation neural network, and a DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m³ is 96.67%; the determination coefficient (R²) and root mean square error (RMSE) are 0.6984 and 150.03 m³, respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R² and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
Keywords: basic oxygen furnace, oxygen consumption, oxygen blowing time, oxygen balance mechanism, deep neural network, hybrid model
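The abstract's three-step procedure (predict the oxygen volume twice, fuse the two estimates, then divide by supply intensity) can be sketched as below. The simple weighted blend and all numeric values are illustrative assumptions; the paper's actual fusion scheme is not specified here.

```python
# Sketch of the three-step hybrid prediction; weights and values are hypothetical.
def fuse_oxygen_volume(v_obm, v_dnn, w=0.5):
    """Step 2: integrate the mechanism (OBM) and data-driven (DNN) estimates."""
    return w * v_obm + (1 - w) * v_dnn

def blowing_time(volume_m3, supply_intensity_m3_per_min):
    """Step 3: blowing time = oxygen consumption volume / oxygen supply intensity."""
    return volume_m3 / supply_intensity_m3_per_min

v = fuse_oxygen_volume(8200.0, 8400.0, w=0.6)   # Step 1 outputs assumed given
t = blowing_time(v, 560.0)                       # minutes for this heat
```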
2. Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models
Authors: Weisi Chen, Walayat Hussain, Francesco Cauteruccio, Xu Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 187-224.
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have achieved mediocre results, deep learning has largely contributed to the elevation of prediction performance. An up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components, and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models such as convolutional neural networks (CNN), which are capable of extracting spatial dependencies within data, and long short-term memory (LSTM), which is designed for handling temporal dependencies, as well as hybrid models integrating CNN, LSTM, the attention mechanism (AM), and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprising input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models such as CNN-LSTM and CNN-LSTM-AM have in general been reported to outperform standalone models such as the CNN-only model. Some remaining challenges are discussed, including unfriendliness to finance domain experts, delayed prediction, neglect of domain knowledge, lack of standards, and the inability to make real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare, and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Keywords: financial time series prediction, convolutional neural network, long short-term memory, deep learning, attention mechanism, finance
3. Motor Fault Diagnosis Based on Short-Time Fourier Transform and Convolutional Neural Network (Cited: 38)
Authors: Li-Hua Wang, Xiao-Ping Zhao, Jia-Xin Wu, Yang-Yang Xie, Yong-Hong Zhang. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2017, No. 6, pp. 1357-1368.
With the rapid development of mechanical equipment, the mechanical health monitoring field has entered the era of big data. However, manual feature extraction suffers from low efficiency and poor accuracy when handling big data. In this study, the research object was the asynchronous motor in a drivetrain diagnostics simulator system. The vibration signals of motors with different faults were collected. The raw signal was pretreated using the short-time Fourier transform (STFT) to obtain the corresponding time-frequency map. Features of the time-frequency map were then adaptively extracted by a convolutional neural network (CNN). The effects of the pretreatment method and of the network hyperparameters on diagnostic accuracy were investigated experimentally. The experimental results showed that the influence of the preprocessing method is small and that the batch size is the main factor affecting accuracy and training efficiency. Feature visualization showed that, in the case of big data, the extracted CNN features can represent complex mapping relationships between the signal and the health status, and can also remove the requirement for prior knowledge and engineering experience in feature extraction that traditional diagnosis methods rely on. This paper proposes a new method, based on STFT and CNN, which can complete motor fault diagnosis tasks more intelligently and accurately.
Keywords: big data, deep learning, short-time Fourier transform, convolutional neural network, motor
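The STFT pretreatment step described above can be sketched with SciPy. The signal, sampling rate, and STFT parameters are illustrative stand-ins, not values from the paper; the resulting magnitude map is what would be fed to the CNN as an image.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                        # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
# Toy vibration signal: a 50 Hz base tone plus a weaker 120 Hz "fault" tone.
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Time-frequency map; |Zxx| plays the role of the CNN input image.
f, seg_t, Zxx = stft(x, fs=fs, nperseg=128)
tf_map = np.abs(Zxx)               # shape: (frequency bins, time segments)
```

With `nperseg=128` the map has 65 frequency bins, and the strongest row sits near the 50 Hz component.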
4. Graph Construction Method for GNN-Based Multivariate Time-Series Forecasting
Authors: Wonyong Chung, Jaeuk Moon, Dongjun Kim, Eenjun Hwang. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5817-5836.
Multivariate time-series forecasting (MTSF) plays an important role in diverse real-world applications. To achieve better accuracy in MTSF, the time-series patterns within each variable and the interrelationship patterns between variables should be considered together. Recently, graph neural networks (GNNs) have gained much attention, as they can learn both kinds of patterns using a graph. Accurate forecasting with a GNN requires a well-defined graph. However, existing GNNs have limitations in reflecting the spectral similarity and time delay between nodes, and they consider all nodes with the same weight when constructing the graph. In this paper, we propose a novel graph construction method that resolves these limitations. We first calculate a Fourier transform-based spectral similarity and then update this similarity to reflect the time delay. We then weight each node according to its number of edge connections to obtain the final graph and use it to train the GNN model. Through experiments on various datasets, we demonstrate that the proposed method enhances the performance of GNN-based MTSF models, and the proposed forecasting model achieves up to an 18.1% predictive performance improvement over the state-of-the-art model.
Keywords: deep learning, graph neural network, multivariate time-series forecasting
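A minimal sketch of Fourier transform-based spectral similarity between variables, using cosine similarity of FFT amplitude spectra. This is an assumed similarity measure for illustration; the paper's exact formula and its time-delay update are not reproduced here.

```python
import numpy as np

def spectral_similarity(series):
    """Pairwise cosine similarity of FFT amplitude spectra.

    series: (n_vars, n_steps) array; returns an (n_vars, n_vars) matrix
    that could be thresholded into a graph adjacency."""
    amp = np.abs(np.fft.rfft(series, axis=1))
    amp = amp / np.linalg.norm(amp, axis=1, keepdims=True)
    return amp @ amp.T

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 256)
series = np.stack([
    np.sin(t),                  # node 0
    np.sin(t + 0.5),            # node 1: same spectrum, shifted in phase
    rng.normal(size=t.size),    # node 2: broadband noise
])
S = spectral_similarity(series)
# Nodes 0 and 1 share a spectrum, so S[0, 1] exceeds S[0, 2].
```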
5. Fine-Grained Multivariate Time Series Anomaly Detection in IoT
Authors: Shiming He, Meng Guo, Bo Yang, Osama Alfarraj, Amr Tolba, Pradip Kumar Sharma, Xi'ai Yan. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5027-5047.
Sensors produce large amounts of multivariate time series data that record the states of Internet of Things (IoT) systems. Multivariate time series timestamp anomaly detection (TSAD) can identify the timestamps of attacks and malfunctions. However, it is also necessary to determine which sensor or indicator is abnormal in order to facilitate a more detailed diagnosis, a process referred to as fine-grained anomaly detection (FGAD). Although FGAD can be built as an extension of TSAD methods, existing works do not provide a quantitative evaluation, and their performance is unknown. Therefore, to tackle the FGAD problem, this paper first verifies that TSAD methods achieve low performance when applied directly to the FGAD task, because of the excessive fusion of features and the neglect of dynamic changes in the relationships between indicators. Accordingly, this paper proposes a multivariate time series fine-grained anomaly detection (MFGAD) framework. To avoid excessive feature fusion, MFGAD constructs two sub-models that independently identify the abnormal timestamps and the abnormal indicators, instead of a single model, and then combines the two kinds of results to detect fine-grained anomalies. Based on this framework, an algorithm combining a graph attention neural network (GAT) and an attention convolutional long short-term memory network (A-ConvLSTM) is proposed, in which the GAT learns the temporal features of multiple indicators to detect abnormal timestamps, and the A-ConvLSTM captures the dynamic relationships between indicators to identify abnormal indicators. Extensive simulations on a real-world dataset demonstrate that the proposed algorithm achieves a higher F1 score and hit rate than extensions of existing TSAD methods, thanks to its two independent sub-models for timestamp and indicator detection.
Keywords: multivariate time series, graph attention neural network, fine-grained anomaly detection
6. A deep Koopman operator-based modelling approach for long-term prediction of dynamics with pixel-level measurements
Authors: Yongqian Xiao, Zixin Tang, Xin Xu, Xinglong Zhang, Yifei Shi. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 1, pp. 178-196.
Although previous studies have made clear leaps in learning latent dynamics from high-dimensional representations, the accuracy and inference time of long-term model prediction still need to be improved. In this study, a deep convolutional network based on the Koopman operator (CKNet) is proposed to model nonlinear systems with pixel-level measurements for long-term prediction. CKNet adopts an autoencoder architecture, consisting of an encoder that generates latent states and a linear dynamical model (i.e., the Koopman operator) that evolves in the latent state space spanned by the encoder. The decoder recovers images from latent states. Using a multi-step-ahead prediction loss function, the system matrices approximating the Koopman operator are trained synchronously with the autoencoder in mini-batches. In this way, gradients are transmitted to both the system matrices and the autoencoder, helping the encoder adaptively tune the latent state space during training, and the resulting model is time-invariant in the latent space. The proposed CKNet therefore offers short inference time and high accuracy for long-term prediction. Experiments were performed on OpenAI Gym and MuJoCo environments, including two and four nonlinear forced dynamical systems with continuous action spaces. The experimental results show that CKNet has strong long-term prediction capabilities with sufficient precision.
Keywords: deep neural networks, image motion analysis, image sequences, sequential estimation
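The core idea of evolving latent states with a linear operator can be illustrated with a least-squares fit on a hand-made latent trajectory (a DMD-style sketch under assumed dynamics, not the paper's jointly trained CKNet, whose operator is learned with the autoencoder).

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])   # assumed latent dynamics

# Roll out a latent trajectory z_{t+1} = A_true @ z_t.
z = [rng.normal(size=2)]
for _ in range(60):
    z.append(A_true @ z[-1])
Z = np.array(z).T                                # shape (2, 61)

# Fit the linear (Koopman-style) operator by least squares: A ≈ Z1 @ pinv(Z0).
Z0, Z1 = Z[:, :-1], Z[:, 1:]
A_fit = Z1 @ np.linalg.pinv(Z0)                  # recovers A_true on clean data
```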
7. On-Line Real-Time Realization and Application of an Adaptive Fuzzy Inference Neural Network
Authors: Han Jianguo, Guo Junchao, Zhao Qian. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2000, No. 1, pp. 67-74.
This paper introduces and discusses a modeling algorithm developed by transferring the adaptive fuzzy inference neural network into an on-line real-time algorithm, combining it with conventional system identification methods, and applying them to the separate identification of nonlinear multivariable systems.
Keywords: fuzzy control, identification (control systems), inference engines, learning algorithms, mathematical models, multivariable control systems, neural networks, nonlinear control systems, real-time systems
8. A Deep Neural Network Model for Upper Limb Swing Pattern to Control an Active Bionic Leg (Cited: 1)
Authors: Thisara Pathirana, Hiroshan Gunawardane, Nimali T. Medagedara. Instrumentation, 2021, No. 1, pp. 51-60.
Leg amputations are common consequences of accidents and diseases. Present active bionic legs use electromyography (EMG) signals from the lower limb (just above the amputation site) to generate active control signals. Active control with EMG greatly limits the potential of these bionic legs, because most accidents and diseases cause severe damage to the tissues and muscles from which the EMG signals originate. As an alternative, the present research attempts to use the upper limb swing pattern to control an active bionic leg. A deep neural network (DNN) model is implemented to recognize patterns in the upper limb swing, and it is used to translate these signals into the active control input of a bionic leg. The proposed approach can generate a full gait cycle within 1082 milliseconds, comparable to the normal gait cycle of 1070 milliseconds for a person without any disability.
Keywords: active bionic leg, deep neural network, human gait cycle time
9. Time Series Forecasting with Multiple Deep Learners: Selection from a Bayesian Network
Authors: Shusuke Kobayashi, Susumu Shirayama. Journal of Data Analysis and Information Processing, 2017, No. 3, pp. 115-130.
Considering recent developments in deep learning, it has become increasingly important to verify which methods are valid for the prediction of multivariate time-series data. In this study, we propose a novel method of time-series prediction that employs multiple deep learners combined with a Bayesian network, where the training data are divided into clusters using K-means clustering. The best number of clusters for K-means is chosen with the Bayesian information criterion. A separate deep learner is trained for each cluster. We used three types of deep learners: a deep neural network (DNN), a recurrent neural network (RNN), and long short-term memory (LSTM). A naive Bayes classifier is used to determine which deep learner is in charge of predicting a particular time series. The proposed method is applied to a set of financial time-series data, the Nikkei average stock price, to assess the accuracy of the predictions made. Compared with the conventional approach of training a single deep learner on all the data, the proposed method is demonstrated to improve the F-value and accuracy.
Keywords: time-series data, deep learning, Bayesian network, recurrent neural network, long short-term memory, ensemble learning, K-means
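The cluster-then-route pipeline above can be sketched with scikit-learn. Linear regressors stand in for the DNN/RNN/LSTM learners, the toy regimes replace real financial data, and a Gaussian naive Bayes classifier does the routing; everything else is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB

def windows(series, w=8):
    """Turn a series into (lag-window, next-value) training pairs."""
    X = np.lib.stride_tricks.sliding_window_view(series, w + 1)
    return X[:, :w], X[:, w]

# Two toy regimes instead of real stock data.
s1 = np.sin(np.linspace(0, 20, 400))
s2 = 0.01 * np.arange(400)
X, y = map(np.concatenate, zip(windows(s1), windows(s2)))

# Step 1: cluster training windows (K chosen by BIC in the paper; fixed here).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 2: train one learner per cluster (linear stand-ins for deep learners).
learners = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
            for c in range(2)}

# Step 3: a naive Bayes classifier routes each new window to a learner.
router = GaussianNB().fit(X, km.labels_)
x_new = X[:5]
preds = [learners[c].predict(x.reshape(1, -1))[0]
         for c, x in zip(router.predict(x_new), x_new)]
```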
10. Classification of Short Time Series in Early Parkinson's Disease With Deep Learning of Fuzzy Recurrence Plots (Cited: 8)
Authors: Tuan D. Pham, Karin Wardell, Anders Eklund, Goran Salerud. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, No. 6, pp. 1306-1317.
There are many techniques that use sensors and wearable devices for detecting and monitoring patients with Parkinson's disease (PD). A recent development is the utilization of human interaction with computer keyboards for analyzing and identifying motor signs in the early stages of the disease. Current designs for the classification of time series of computer-key hold durations, recorded from healthy control and PD subjects, require considerably long time series. In an attempt to avoid discomfort to participants performing long physical tasks for data recording, this paper introduces the use of fuzzy recurrence plots of very short time series as input data for machine training and classification with long short-term memory (LSTM) neural networks. As an original approach that both significantly increases the feature dimensions and captures the deterministic dynamical-systems properties of very short time series for the information processing carried out by an LSTM layer architecture, fuzzy recurrence plots provide promising results and outperform direct input of the time series for the classification of healthy control and early PD subjects.
Keywords: deep learning, early Parkinson's disease (PD), fuzzy recurrence plots, long short-term memory (LSTM) neural networks, pattern classification, short time series
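A fuzzy recurrence plot can be sketched as follows: embed the short series, run fuzzy c-means on the embedded points, and compose memberships into a grayscale recurrence matrix. The max-min composition and all parameters here are one common simplification assumed for illustration; the published FRP construction may differ in detail.

```python
import numpy as np

def fuzzy_recurrence_plot(x, dim=3, n_clusters=3, m=2.0, iters=50, seed=0):
    """Grayscale FRP sketch: embed, fuzzy c-means, then max-min composition."""
    rng = np.random.default_rng(seed)
    # Time-delay embedding of the (short) series: (n_points, dim).
    pts = np.lib.stride_tricks.sliding_window_view(x, dim)
    # Minimal fuzzy c-means: alternate membership and center updates.
    centers = pts[rng.choice(len(pts), n_clusters, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)            # memberships in [0, 1]
        centers = (U.T ** m @ pts) / (U.T ** m).sum(axis=1, keepdims=True)
    # Fuzzy recurrence matrix: R[i, j] = max_k min(U[i, k], U[j, k]).
    return np.minimum(U[:, None, :], U[None, :, :]).max(axis=2)

R = fuzzy_recurrence_plot(np.sin(np.linspace(0, 6 * np.pi, 60)))
# R is a symmetric 58x58 grayscale image; stacks of such images (or their
# rows) are what the LSTM would consume instead of the raw series.
```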
11. A Hybrid Neural Network-Based Approach for Forecasting Water Demand
Authors: Al-Batool Al-Ghamdi, Souad Kamel, Mashael Khayyat. Computers, Materials & Continua (SCIE, EI), 2022, No. 10, pp. 1365-1383.
Water is a vital resource. It supports a multitude of industries, civilizations, and agriculture. However, climatic conditions affect water availability, particularly in desert areas where temperatures are high and rain is scarce. It is therefore crucial to forecast water demand so that it can be supplied to sectors on both regular and emergency days. This study aims to develop an accurate model for forecasting daily water demand under the impact of climatic conditions. This is a multivariate time series forecasting problem, because it uses both the historical water demand data and the climatic conditions to forecast the future. Focusing on data collected for the city of Jeddah, Saudi Arabia, between 2004 and 2018, we develop a hybrid approach that uses artificial neural networks (ANN) for forecasting and the particle swarm optimization (PSO) algorithm for tuning the ANN's hyperparameters. Based on the root mean square error (RMSE) metric, the results show that PSO-ANN is an accurate model for multivariate time series forecasting. The first day ahead is the most difficult to predict (highest error rate), while the second day is the easiest (lowest error rate). Finally, correlation analysis shows that the dew point is the climatic factor that most affects water demand.
Keywords: water demand forecasting, artificial neural network, multivariate time series, climatic conditions, particle swarm optimization, hybrid algorithm
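Hyperparameter tuning with PSO can be sketched as below. The objective is a smooth stand-in for "validation RMSE as a function of (learning rate, hidden size)" rather than an actual ANN training run, and the swarm coefficients are conventional illustrative values, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))    # positions
    v = np.zeros_like(x)                                    # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in validation-error surface with optimum at (0.01, 32).
err = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 32.0) ** 2
best, best_err = pso_minimize(err, bounds=[(0.0, 0.1), (1.0, 64.0)])
```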
12. Multi-Feature Fusion Based Structural Deep Neural Network for Predicting Answer Time on Stack Overflow
Authors: 郭世凯, 王思文, 李辉, 范玉龙, 刘亚清, 张斌. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, No. 3, pp. 582-599.
Stack Overflow provides a platform for developers to seek suitable solutions by asking questions and receiving answers on various topics. However, many questions are not answered quickly enough. Since questioners are eager to know the specific interval within which a question will be answered, it becomes an important task for Stack Overflow to feed back the expected answer time for a question. To address this issue, we propose a model for predicting the answer time of questions, named the Predicting Answer Time (PAT) model, which consists of two parts: a feature acquisition and fusion model, and a deep neural network model. The framework uses a variety of features mined from Stack Overflow questions, including the question description, question title, question tags, the creation time of the question, and other temporal features. These features are fused and fed into the deep neural network to predict the answer time of the question. As a case study, post data from Stack Overflow are used to assess the model. We use traditional regression algorithms as baselines, such as linear regression, k-nearest neighbors regression, support vector regression, multilayer perceptron regression, and random forest regression. Experimental results show that the PAT model can predict the answer time of questions more accurately than the traditional regression algorithms, reducing the error of the predicted answer time by nearly 10 hours.
Keywords: answer time, structural deep neural network, Stack Overflow, feature acquisition, feature fusion
13. Multivariate time series imputation for energy data using neural networks
Authors: Christopher Bulte, Max Kleinebrahm, Hasan Umitcan Yilmaz, Juan Gomez-Romero. Energy and AI, 2023, No. 3, pp. 25-35.
Multivariate time series with missing values are common in a wide range of applications, including energy data. Existing imputation methods often fail to address the temporal dynamics and the cross-dimensional correlation simultaneously. In this paper we propose a two-step method based on an attention model to impute missing values in multivariate energy time series. First, the underlying distribution of the missing values in the data is learned. This information is then used to train an attention-based imputation model. By learning the distribution prior to the imputation process, the model can respond flexibly to the specific characteristics of the underlying data. The developed model is applied to European energy data obtained from the European Network of Transmission System Operators for Electricity. Using different evaluation metrics and benchmarks, the conducted experiments show that the proposed model is preferable to the benchmarks and is able to accurately impute missing values.
Keywords: missing value estimation, multivariate time series, neural networks, attention model, energy data
14. Exploiting multi-channels deep convolutional neural networks for multivariate time series classification (Cited: 21)
Authors: Yi Zheng, Qi Liu, Enhong Chen, Yong Ge, J. Leon Zhao. Frontiers of Computer Science (SCIE, EI, CSCD), 2016, No. 1, pp. 96-112.
Time series classification is relevant to many different domains, such as health informatics, finance, and bioinformatics. Due to its broad applications, researchers have developed many algorithms for this kind of task, e.g., multivariate time series classification. Among the classification algorithms, k-nearest neighbor (k-NN) classification (particularly 1-NN) combined with dynamic time warping (DTW) achieves state-of-the-art performance. Its deficiency is that when the dataset grows large, the time consumption of 1-NN with DTW becomes very expensive. In contrast to 1-NN with DTW, feature-based classification methods are more efficient but less effective, since their performance usually depends on the quality of hand-crafted features. In this paper, we aim to improve the performance of traditional feature-based approaches through feature learning techniques. Specifically, we propose a novel deep learning framework, multi-channels deep convolutional neural networks (MC-DCNN), for multivariate time series classification. This model first learns features from each individual univariate time series in its own channel, and combines the information from all channels as the feature representation at the final layer. The learnt features are then fed into a multilayer perceptron (MLP) for classification. Finally, extensive experiments on real-world datasets show that our model is not only more efficient than the state of the art but also competitive in accuracy. This study implies that feature learning is worth investigating for the problem of time series classification.
Keywords: convolutional neural networks, time series classification, feature learning, deep learning
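The per-channel feature learning idea can be sketched in NumPy: each univariate channel passes through its own 1-D convolutions and pooling, and the channel outputs are concatenated into the representation fed to the MLP. Filter weights and the toy input are random stand-ins; the real MC-DCNN learns its filters by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_features(x, filters, pool=4):
    """One MC-DCNN channel: 1-D convolutions, ReLU, then max pooling."""
    feats = []
    for w in filters:
        conv = np.convolve(x, w, mode="valid")
        conv = np.maximum(conv, 0.0)                      # ReLU
        trimmed = conv[: len(conv) // pool * pool]
        feats.append(trimmed.reshape(-1, pool).max(axis=1))
    return np.concatenate(feats)

# A 3-variable multivariate series of 64 steps; 2 random filters per channel.
series = rng.normal(size=(3, 64))
filters = [rng.normal(size=(2, 5)) for _ in range(3)]

# Each channel is processed independently, then features are concatenated
# to form the representation that an MLP classifier would consume.
features = np.concatenate(
    [channel_features(series[c], filters[c]) for c in range(3)])
```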
15. Time-series prediction with a bias-pruned stacked autoencoder echo state network
Authors: 刘丽丽, 刘玉玺, 王河山. Computer Engineering and Design (计算机工程与设计), PKU Core, 2024, No. 1, pp. 212-219.
Because most models achieve low accuracy on time-series prediction, a deep model called the biased-drop-weight pruned stacked autoencoder echo state network (BD-AE-SGESN) is proposed to improve prediction precision. With a stacked ESN as the multi-layer deep framework, a generative autoencoder (AE) algorithm is proposed to generate the input weights of each layer, and the biased-drop (BD) algorithm prunes the input weights according to their activation values. Comparative experiments show that the model effectively improves prediction accuracy; on three different datasets, it yields smaller prediction errors and higher stability than other models.
Keywords: multivariate time series, echo state network, prediction model, pruning, autoencoder, deep network, weight optimization
16. Magnetotelluric time-series classification based on artificial neural networks
Authors: 杨凯, 刘诚, 贺景龙, 李含, 姚川. Geophysical and Geochemical Exploration (物探与化探), CAS, 2024, No. 2, pp. 498-507.
With the development of society, interference of all kinds has been intensifying, and high-quality magnetotelluric acquisition has become increasingly difficult. To improve data quality, researchers have proposed many denoising methods for different types of noise. Because magnetotelluric datasets are typically large, manually inspecting every record before denoising is impractical, so an efficient method for noise identification and classification is urgently needed. This paper therefore applies artificial neural networks to magnetotelluric time-series classification. To select the most suitable network model, four time-series types (simulated square-wave, power-frequency, and impulse noise, plus measured noise-free data) were used to train noise classifiers and to compare classification of measured data across the LSTM, FCN, ResNet, LSTM-FCN, and LSTM-ResNet models. The results show that FCN and LSTM-FCN perform relatively well for magnetotelluric time-series classification. The FCN model reaches a classification accuracy of up to 99.84% on measured data, averaging 9.6 s per epoch. LSTM-FCN achieves higher accuracy than FCN, with a best accuracy of nearly 100% on the measured dataset, but it averages 24.6 s per epoch and overfits more easily than FCN. Overall, LSTM-FCN provides higher classification accuracy when the data volume is small, whereas FCN is more suitable when the data volume is large and time cost must be considered. Finally, a magnetotelluric noise-processing system was built from the LSTM-FCN classification model and an LSTM denoising model, and it successfully processed magnetotelluric data containing different types of noise.
Keywords: magnetotellurics, time-series classification, artificial neural network, deep learning, noise
17. A traffic flow prediction model based on spectral-domain hypergraph convolutional networks
Authors: 尹宝才, 王竟成, 张勇, 胡永利, 孙艳丰. Journal of Beijing University of Technology (北京工业大学学报), CAS, CSCD, PKU Core, 2024, No. 2, pp. 152-164.
To address the difficulty that traditional graph structures have in modeling the implicit, complex correlations between nodes, traffic flow data are given a higher-order representation with hypergraphs, and a traffic flow prediction method based on spectral-domain hypergraph convolutional networks is proposed. First, dynamic hyperedges characterize relationships at the data-feature level, and spectral-domain hypergraph convolutions, including Fourier- and graph-wavelet-based hypergraph convolutions together with gated temporal convolutions, extract the spatio-temporal features of traffic flow at multiple scales, achieving end-to-end node-level traffic flow prediction. Prediction experiments were then conducted on real historical datasets from Beijing and from California, USA. Ablation experiments that isolate and reconstruct parts of the network model verified the effectiveness of the proposed method. Experimental results for whole-day and morning-peak traffic flow prediction show that the method achieves higher prediction accuracy than current mainstream traffic flow prediction models.
Keywords: graph neural networks, hypergraph theory, multivariate time-series prediction, deep learning, big data analysis, intelligent transportation
18. Fault detection for liquid rocket engines based on attention-mechanism recurrent neural networks
Authors: 张万旋, 卢哲, 张箭, 薛薇, 张楠. Missiles and Space Vehicles (导弹与航天运载技术(中英文)), CSCD, PKU Core, 2024, No. 2, pp. 25-31.
For the main-stage operating phase of liquid rocket engines, multivariate nonlinear time-series analysis is adopted, and a new time-series analysis tool, the convolutional dual-stage attention-based recurrent neural network (CDA-RNN), is proposed by extending the dual-stage attention-based recurrent neural network (DA-RNN), from which a fault trend prediction model is built. By performing autocorrelation analysis on the prediction residuals and defining a fault confidence probability, a quantitative criterion for fault detection is proposed. Validation with hot-fire test data containing incipient faults shows that the CDA-RNN model is robust for multi-parameter detection of weak faults during non-steady-state operation; the method is highly effective and has direct application value.
Keywords: multivariate time series, attention mechanism, recurrent neural network, convolutional neural network, autocorrelation analysis
19. Application of deep neural networks to classification and prediction with irregular time-series data in diffuse large B-cell lymphoma
Authors: 李琼, 张岩波, 余红梅, 周洁, 赵艳琳, 李雪玲, 王俊霞, 张高源, 乔宇, 赵志强, 罗艳虹. Chinese Journal of Health Statistics (中国卫生统计), CSCD, PKU Core, 2024, No. 2, pp. 190-193, 199.
Objective: To explore the classification performance of deep neural networks on irregular time-series data and to predict relapse in 362 patients with diffuse large B-cell lymphoma (DLBCL) treated at a hospital in Shanxi between 2014 and 2020. Methods: The records of 362 DLBCL patients who were diagnosed and achieved complete remission after treatment were collected retrospectively, and relapse within two years was predicted. LASSO regression was first used for variable selection; then an irregular time-series deep neural network model based on GRU-ODE-Bayes (gated recurrent unit-ordinary differential equation-Bayes) was built and compared with traditional models and other deep neural network models. Results: Among all the models in this study, the traditional models classified less well than the deep neural network models. The GRU-ODE-Bayes model performed best, with an AUC of 0.85, sensitivity of 0.84, specificity of 0.71, and G-means of 0.77. Conclusion: For irregular DLBCL time-series data, the GRU-ODE-Bayes model predicts patient relapse more accurately than the other models considered here, and it can inform personalized treatment and clinical decision-making.
Keywords: diffuse large B-cell lymphoma, irregular time-series data, relapse prediction, deep neural network
20. An indoor positioning method based on time-series fusion
Authors: 余莲杰, 李建峰, 徐睿, 张小飞. Journal of Data Acquisition and Processing (数据采集与处理), CSCD, PKU Core, 2024, No. 3, pp. 750-760.
A time-series fusion positioning algorithm based on the Pauta criterion, correlation coefficients, and convolutional neural networks (P-C-CNN) is proposed. The P-C-CNN method integrates data points from different nodes and different time series, exploiting the interdependence of temporal and spatial data to improve the accuracy and reliability of indoor positioning. First, the method uses a Pauta criterion-correlation coefficient (P-C) algorithm to remove outliers from angle-of-arrival (AOA) and received-signal-strength (RSS) data, improving the quality of the training data. Second, the algorithm samples the data at random intervals, which shortens model training time, better simulates the uncertainty of data selection in the online positioning stage, and reduces overfitting to the training data. Third, whereas traditional single-frame training cannot extract stable features from noisy signals, the proposed algorithm fuses randomly selected fixed-length multi-frame AOA-RSS segments from continuously collected time-series data and uses a convolutional neural network (CNN) for feature extraction, avoiding the large error fluctuations of single-frame positioning. Finally, extensive field tests verify the effectiveness of the proposed method. Experimental results show that, in a typical indoor environment, classification accuracy improves from 91.6% to 96.4% and positioning accuracy from 1.3 m to 0.3 m compared with fingerprint positioning algorithms that use only RSS data or only AOA information; compared with traditional model-based joint AOA-RSS positioning, the algorithm better handles interference such as multipath effects in real measurements, improving positioning accuracy from 1.1 m to 0.3 m.
Keywords: indoor positioning, deep learning, convolutional neural network, joint positioning, time series
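The Pauta criterion (the 3σ rule) used in the P-C preprocessing step can be sketched as a generic outlier filter on a toy RSS-like series. This is only the 3σ part; the paper's full P-C algorithm also uses correlation coefficients, which are not reproduced here.

```python
import numpy as np

def pauta_filter(x, k=3.0):
    """Keep samples within k standard deviations of the mean (3-sigma rule)."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return x[np.abs(x - mu) <= k * sigma]

# Toy RSS-like readings (dBm): 19 plausible values plus one gross outlier.
rng = np.random.default_rng(0)
rss = np.append(rng.normal(-52.0, 0.5, size=19), -95.0)
clean = pauta_filter(rss)    # the -95.0 dBm reading is discarded
```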