This paper focuses on wireless-powered communication systems, which are increasingly relevant in the Internet of Things (IoT) due to their ability to extend the operational lifetime of devices with limited energy. The main contribution of the paper is a novel approach to minimizing the secrecy outage probability (SOP) in these systems. Minimizing the SOP is crucial for maintaining the confidentiality and integrity of data, especially in situations where the transmission of sensitive data is critical. Our proposed method harnesses an improved biogeography-based optimization (IBBO) to effectively train a recurrent neural network (RNN). The proposed IBBO introduces an innovative migration model. The core advantage of IBBO lies in its ability to maintain a balance between exploration and exploitation. This is accomplished by integrating tactics such as advancing towards a random habitat, adopting the crossover operator from genetic algorithms (GA), and utilizing the global best (Gbest) operator from particle swarm optimization (PSO) into the IBBO framework. The IBBO demonstrates its efficacy by enabling the RNN to optimize the system parameters, resulting in a significant reduction of the outage probability. Through comprehensive simulations, we showcase the superiority of the IBBO-RNN over existing approaches, highlighting its capability to achieve remarkable gains in SOP minimization. This paper compares nine methods for predicting the outage probability in wireless-powered communications. The IBBO-RNN achieved the highest accuracy rate of 98.92%, a significant performance improvement; in contrast, the standard RNN recorded a lower accuracy rate of 91.27%. The IBBO-RNN maintains lower SOP values across the entire signal-to-noise ratio (SNR) range tested, suggesting that the method is highly effective at optimizing system parameters for improved secrecy even at low SNRs.
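The migration, crossover, and Gbest operators named in this abstract can be combined in many ways; the sketch below is a minimal, hypothetical Python rendering of one such update step on a toy sphere-minimization problem. The one-third operator split, step size, and crossover rate are illustrative assumptions, not the paper's actual IBBO migration model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ibbo_step(habitats, gbest, pc=0.7, step=0.1):
    """One hedged IBBO-style update: each habitat either moves toward a
    random habitat, undergoes GA-style crossover with it, or moves toward
    the global best (PSO-style Gbest operator)."""
    n, dim = habitats.shape
    new = habitats.copy()
    for i in range(n):
        j = int(rng.integers(n))            # random migration partner
        r = rng.random()
        if r < 1 / 3:                       # move toward a random habitat
            new[i] += step * (habitats[j] - habitats[i])
        elif r < 2 / 3:                     # GA crossover operator
            mask = rng.random(dim) < pc
            new[i][mask] = habitats[j][mask]
        else:                               # PSO Gbest operator
            new[i] += step * (gbest - habitats[i])
    return new

# toy usage: minimize a sphere function with 20 habitats in 5 dimensions
habitats = rng.normal(size=(20, 5))
for _ in range(200):
    fitness = np.sum(habitats ** 2, axis=1)
    gbest = habitats[np.argmin(fitness)].copy()
    habitats = ibbo_step(habitats, gbest)
print("best fitness:", np.sum(habitats ** 2, axis=1).min())
```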
An accurate prediction of earth pressure balance (EPB) shield moving performance is important to ensure safe tunnel excavation. A hybrid model is developed based on particle swarm optimization (PSO) and a gated recurrent unit (GRU) neural network, with PSO used to assign the optimal hyperparameters of the GRU network. There are four main steps: data collection and processing, hybrid model establishment, model performance evaluation, and correlation analysis. The developed model provides an alternative for tackling the time-series data of tunnel projects. In addition, a novel framework for model application is presented to provide guidelines in practice. A tunnel project is used to evaluate the performance of the proposed hybrid model. Results indicate that both geological and construction variables are significant to the model performance. Correlation analysis shows that the construction variables (main thrust and foam liquid volume) display the highest correlation with the cutterhead torque (CHT). This work provides a feasible and applicable way to estimate the performance of shield tunneling.
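As an illustration of how PSO can assign GRU hyperparameters, here is a minimal sketch of the outer search loop in Python. The objective is a toy stand-in (a real run would train the GRU and return its validation error), and the inertia and acceleration coefficients and the search ranges are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(hidden_size, lr):
    # Toy stand-in: a real objective would train the GRU with these
    # hyperparameters and return its validation error.
    return (hidden_size - 64.0) ** 2 / 1e4 + (np.log10(lr) + 3.0) ** 2

# Each particle encodes (hidden units, log10 of the learning rate).
n_particles, n_iters = 10, 30
pos = np.column_stack([rng.uniform(16, 256, n_particles),
                       rng.uniform(-4, -1, n_particles)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([objective(h, 10 ** l) for h, l in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([objective(h, 10 ** l) for h, l in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(f"best hidden size ≈ {gbest[0]:.0f}, best learning rate ≈ {10 ** gbest[1]:.1e}")
```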
The Gated Recurrent Unit (GRU) neural network has great potential in estimating and predicting a variable. In addition to radar reflectivity (Z), radar echo-top height (ET) is also a good indicator of rainfall rate (R). In this study, we propose a new method, GRU_Z-ET, which introduces Z and ET as two independent variables into the GRU neural network to conduct quantitative single-polarization radar precipitation estimation. The performance of GRU_Z-ET is compared with that of three other methods in three heavy rainfall cases in China during 2018, namely, the traditional Z-R relationship (Z = 300R^1.4), the optimal Z-R relationship (Z = 79R^1.68) and the GRU neural network with only Z as the independent input variable (GRU_Z). The results indicate that GRU_Z-ET performs the best, while the traditional Z-R relationship performs the worst; the performances of the other two methods are similar. To further evaluate the performance of GRU_Z-ET, 200 rainfall events with 21882 total samples during May-July of 2018 are used for statistical analysis. Results demonstrate that the spatial correlation coefficients, threat scores and probability of detection between the observed and estimated precipitation are the largest for GRU_Z-ET and the smallest for the traditional Z-R relationship, while the root mean square error is just the opposite. In addition, these statistics of GRU_Z are similar to those of the optimal Z-R relationship. It can therefore be concluded that GRU_Z-ET performs the best among the four methods for quantitative precipitation estimation.
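Both Z-R relationships have the form Z = aR^b, so the rain rate follows directly as R = (Z/a)^(1/b) once the reflectivity is converted from dBZ to linear units. A short numerical check in Python:

```python
import numpy as np

def rain_rate(dbz, a, b):
    """Invert Z = a * R**b for rain rate R (mm/h); Z in linear units (mm^6/m^3)."""
    z = 10.0 ** (dbz / 10.0)        # reflectivity factor from dBZ
    return (z / a) ** (1.0 / b)

dbz = np.array([30.0, 40.0, 50.0])
print("traditional Z = 300R^1.4  :", rain_rate(dbz, 300, 1.4))
print("optimal     Z = 79R^1.68  :", rain_rate(dbz, 79, 1.68))
```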
This study proposes a new real-time manufacturing process monitoring method to monitor and detect process shifts in manufacturing operations, since real-time production process monitoring is critical in today's smart manufacturing: the more robust the monitoring model, the more reliably a process can be kept under control. In the past, many researchers have developed real-time monitoring methods to detect process shifts early. However, these methods have limitations in detecting process shifts as quickly as possible and in handling various data volumes and varieties. In this paper, a robust monitoring model combining a Gated Recurrent Unit (GRU) and Random Forest (RF) with Real-Time Contrast (RTC), called GRU-RF-RTC, is proposed to detect process shifts rapidly. The effectiveness of the proposed GRU-RF-RTC model is first evaluated using multivariate normal and non-normal distribution datasets. Then, to prove the applicability of the proposed model in a real manufacturing setting, the model is evaluated using real-world normal and non-normal problems. The results demonstrate that the proposed GRU-RF-RTC outperforms other methods in detecting process shifts quickly, with the lowest average out-of-control run length (ARL1) in all synthetic and real-world problems under both normal and non-normal cases. The experimental results on real-world problems highlight the significance of the proposed GRU-RF-RTC model in modern manufacturing process monitoring applications. The results reveal that the proposed method improves the shift detection capability by 42.14% in normal and by 43.64% in gamma distribution problems.
As an integrated application of modern information technologies and artificial intelligence, Prognostic and Health Management (PHM) is important for machine health monitoring. Prediction of tool wear is one of the symbolic applications of PHM technology in modern manufacturing systems and industry. In this paper, a multi-scale Convolutional Gated Recurrent Unit network (MCGRU) is proposed to process raw sensory data for tool wear prediction. At the bottom of the MCGRU, six parallel and independent branches with different kernel sizes are designed to form a multi-scale convolutional neural network, which augments the adaptability to features of different time scales. These features of different scales extracted from the raw data are then fed into a deep Gated Recurrent Unit network to capture long-term dependencies and learn significant representations. At the top of the MCGRU, a fully connected layer and a regression layer are built for cutting tool wear prediction. Two case studies are performed to verify the capability and effectiveness of the proposed MCGRU network, and the results show that MCGRU outperforms several state-of-the-art baseline models.
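The multi-scale front end described here is straightforward to sketch. Below is a hedged PyTorch rendering of the idea; the kernel sizes, channel widths, and GRU depth are illustrative guesses, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class MCGRUSketch(nn.Module):
    """Sketch of the MCGRU idea: parallel 1-D conv branches with different
    kernel sizes, a GRU over the concatenated features, and a regression head."""
    def __init__(self, in_ch=1, branch_ch=8, kernels=(3, 5, 7, 9, 11, 13)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(in_ch, branch_ch, k, padding="same"), nn.ReLU())
            for k in kernels
        )
        self.gru = nn.GRU(branch_ch * len(kernels), 32, num_layers=2, batch_first=True)
        self.head = nn.Linear(32, 1)   # tool wear regression output

    def forward(self, x):              # x: (batch, in_ch, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (B, C, T)
        out, _ = self.gru(feats.transpose(1, 2))                 # (B, T, 32)
        return self.head(out[:, -1])                             # last time step

model = MCGRUSketch()
print(model(torch.randn(4, 1, 128)).shape)   # torch.Size([4, 1])
```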
Diabetes mellitus is a metabolic disease in which blood glucose levels rise as a result of pancreatic insulin production failure. If left untreated, it causes hyperglycemia and chronic multiorgan dysfunction, including blindness, renal failure, and cardiovascular disease. One of the essential checks that needs to be performed frequently in Type 1 Diabetes Mellitus is a blood test; this procedure involves extracting blood quite frequently, which causes subject discomfort and increases the possibility of infection when the procedure recurs often. Existing methods used for diabetes classification have low classification accuracy and suffer from the vanishing gradient problem. To overcome these issues, we propose a stacking ensemble learning-based convolutional gated recurrent neural network (CGRNN) metamodel algorithm. Our proposed method initially performs outlier detection to remove outlier data using the Gaussian distribution method, and the Box-Cox method is used to normalize the dataset. After outlier detection, the missing values are replaced by the data's mean rather than being eliminated. In the stacking ensemble base model, multiple machine learning algorithms, namely Naïve Bayes, bagging with random forest, and AdaBoost with a decision tree, are employed. The CGRNN metamodel uses two hidden layers, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), to calculate the weight matrix for diabetes prediction. Finally, the calculated weight matrix is passed to the softmax function in the output layer to produce the diabetes prediction results. Using the LSTM-based CGRNN, the mean square error (MSE) value is 0.016 and the obtained accuracy is 91.33%.
Knowledge of pore-water pressure (PWP) variation is fundamental for slope stability. A precise prediction of PWP is difficult due to complex physical mechanisms and in situ natural variability. To explore the applicability and advantages of recurrent neural networks (RNNs) for PWP prediction, three variants of RNNs, i.e., the standard RNN, long short-term memory (LSTM) and the gated recurrent unit (GRU), are adopted and compared with a traditional static artificial neural network (ANN), i.e., the multi-layer perceptron (MLP). Measurements of rainfall and PWP from representative piezometers on a fully instrumented natural slope in Hong Kong are used to establish the prediction models. The coefficient of determination (R^2) and root mean square error (RMSE) are used for model evaluation. The influence of the input time series length on model performance is investigated. The results reveal that the MLP can provide acceptable performance but is not robust: the uncertainty bounds of the RMSE of the MLP model range from 0.24 kPa to 1.12 kPa for the two selected piezometers. The standard RNN performs better, but its robustness is slightly affected when there are significant time lags between PWP changes and rainfall. The GRU and LSTM models provide more precise and robust predictions than the standard RNN. The effects of the hidden layer structure and the dropout technique are investigated. A single-layer GRU is accurate enough for PWP prediction, whereas a double-layer GRU brings extra time cost with little accuracy improvement. The dropout technique is essential for overfitting prevention and accuracy improvement.
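For time-series models like these, the measurement series is typically cut into fixed-length input windows and scored with RMSE and R^2. A minimal sketch of that preprocessing and the two metrics follows; the window length and the stand-in "prediction" are arbitrary choices, not the study's.

```python
import numpy as np

def make_windows(series, length):
    """Build (input window, next value) pairs from a 1-D time series."""
    x = np.stack([series[i:i + length] for i in range(len(series) - length)])
    y = series[length:]
    return x, y

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# synthetic stand-in for a PWP series; the window mean acts as a dummy model
pwp = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.default_rng(2).normal(size=500)
x, y = make_windows(pwp, length=24)
yhat = x.mean(axis=1)
print("windows:", x.shape, "RMSE:", round(rmse(y, yhat), 3), "R2:", round(r2(y, yhat), 3))
```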
The remaining useful life (RUL) of a system is generally predicted by utilising the data collected from sensors that continuously monitor different indicators. Recently, different deep learning (DL) techniques have been used for RUL prediction and have achieved great success. Because the data are often time-sequential, the recurrent neural network (RNN) has attracted significant interest due to its efficiency in dealing with such data. This paper systematically reviews RNN and its variants for RUL prediction, with a specific focus on understanding how different components (e.g., types of optimisers and activation functions) or parameters (e.g., sequence length, neuron quantities) affect their performance. After that, a case study using the well-studied NASA C-MAPSS dataset is presented to quantitatively evaluate the influence of various state-of-the-art RNN structures on RUL prediction performance. The results suggest that the variant methods usually perform better than the original RNN, and among them, bi-directional long short-term memory generally has the best performance in terms of stability, precision and accuracy. Certain model structures may fail to produce a valid RUL prediction because of the gradient vanishing or gradient exploding problem if the parameters are not chosen appropriately. It is concluded that parameter tuning is a crucial step in achieving optimal prediction performance.
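With run-to-failure data such as C-MAPSS, a common labeling choice in the literature (an assumption here, not necessarily what every reviewed method uses) is a piecewise-linear RUL target that caps early-life values, as sketched below.

```python
import numpy as np

def rul_labels(cycle_count, cap=125):
    """Piecewise-linear RUL target often used with C-MAPSS: the true remaining
    cycles, clipped at `cap` early in life (125-130 is a typical, but not
    mandated, choice)."""
    rul = np.arange(cycle_count - 1, -1, -1)   # cycles remaining at each step
    return np.minimum(rul, cap)

print(rul_labels(8, cap=5))   # [5 5 5 4 3 2 1 0]
```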
The turnout is an important piece of signal infrastructure equipment that directly affects the safety and efficiency of train operation. Based on an analysis of the power curve of the turnout, we first extract and select the time-domain and Haar wavelet transform characteristics of the curve. Then the correlation between the degradation state and the fault state is established using a clustering algorithm and the Pearson correlation coefficient. Finally, a convolutional neural network (CNN) and a gated recurrent unit (GRU) are used to establish a state prediction model of the turnout to realize failure prediction. The CNN can directly extract features from the original turnout data and reduce their dimension, which simplifies the prediction process. Due to its unique gate structure and time-series processing features, the GRU has certain advantages over traditional forecasting methods in terms of prediction accuracy and time. The experimental results show that the prediction accuracy can reach 94.2% when the feature matrix adopts a 40-dimensional input and is iterated 50 times.
Speech separation is an active research topic that plays an important role in numerous applications, such as speaker recognition, hearing prostheses, and autonomous robots. Many algorithms have been put forward to improve separation performance. However, speech separation in reverberant, noisy environments is still a challenging task. To address this, a novel speech separation algorithm using a gated recurrent unit (GRU) network based on a microphone array is proposed in this paper. The main aim of the proposed algorithm is to improve the separation performance and reduce the computational cost. The proposed algorithm extracts the sub-band steered response power-phase transform (SRP-PHAT) weighted by a gammatone filter as the speech separation feature due to its discriminative and robust spatial position information. Since the GRU network has the advantage of processing time-series data with faster training speed and fewer training parameters, the GRU model is adopted to process the separation features of several sequential frames in the same sub-band to estimate the ideal ratio mask (IRM). The proposed algorithm decomposes the mixture signals into time-frequency (TF) units using a gammatone filter bank in the frequency domain, and the target speech is reconstructed in the frequency domain by masking the mixture signal according to the estimated IRM. The operations of decomposing the mixture signal and reconstructing the target signal are completed in the frequency domain, which reduces the total computational cost. Experimental results demonstrate that the proposed algorithm realizes omnidirectional speech separation in noisy and reverberant environments, provides good performance in terms of speech quality and intelligibility, and generalizes well to reverberant conditions.
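The ideal ratio mask the GRU is trained to estimate has a standard closed form when the target and interference spectrograms are known. A minimal sketch follows; beta = 0.5 and the additive stand-in for the mixture magnitude are common simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def ideal_ratio_mask(target_mag, interf_mag, beta=0.5):
    """IRM(t, f) = (|S|^2 / (|S|^2 + |N|^2))**beta on magnitude spectrograms."""
    s2, n2 = target_mag ** 2, interf_mag ** 2
    return (s2 / (s2 + n2 + 1e-12)) ** beta

rng = np.random.default_rng(3)
S = np.abs(rng.normal(size=(64, 100)))   # target magnitude spectrogram
N = np.abs(rng.normal(size=(64, 100)))   # interference magnitude spectrogram
mask = ideal_ratio_mask(S, N)
mixture_mag = S + N                      # rough additive stand-in for the mixture
target_estimate = mask * mixture_mag     # masking step of the reconstruction
print(mask.min() >= 0.0, mask.max() <= 1.0, target_estimate.shape)
```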
The localization accuracy of magnetic field-based localization approaches is predominantly degraded by two limiting factors: smartphone heterogeneity and smaller data lengths. The use of multifarious smartphones cripples the performance of such approaches owing to the variability of the magnetic field data. In the same vein, smaller lengths of magnetic field data decrease the localization accuracy substantially. The current study proposes the use of multiple neural networks, namely a deep neural network (DNN), a long short-term memory network (LSTM), and a gated recurrent unit network (GRN), to perform indoor localization based on the embedded magnetic sensor of the smartphone. A voting scheme is introduced that takes the predictions of the neural networks into consideration to estimate the current location of the user. Contrary to conventional magnetic field-based localization approaches that rely on the magnetic field data intensity, this study utilizes normalized magnetic field data for this purpose. Training of the neural networks is carried out using Galaxy S8 data, while the testing is performed with three devices, i.e., LG G7, Galaxy S8, and LG Q6. Experiments are performed during different times of the day to analyze the impact of time variability. Results indicate that the proposed approach minimizes the impact of smartphone variability and elevates the localization accuracy. A performance comparison with three approaches reveals that the proposed approach outperforms them in mean, 50%, and 75% error, even while using a smaller amount of magnetic field data than the other approaches.
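The voting scheme over the three networks' predictions could be as simple as the following majority-vote sketch. The tie-breaking rule (falling back to the first model) is an assumption, and the cell labels are hypothetical.

```python
from collections import Counter

def vote(predictions):
    """Majority vote over per-model location predictions; if no label wins a
    majority, fall back to the first model's answer (an assumed tie rule)."""
    winner, count = Counter(predictions).most_common(1)[0]
    return winner if count > 1 else predictions[0]

# per-sample predictions from DNN, LSTM and GRN (hypothetical cell labels)
print(vote(["A3", "A3", "B1"]))   # -> A3
print(vote(["A3", "B1", "C2"]))   # -> A3 (no majority, falls back to the DNN)
```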
Currently, Bitcoin is the world's most popular cryptocurrency. The price of Bitcoin is extremely volatile, which can be described as high-benefit and high-risk. To minimize the risk involved, a means of more accurately predicting the Bitcoin price is required. Most existing studies of Bitcoin prediction are based on historical (i.e., benchmark) data, without considering real-time (i.e., live) data. To mitigate the issue of price volatility and achieve more precise outcomes, this study suggests using historical and real-time data to predict the Bitcoin candlestick prices, i.e., the open, high, low, and close (OHLC) prices. Seeking a better prediction model, the present study proposes time series-based deep learning models. In particular, two deep learning algorithms were applied, namely, long short-term memory (LSTM) and the gated recurrent unit (GRU). Using real-time data, the Bitcoin candlesticks were predicted for three intervals: the next 4 h, the next 12 h, and the next 24 h. The results showed that the best-performing model was the LSTM-based model with the 4-h interval. In particular, this model achieved a mean absolute percentage error (MAPE) of 0.63, a root mean square error (RMSE) of 0.0009, a mean square error (MSE) of 9e-07, a mean absolute error (MAE) of 0.0005, and an R-squared coefficient (R^2) of 0.994. With these results, the proposed prediction model has demonstrated its efficiency over the models proposed in previous studies. The findings of this study have considerable implications in the business field, as the proposed model can assist investors and traders in precisely identifying Bitcoin selling and buying opportunities.
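The error measures quoted above are standard; for reference, here are their usual definitions in a short Python sketch with toy numbers (not the study's data).

```python
import numpy as np

def mape(y, yhat):  return float(np.mean(np.abs((y - yhat) / y)) * 100)
def mse(y, yhat):   return float(np.mean((y - yhat) ** 2))
def rmse(y, yhat):  return float(np.sqrt(mse(y, yhat)))
def mae(y, yhat):   return float(np.mean(np.abs(y - yhat)))
def r2(y, yhat):
    return float(1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2))

y    = np.array([100.0, 102.0, 101.0, 105.0])   # toy actual close prices
yhat = np.array([ 99.0, 103.0, 100.5, 104.0])   # toy predicted close prices
print(f"MAPE={mape(y, yhat):.2f}% RMSE={rmse(y, yhat):.3f} "
      f"MAE={mae(y, yhat):.3f} R2={r2(y, yhat):.3f}")
```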
Memristor-based neuromorphic computing shows great potential for high-speed and high-throughput signal processing applications, such as electroencephalogram (EEG) signal processing. Nonetheless, the size of one-transistor one-resistor (1T1R) memristor arrays is limited by the non-ideality of the devices, which prevents the hardware implementation of large and complex networks. In this work, we propose the depthwise separable convolution and bidirectional gated recurrent unit (DSC-BiGRU) network, a lightweight and highly robust hybrid neural network based on 1T1R arrays that enables efficient processing of EEG signals in the temporal, frequency and spatial domains by hybridizing DSC and BiGRU blocks. The network size is reduced and the network robustness is improved while the classification accuracy is preserved. In the simulation, the measured non-idealities of the 1T1R array are brought into the network through statistical analysis. Compared with traditional convolutional networks, the network parameters are reduced by 95% and the classification accuracy is improved by 21% at a 95% array yield rate and 5% tolerable error. This work demonstrates that lightweight and highly robust networks based on memristor arrays hold great promise for applications that rely on low power consumption and high efficiency.
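The DSC building block factorizes a standard convolution into a depthwise and a pointwise stage, which is where most of the parameter savings come from. A minimal PyTorch sketch of the block (the channel counts are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) conv
    followed by a 1x1 (pointwise) conv, cutting parameters roughly by a
    factor of k*k versus a standard k x k convolution."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# e.g. a batch of EEG feature maps (channels x time) with 8 input channels
x = torch.randn(2, 8, 32, 128)
print(DepthwiseSeparableConv(8, 16)(x).shape)   # torch.Size([2, 16, 32, 128])
```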
Lithium-ion batteries are commonly used in electric vehicles, mobile phones, and laptops. These batteries demonstrate several advantages, such as environmental friendliness, high energy density, and long life. However, battery overcharging and overdischarging may occur if the batteries are not monitored continuously. Overcharging causes fire and explosion casualties, and overdischarging causes a reduction in battery capacity and life. In addition, the internal resistance of such batteries varies depending on their external temperature, electrolyte, cathode material, and other factors, and the capacity of the batteries decreases with temperature. In this study, we develop a method for estimating the state of charge (SOC) using the neural network model best suited to the external temperature of such batteries based on their characteristics. During our simulation, we acquired data at temperatures of 25°C, 30°C, 35°C, and 40°C. Based on the temperature parameters, the voltage, current, and time parameters were obtained, and six cycles of the parameters at each temperature were used for the experiment. Experimental data to verify the proposed method were obtained through a discharge experiment conducted using a vehicle driving simulator. The experimental data were provided as inputs to three types of neural network models: a multilayer neural network (MNN), long short-term memory (LSTM), and a gated recurrent unit (GRU). The neural network models were trained and optimized for the specific temperatures measured during the experiment, and the SOC was estimated by selecting the most suitable model for each temperature. The experimental results revealed that the mean absolute errors of the MNN, LSTM, and GRU using the proposed method were 2.17%, 2.19%, and 2.15%, respectively, which are better than those of the conventional method (4.47%, 4.60%, and 4.40%). The GRU-based SOC estimation using the proposed method, with a mean absolute error of 2.15%, was thus the most accurate.
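The temperature-matched model selection described above can be sketched as a simple nearest-temperature lookup. The placeholder lambdas below stand in for the trained MNN/LSTM/GRU estimators and return made-up values; only the dispatch logic is the point here.

```python
# Placeholders for networks trained at each calibration temperature; in a
# real system each entry would be a trained MNN/LSTM/GRU estimator.
models = {
    25: lambda v, i, t: 0.80,
    30: lambda v, i, t: 0.79,
    35: lambda v, i, t: 0.78,
    40: lambda v, i, t: 0.77,
}

def estimate_soc(temp_c, voltage, current, elapsed):
    """Pick the model calibrated nearest to the measured temperature."""
    nearest = min(models, key=lambda t: abs(t - temp_c))
    return models[nearest](voltage, current, elapsed)

print(estimate_soc(33.2, 3.7, 1.2, 600.0))   # dispatches to the 35 °C model
```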
Teaching a machine to understand text requires designing an algorithm for the machine to comprehend documents. As some traditional methods cannot learn the inherent characteristics effectively, this paper presents a new hybrid neural network model to extract sentence-level summarization from a single document, and it allows us to develop an attention-based deep neural network that can learn to understand documents with minimal prior knowledge. The proposed model, composed of multiple processing layers, can learn representations of features. Word embedding is used to learn continuous word representations for constructing sentences as input to a convolutional neural network. A recurrent neural network is also used to label the sentences from the original document, and the proposed BAM-GRU model is more efficient. Experimental results show the feasibility of the approach. Some open problems and further work are also presented at the end.
In recent years, wearable device-based Human Activity Recognition (HAR) models have received significant attention. Previously developed HAR models use hand-crafted features to recognize human activities, leading to the extraction of only basic features. The images captured by wearable sensors contain advanced features, allowing them to be analyzed by deep learning algorithms to enhance the detection and recognition of human actions. Poor lighting and limited sensor capabilities can impact data quality, making the recognition of human actions a challenging task. Unimodal HAR approaches are not suitable in a real-time environment. Therefore, an updated HAR model is developed using multiple types of data and an advanced deep-learning approach. Firstly, the required signals and sensor data are accumulated from standard databases. From these signals, the wave features are retrieved. Then the extracted wave features and sensor data are given as input for recognizing the human activity. An Adaptive Hybrid Deep Attentive Network (AHDAN) is developed by incorporating a 1D Convolutional Neural Network (1DCNN) with a Gated Recurrent Unit (GRU) for the human activity recognition process. Additionally, the Enhanced Archerfish Hunting Optimizer (EAHO) is suggested to fine-tune the network parameters and enhance the recognition process. An experimental evaluation is performed on various deep learning networks and heuristic algorithms to confirm the effectiveness of the proposed HAR model. The EAHO-based HAR model outperforms traditional deep learning networks, with an accuracy of 95.36%, a recall of 95.25%, a specificity of 95.48%, and a precision of 95.47%. The results prove that the developed model is effective in recognizing human actions while taking less time. Additionally, it reduces computational complexity and the overfitting issue through the use of an optimization approach.
Considering the nonlinear structure and spatial-temporal correlation of a traffic network, and the influence of the potential correlation between network nodes on spatial features, this paper proposes a traffic speed prediction model based on the combination of a graph attention network with a self-adaptive adjacency matrix (SAdpGAT) and a bidirectional gated recurrent unit (BiGRU). Firstly, the model introduces a graph attention network (GAT) to extract the spatial features of the real road network and the potential road network in the spatial dimension. Secondly, the spatial features are input into the BiGRU to extract time-series features. Finally, the prediction results of the real road network and the potential road network are combined to generate the final prediction results of the model. The experimental results show that the prediction accuracy of the proposed model is improved markedly on the METR-LA and PEMS-BAY datasets, which demonstrates the advantages of the proposed spatial-temporal model in traffic speed prediction.
Recurrent neural networks (RNN) have been very successful in handling sequence data. However, understanding RNNs and finding the best practices for RNN learning is a difficult task, partly because there are many competing and complex hidden units, such as the long short-term memory (LSTM) and the gated recurrent unit (GRU). We propose a gated unit for RNNs, named the minimal gated unit (MGU), since it contains only one gate, making it a minimal design among all gated hidden units. The design of the MGU benefits from evaluation results on LSTM and GRU in the literature. Experiments on various sequence data show that the MGU has accuracy comparable to the GRU, but with a simpler structure, fewer parameters, and faster training. Hence, the MGU is suitable for RNN applications. Its simple architecture also means that it is easier to evaluate and tune, and in principle it is easier to study the MGU's properties theoretically and empirically.
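The MGU update uses a single forget gate in place of the GRU's two gates. A minimal NumPy sketch of one step, following the published equations (the weight scales and dimensions here are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mgu_step(x, h, Wf, Uf, bf, Wh, Uh, bh):
    """One MGU step: a single forget gate f plays the roles of both GRU gates.
        f  = sigmoid(Wf x + Uf h + bf)
        h~ = tanh(Wh x + Uh (f * h) + bh)
        h' = (1 - f) * h + f * h~
    """
    f = sigmoid(Wf @ x + Uf @ h + bf)
    h_tilde = np.tanh(Wh @ x + Uh @ (f * h) + bh)
    return (1.0 - f) * h + f * h_tilde

rng = np.random.default_rng(4)
dx, dh = 8, 16
params = [rng.normal(scale=0.1, size=s) for s in
          [(dh, dx), (dh, dh), (dh,), (dh, dx), (dh, dh), (dh,)]]
h = np.zeros(dh)
for _ in range(5):                       # run a short input sequence
    h = mgu_step(rng.normal(size=dx), h, *params)
print(h.shape)                           # (16,)
```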
Purpose - The purpose of this paper is to address the shortcomings of existing methods for the prediction of network security situations (NSS). Because the conventional methods for NSS prediction, such as support vector machines and particle swarm optimization, lack accuracy, robustness and efficiency, in this study the authors propose a new method for NSS prediction based on a recurrent neural network (RNN) with a gated recurrent unit. Design/methodology/approach - This method first extracts internal and external information features from the original time-series network data. Then, the extracted features are applied to the deep RNN model for training and validation. After iteration and optimization, accurate predictions of the NSS are obtained from the well-trained model, and the model is robust to unstable network data. Findings - Experiments on a benchmark dataset show that the proposed method obtains more accurate and robust prediction results than conventional models. Although the deep RNN models require more training time, they guarantee the accuracy and robustness of the predictions in return. Originality/value - In the prediction of NSS time-series data, the proposed internal and external information features describe the original data well, and the employed deep RNN model outperforms the state-of-the-art models.
Accurate short-term traffic flow prediction plays a crucial role in intelligent transportation systems (ITS), because it can help both traffic authorities and individual travelers make better decisions. Previous research has mostly focused on shallow traffic prediction models, whose performance was unsatisfactory, since short-term traffic flow exhibits high nonlinearity, complexity and chaos. Taking the spatial and temporal correlations into consideration, a new traffic flow prediction method is proposed based on the road network topology and the gated recurrent unit (GRU). This method can help researchers without professional traffic knowledge extract generic traffic flow features effectively and efficiently. Experiments are conducted using real traffic flow data collected from the Caltrans Performance Measurement System (PEMS) database in San Diego and Oakland from June 15, 2017 to September 27, 2017. The results demonstrate that our method outperforms other traditional approaches in terms of mean absolute percentage error (MAPE), symmetric mean absolute percentage error (SMAPE) and root mean square error (RMSE).
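SMAPE, the one metric here not covered by the earlier metric sketch, is usually defined as the mean of 2|y - yhat| / (|y| + |yhat|). A short check follows; definitions of SMAPE vary slightly across papers, so treat this as one common convention rather than necessarily the one used in this study.

```python
import numpy as np

def smape(y, yhat):
    """Symmetric MAPE: mean of 2|y - yhat| / (|y| + |yhat|), in percent."""
    return float(np.mean(2.0 * np.abs(y - yhat) / (np.abs(y) + np.abs(yhat))) * 100)

y, yhat = np.array([120.0, 80.0, 100.0]), np.array([110.0, 90.0, 100.0])
print(f"SMAPE = {smape(y, yhat):.2f}%")
```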
文摘This paper focuses on wireless-powered communication systems,which are increasingly relevant in the Internet of Things(IoT)due to their ability to extend the operational lifetime of devices with limited energy.The main contribution of the paper is a novel approach to minimize the secrecy outage probability(SOP)in these systems.Minimizing SOP is crucial for maintaining the confidentiality and integrity of data,especially in situations where the transmission of sensitive data is critical.Our proposed method harnesses the power of an improved biogeography-based optimization(IBBO)to effectively train a recurrent neural network(RNN).The proposed IBBO introduces an innovative migration model.The core advantage of IBBO lies in its adeptness at maintaining equilibrium between exploration and exploitation.This is accomplished by integrating tactics such as advancing towards a random habitat,adopting the crossover operator from genetic algorithms(GA),and utilizing the global best(Gbest)operator from particle swarm optimization(PSO)into the IBBO framework.The IBBO demonstrates its efficacy by enabling the RNN to optimize the system parameters,resulting in significant outage probability reduction.Through comprehensive simulations,we showcase the superiority of the IBBO-RNN over existing approaches,highlighting its capability to achieve remarkable gains in SOP minimization.This paper compares nine methods for predicting outage probability in wireless-powered communications.The IBBO-RNN achieved the highest accuracy rate of 98.92%,showing a significant performance improvement.In contrast,the standard RNN recorded lower accuracy rates of 91.27%.The IBBO-RNN maintains lower SOP values across the entire signal-to-noise ratio(SNR)spectrum tested,suggesting that the method is highly effective at optimizing system parameters for improved secrecy even at lower SNRs.
基金funded by“The Pearl River Talent Recruitment Program”of Guangdong Province in 2019(Grant No.2019CX01G338)the Research Funding of Shantou University for New Faculty Member(Grant No.NTF19024-2019).
文摘An accurate prediction of earth pressure balance(EPB)shield moving performance is important to ensure the safety tunnel excavation.A hybrid model is developed based on the particle swarm optimization(PSO)and gated recurrent unit(GRU)neural network.PSO is utilized to assign the optimal hyperparameters of GRU neural network.There are mainly four steps:data collection and processing,hybrid model establishment,model performance evaluation and correlation analysis.The developed model provides an alternative to tackle with time-series data of tunnel project.Apart from that,a novel framework about model application is performed to provide guidelines in practice.A tunnel project is utilized to evaluate the performance of proposed hybrid model.Results indicate that geological and construction variables are significant to the model performance.Correlation analysis shows that construction variables(main thrust and foam liquid volume)display the highest correlation with the cutterhead torque(CHT).This work provides a feasible and applicable alternative way to estimate the performance of shield tunneling.
基金jointly supported by the National Science Foundation of China (Grant Nos. 42275007 and 41865003)Jiangxi Provincial Department of science and technology project (Grant No. 20171BBG70004)。
文摘The Gated Recurrent Unit(GRU) neural network has great potential in estimating and predicting a variable. In addition to radar reflectivity(Z), radar echo-top height(ET) is also a good indicator of rainfall rate(R). In this study, we propose a new method, GRU_Z-ET, by introducing Z and ET as two independent variables into the GRU neural network to conduct the quantitative single-polarization radar precipitation estimation. The performance of GRU_Z-ET is compared with that of the other three methods in three heavy rainfall cases in China during 2018, namely, the traditional Z-R relationship(Z=300R1.4), the optimal Z-R relationship(Z=79R1.68) and the GRU neural network with only Z as the independent input variable(GRU_Z). The results indicate that the GRU_Z-ET performs the best, while the traditional Z-R relationship performs the worst. The performances of the rest two methods are similar.To further evaluate the performance of the GRU_Z-ET, 200 rainfall events with 21882 total samples during May–July of 2018 are used for statistical analysis. Results demonstrate that the spatial correlation coefficients, threat scores and probability of detection between the observed and estimated precipitation are the largest for the GRU_Z-ET and the smallest for the traditional Z-R relationship, and the root mean square error is just the opposite. In addition, these statistics of GRU_Z are similar to those of optimal Z-R relationship. Thus, it can be concluded that the performance of the GRU_ZET is the best in the four methods for the quantitative precipitation estimation.
基金support from the National Science and Technology Council of Taiwan(Contract Nos.111-2221 E-011081 and 111-2622-E-011019)the support from Intelligent Manufacturing Innovation Center(IMIC),National Taiwan University of Science and Technology(NTUST),Taipei,Taiwan,which is a Featured Areas Research Center in Higher Education Sprout Project of Ministry of Education(MOE),Taiwan(since 2023)was appreciatedWe also thank Wang Jhan Yang Charitable Trust Fund(Contract No.WJY 2020-HR-01)for its financial support.
文摘This study proposed a new real-time manufacturing process monitoring method to monitor and detect process shifts in manufacturing operations.Since real-time production process monitoring is critical in today’s smart manufacturing.The more robust the monitoring model,the more reliable a process is to be under control.In the past,many researchers have developed real-time monitoring methods to detect process shifts early.However,thesemethods have limitations in detecting process shifts as quickly as possible and handling various data volumes and varieties.In this paper,a robust monitoring model combining Gated Recurrent Unit(GRU)and Random Forest(RF)with Real-Time Contrast(RTC)called GRU-RF-RTC was proposed to detect process shifts rapidly.The effectiveness of the proposed GRU-RF-RTC model is first evaluated using multivariate normal and nonnormal distribution datasets.Then,to prove the applicability of the proposed model in a realmanufacturing setting,the model was evaluated using real-world normal and non-normal problems.The results demonstrate that the proposed GRU-RF-RTC outperforms other methods in detecting process shifts quickly with the lowest average out-of-control run length(ARL1)in all synthesis and real-world problems under normal and non-normal cases.The experiment results on real-world problems highlight the significance of the proposed GRU-RF-RTC model in modern manufacturing process monitoring applications.The result reveals that the proposed method improves the shift detection capability by 42.14%in normal and 43.64%in gamma distribution problems.
基金Supported in part by Natural Science Foundation of China(Grant Nos.51835009,51705398)Shaanxi Province 2020 Natural Science Basic Research Plan(Grant No.2020JQ-042)Aeronautical Science Foundation(Grant No.2019ZB070001).
文摘As an integrated application of modern information technologies and artificial intelligence,Prognostic and Health Management(PHM)is important for machine health monitoring.Prediction of tool wear is one of the symbolic applications of PHM technology in modern manufacturing systems and industry.In this paper,a multi-scale Convolutional Gated Recurrent Unit network(MCGRU)is proposed to address raw sensory data for tool wear prediction.At the bottom of MCGRU,six parallel and independent branches with different kernel sizes are designed to form a multi-scale convolutional neural network,which augments the adaptability to features of different time scales.These features of different scales extracted from raw data are then fed into a Deep Gated Recurrent Unit network to capture long-term dependencies and learn significant representations.At the top of the MCGRU,a fully connected layer and a regression layer are built for cutting tool wear prediction.Two case studies are performed to verify the capability and effectiveness of the proposed MCGRU network and results show that MCGRU outperforms several state-of-the-art baseline models.
文摘Diabetes mellitus is a metabolic disease in which blood glucose levels rise as a result of pancreatic insulin production failure.It causes hyperglycemia and chronic multiorgan dysfunction,including blindness,renal failure,and cardi-ovascular disease,if left untreated.One of the essential checks that are needed to be performed frequently in Type 1 Diabetes Mellitus is a blood test,this procedure involves extracting blood quite frequently,which leads to subject discomfort increasing the possibility of infection when the procedure is often recurring.Exist-ing methods used for diabetes classification have less classification accuracy and suffer from vanishing gradient problems,to overcome these issues,we proposed stacking ensemble learning-based convolutional gated recurrent neural network(CGRNN)Metamodel algorithm.Our proposed method initially performs outlier detection to remove outlier data,using the Gaussian distribution method,and the Box-cox method is used to correctly order the dataset.After the outliers’detec-tion,the missing values are replaced by the data’s mean rather than their elimina-tion.In the stacking ensemble base model,multiple machine learning algorithms like Naïve Bayes,Bagging with random forest,and Adaboost Decision tree have been employed.CGRNN Meta model uses two hidden layers Long-Short-Time Memory(LSTM)and Gated Recurrent Unit(GRU)to calculate the weight matrix for diabetes prediction.Finally,the calculated weight matrix is passed to the soft-max function in the output layer to produce the diabetes prediction results.By using LSTM-based CG-RNN,the mean square error(MSE)value is 0.016 and the obtained accuracy is 91.33%.
基金supported by the Natural Science Foundation of China(Grant Nos.51979158,51639008,51679135,and 51422905)the Program of Shanghai Academic Research Leader by Science and Technology Commission of Shanghai Municipality(Project No.19XD1421900)。
文摘Knowledge of pore-water pressure(PWP)variation is fundamental for slope stability.A precise prediction of PWP is difficult due to complex physical mechanisms and in situ natural variability.To explore the applicability and advantages of recurrent neural networks(RNNs)on PWP prediction,three variants of RNNs,i.e.,standard RNN,long short-term memory(LSTM)and gated recurrent unit(GRU)are adopted and compared with a traditional static artificial neural network(ANN),i.e.,multi-layer perceptron(MLP).Measurements of rainfall and PWP of representative piezometers from a fully instrumented natural slope in Hong Kong are used to establish the prediction models.The coefficient of determination(R^2)and root mean square error(RMSE)are used for model evaluations.The influence of input time series length on the model performance is investigated.The results reveal that MLP can provide acceptable performance but is not robust.The uncertainty bounds of RMSE of the MLP model range from 0.24 kPa to 1.12 k Pa for the selected two piezometers.The standard RNN can perform better but the robustness is slightly affected when there are significant time lags between PWP changes and rainfall.The GRU and LSTM models can provide more precise and robust predictions than the standard RNN.The effects of the hidden layer structure and the dropout technique are investigated.The single-layer GRU is accurate enough for PWP prediction,whereas a double-layer GRU brings extra time cost with little accuracy improvement.The dropout technique is essential to overfitting prevention and improvement of accuracy.
基金Supported by U.K.EPSRC Platform Grant(Grant No.EP/P027121/1).
文摘The remaining useful life(RUL)of a system is generally predicted by utilising the data collected from the sensors that continuously monitor different indicators.Recently,different deep learning(DL)techniques have been used for RUL prediction and achieved great success.Because the data is often time-sequential,recurrent neural network(RNN)has attracted significant interests due to its efficiency in dealing with such data.This paper systematically reviews RNN and its variants for RUL prediction,with a specific focus on understanding how different components(e.g.,types of optimisers and activation functions)or parameters(e.g.,sequence length,neuron quantities)affect their performance.After that,a case study using the well-studied NASA’s C-MAPSS dataset is presented to quantitatively evaluate the influence of various state-of-the-art RNN structures on the RUL prediction performance.The result suggests that the variant methods usually perform better than the original RNN,and among which,Bi-directional Long Short-Term Memory generally has the best performance in terms of stability,precision and accuracy.Certain model structures may fail to produce valid RUL prediction result due to the gradient vanishing or gradient exploring problem if the parameters are not chosen appropriately.It is concluded that parameter tuning is a crucial step to achieve optimal prediction performance.
基金National Natural Science Foundation of China(Nos.61863024,71761023)Funding for Scientific Research Projects of Colleges and Universities in Gansu Province(Nos.2018C-11,2018A-22)Natural Science Fund of Gansu Province(No.18JR3RA130)。
文摘Turnout is one of the important signal infrastructure equipment,which will directly affect the safety and efficiency of driving.Base on analysis of the power curve of the turnout,we extract and select the time domain and Haar wavelet transform characteristics of the curve firstly.Then the correlation between the degradation state and the fault state is established by using the clustering algorithm and the Pearson correlation coefficient.Finally,the convolutional neural network(CNN)and the gated recurrent unit(GRU)are used to establish the state prediction model of the turnout to realize the failure prediction.The CNN can directly extract features from the original data of the turnout and reduce the dimension,which simplifies the prediction process.Due to its unique gate structure and time series processing features,GRU has certain advantages over the traditional forecasting methods in terms of prediction accuracy and time.The experimental results show that the accuracy of prediction can reach 94.2%when the feature matrix adopts 40-dimensional input and iterates 50 times.
基金This work is supported by Nanjing Institute of Technology(NIT)fund for Research Startup Projects of Introduced talents under Grant No.YKJ202019Nature Sci-ence Research Project of Higher Education Institutions in Jiangsu Province under Grant No.21KJB510018+1 种基金National Nature Science Foundation of China(NSFC)under Grant No.62001215NIT fund for Doctoral Research Projects under Grant No.ZKJ2020003.
文摘Speech separation is an active research topic that plays an important role in numerous applications,such as speaker recognition,hearing pros-thesis,and autonomous robots.Many algorithms have been put forward to improve separation performance.However,speech separation in reverberant noisy environment is still a challenging task.To address this,a novel speech separation algorithm using gate recurrent unit(GRU)network based on microphone array has been proposed in this paper.The main aim of the proposed algorithm is to improve the separation performance and reduce the computational cost.The proposed algorithm extracts the sub-band steered response power-phase transform(SRP-PHAT)weighted by gammatone filter as the speech separation feature due to its discriminative and robust spatial position in formation.Since the GRU net work has the advantage of processing time series data with faster training speed and fewer training parameters,the GRU model is adopted to process the separation featuresof several sequential frames in the same sub-band to estimate the ideal Ratio Masking(IRM).The proposed algorithm decomposes the mixture signals into time-frequency(TF)units using gammatone filter bank in the frequency domain,and the target speech is reconstructed in the frequency domain by masking the mixture signal according to the estimated IRM.The operations of decomposing the mixture signal and reconstructing the target signal are completed in the frequency domain which can reduce the total computational cost.Experimental results demonstrate that the proposed algorithm realizes omnidirectional speech sep-aration in noisy and reverberant environments,provides good performance in terms of speech quality and intelligibility,and has the generalization capacity to reverberate.
基金supported by the MSIT(Ministry of Science and ICT),Korea,under the ITRC(Information Technology Research Center)support program(IITP-2019-2016-0-00313)supervised by the IITP(Institute for Information&communication Technology Promotion)+1 种基金supported by Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Science,ICT and Future Planning(2017R1E1A1A01074345).
文摘Predominantly the localization accuracy of the magnetic field-based localization approaches is severed by two limiting factors:Smartphone heterogeneity and smaller data lengths.The use of multifarioussmartphones cripples the performance of such approaches owing to the variability of the magnetic field data.In the same vein,smaller lengths of magnetic field data decrease the localization accuracy substantially.The current study proposes the use of multiple neural networks like deep neural network(DNN),long short term memory network(LSTM),and gated recurrent unit network(GRN)to perform indoor localization based on the embedded magnetic sensor of the smartphone.A voting scheme is introduced that takes predictions from neural networks into consideration to estimate the current location of the user.Contrary to conventional magnetic field-based localization approaches that rely on the magnetic field data intensity,this study utilizes the normalized magnetic field data for this purpose.Training of neural networks is carried out using Galaxy S8 data while the testing is performed with three devices,i.e.,LG G7,Galaxy S8,and LG Q6.Experiments are performed during different times of the day to analyze the impact of time variability.Results indicate that the proposed approach minimizes the impact of smartphone variability and elevates the localization accuracy.Performance comparison with three approaches reveals that the proposed approach outperforms them in mean,50%,and 75%error even using a lesser amount of magnetic field data than those of other approaches.
文摘Currently,Bitcoin is the world’s most popular cryptocurrency.The price of Bitcoin is extremely volatile,which can be described as high-benefit and high-risk.To minimize the risk involved,a means of more accurately predicting the Bitcoin price is required.Most of the existing studies of Bitcoin prediction are based on historical(i.e.,benchmark)data,without considering the real-time(i.e.,live)data.To mitigate the issue of price volatility and achieve more precise outcomes,this study suggests using historical and real-time data to predict the Bitcoin candlestick—or open,high,low,and close(OHLC)—prices.Seeking a better prediction model,the present study proposes time series-based deep learning models.In particular,two deep learning algorithms were applied,namely,long short-term memory(LSTM)and gated recurrent unit(GRU).Using real-time data,the Bitcoin candlesticks were predicted for three intervals:the next 4 h,the next 12 h,and the next 24 h.The results showed that the best-performing model was the LSTM-based model with the 4-h interval.In particular,this model achieved a stellar performance with a mean absolute percentage error(MAPE)of 0.63,a root mean square error(RMSE)of 0.0009,a mean square error(MSE)of 9e-07,a mean absolute error(MAE)of 0.0005,and an R-squared coefficient(R2)of 0.994.With these results,the proposed prediction model has demonstrated its efficiency over the models proposed in previous studies.The findings of this study have considerable implications in the business field,as the proposed model can assist investors and traders in precisely identifying Bitcoin sales and buying opportunities.
基金Project supported by the National Key Research and Development Program of China(Grant No.2019YFB2205102)the National Natural Science Foundation of China(Grant Nos.61974164,62074166,61804181,62004219,62004220,and 62104256).
文摘Memristor-based neuromorphic computing shows great potential for high-speed and high-throughput signal processing applications,such as electroencephalogram(EEG)signal processing.Nonetheless,the size of one-transistor one-resistor(1T1R)memristor arrays is limited by the non-ideality of the devices,which prevents the hardware implementation of large and complex networks.In this work,we propose the depthwise separable convolution and bidirectional gate recurrent unit(DSC-BiGRU)network,a lightweight and highly robust hybrid neural network based on 1T1R arrays that enables efficient processing of EEG signals in the temporal,frequency and spatial domains by hybridizing DSC and BiGRU blocks.The network size is reduced and the network robustness is improved while ensuring the network classification accuracy.In the simulation,the measured non-idealities of the 1T1R array are brought into the network through statistical analysis.Compared with traditional convolutional networks,the network parameters are reduced by 95%and the network classification accuracy is improved by 21%at a 95%array yield rate and 5%tolerable error.This work demonstrates that lightweight and highly robust networks based on memristor arrays hold great promise for applications that rely on low consumption and high efficiency.
基金supported by the BK21 FOUR project funded by the Ministry of Education,Korea(4199990113966).
文摘Lithium-ion batteries are commonly used in electric vehicles,mobile phones,and laptops.These batteries demonstrate several advantages,such as environmental friendliness,high energy density,and long life.However,battery overcharging and overdischarging may occur if the batteries are not monitored continuously.Overcharging causesfire and explosion casualties,and overdischar-ging causes a reduction in the battery capacity and life.In addition,the internal resistance of such batteries varies depending on their external temperature,elec-trolyte,cathode material,and other factors;the capacity of the batteries decreases with temperature.In this study,we develop a method for estimating the state of charge(SOC)using a neural network model that is best suited to the external tem-perature of such batteries based on their characteristics.During our simulation,we acquired data at temperatures of 25°C,30°C,35°C,and 40°C.Based on the tem-perature parameters,the voltage,current,and time parameters were obtained,and six cycles of the parameters based on the temperature were used for the experi-ment.Experimental data to verify the proposed method were obtained through a discharge experiment conducted using a vehicle driving simulator.The experi-mental data were provided as inputs to three types of neural network models:mul-tilayer neural network(MNN),long short-term memory(LSTM),and gated recurrent unit(GRU).The neural network models were trained and optimized for the specific temperatures measured during the experiment,and the SOC was estimated by selecting the most suitable model for each temperature.The experimental results revealed that the mean absolute errors of the MNN,LSTM,and GRU using the proposed method were 2.17%,2.19%,and 2.15%,respec-tively,which are better than those of the conventional method(4.47%,4.60%,and 4.40%).Finally,SOC estimation based on GRU using the proposed method was found to be 2.15%,which was the most accurate.
Abstract: Teaching a machine to understand requires designing an algorithm by which the machine can comprehend documents. As some traditional methods cannot learn the inherent characteristics of documents effectively, this paper presents a new hybrid neural network model to extract a sentence-level summary from a single document, allowing us to develop an attention-based deep neural network that learns to understand documents with minimal prior knowledge. The proposed model, composed of multiple processing layers, learns feature representations. Word embedding is used to learn continuous word representations from which sentences are constructed as input to a convolutional neural network, and a recurrent neural network labels the sentences of the original document; the proposed BAM-GRU model is more efficient. Experimental results show the feasibility of the approach. Open problems and directions for further work are also presented at the end.
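A minimal sketch of the CNN-encoder-plus-RNN-labeler idea is given below; it is not the BAM-GRU model itself, and the vocabulary size, kernel width, and pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class SentenceExtractor(nn.Module):
    """A CNN encodes each sentence from word embeddings; a GRU then reads the
    sentence vectors in document order and scores each one for the summary."""
    def __init__(self, vocab=10000, emb=100, conv_ch=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=3, padding=1)
        self.gru = nn.GRU(conv_ch, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, doc):                      # doc: (sentences, words) token ids
        e = self.emb(doc).transpose(1, 2)        # (sent, emb_dim, words)
        s = torch.relu(self.conv(e)).max(dim=2).values  # max-pool -> one vector per sentence
        h, _ = self.gru(s.unsqueeze(0))          # read sentences in order
        return torch.sigmoid(self.score(h)).squeeze(-1)  # P(include in summary)

probs = SentenceExtractor()(torch.randint(0, 10000, (12, 30)))  # 12 sentences, 30 words each
```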
Abstract: In recent years, wearable-device-based Human Activity Recognition (HAR) models have received significant attention. Previously developed HAR models use hand-crafted features to recognize human activities, which limits them to basic features. The images captured by wearable sensors contain advanced features, allowing them to be analyzed by deep learning algorithms to enhance the detection and recognition of human actions. Poor lighting and limited sensor capabilities can degrade data quality, making the recognition of human actions a challenging task, and unimodal HAR approaches are not suitable for real-time environments. Therefore, an updated HAR model is developed that uses multiple types of data and an advanced deep learning approach. First, the required signals and sensor data are collected from standard databases, and wave features are extracted from the signals. The extracted wave features and sensor data are then given as input to the activity recognizer. An Adaptive Hybrid Deep Attentive Network (AHDAN) is developed by combining a 1D Convolutional Neural Network (1DCNN) with a Gated Recurrent Unit (GRU) for the human activity recognition process. Additionally, the Enhanced Archerfish Hunting Optimizer (EAHO) is proposed to fine-tune the network parameters and enhance recognition. An experimental evaluation against various deep learning networks and heuristic algorithms confirms the effectiveness of the proposed HAR model: the EAHO-based HAR model outperforms traditional deep learning networks with an accuracy of 95.36%, a recall of 95.25%, a specificity of 95.48%, and a precision of 95.47%. The results show that the developed model recognizes human actions effectively in less time, and that the optimization approach reduces computational complexity and overfitting.
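A generic 1DCNN-GRU-attention pipeline of the kind the abstract describes can be outlined as below. This is not AHDAN itself (and it omits the EAHO parameter tuning); the channel counts and the attention form are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveCNNGRU(nn.Module):
    """Illustrative 1DCNN -> GRU -> attention pooling for multi-channel sensor windows."""
    def __init__(self, channels=6, conv_ch=32, hidden=64, n_classes=8):
        super().__init__()
        self.conv = nn.Conv1d(channels, conv_ch, kernel_size=5, padding=2)
        self.gru = nn.GRU(conv_ch, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # scores each time step
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        z = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, time, conv_ch)
        h, _ = self.gru(z)                              # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)          # attention weights over time
        ctx = (w * h).sum(dim=1)                        # weighted summary vector
        return self.head(ctx)

logits = AttentiveCNNGRU()(torch.randn(16, 6, 128))  # 16 windows of 6-axis sensor data
```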
Funding: The National Natural Science Foundation of China (Nos. 61461027 and 61762059) and the Key Project of the Natural Science Foundation of Gansu Province under the Provincial Science and Technology Program (No. 22JR5RA226).
Abstract: Considering the nonlinear structure and spatial-temporal correlation of a traffic network, and the influence of potential correlations between network nodes on spatial features, this paper proposes a traffic speed prediction model that combines a graph attention network with a self-adaptive adjacency matrix (SAdpGAT) and a bidirectional gated recurrent unit (BiGRU). First, the model introduces a graph attention network (GAT) to extract the spatial features of the real road network and the potential road network, respectively. Second, the spatial features are fed into the BiGRU to extract time-series features. Finally, the predictions for the real and potential road networks are combined to generate the model's final prediction. Experimental results show that the prediction accuracy of the proposed model improves markedly on the METR-LA and PEMS-BAY datasets, which demonstrates the advantages of the proposed spatial-temporal model for traffic speed prediction.
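One common way to realize a self-adaptive adjacency matrix is to learn two node-embedding matrices and take softmax(relu(E1 E2^T)) as the adjacency; the sketch below pairs that with a simple propagation step and a BiGRU. The paper's SAdpGAT combines this idea with graph attention, so treat the code as an assumption-laden outline rather than the published model.

```python
import torch
import torch.nn as nn

class AdaptiveGraphBiGRU(nn.Module):
    """Sketch: learned ("self-adaptive") adjacency + graph propagation + BiGRU."""
    def __init__(self, n_nodes=207, emb=10, feats=2, hidden=32):
        super().__init__()
        self.e1 = nn.Parameter(torch.randn(n_nodes, emb))
        self.e2 = nn.Parameter(torch.randn(n_nodes, emb))
        self.proj = nn.Linear(feats, hidden)
        self.bigru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                    # x: (batch, time, nodes, feats)
        adj = torch.softmax(torch.relu(self.e1 @ self.e2.T), dim=1)   # learned adjacency
        z = torch.relu(self.proj(x))                                  # node-wise feature lift
        z = torch.einsum("ij,btjf->btif", adj, z)                     # propagate along learned edges
        b, t, n, f = z.shape
        h, _ = self.bigru(z.permute(0, 2, 1, 3).reshape(b * n, t, f))
        return self.out(h[:, -1]).view(b, n)  # one speed prediction per node

pred = AdaptiveGraphBiGRU()(torch.randn(4, 12, 207, 2))  # 12 past steps, 207 sensors
```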
Funding: Supported by the National Natural Science Foundation of China (Nos. 61422203 and 61333014) and the National Key Basic Research Program of China (No. 2014CB340501).
Abstract: Recurrent neural networks (RNN) have been very successful in handling sequence data. However, understanding RNNs and finding best practices for RNN learning is a difficult task, partly because there are many competing and complex hidden units, such as the long short-term memory (LSTM) and the gated recurrent unit (GRU). We propose a gated unit for RNNs, named the minimal gated unit (MGU), which contains only one gate and is thus a minimal design among all gated hidden units. The design of MGU benefits from evaluation results on LSTM and GRU in the literature. Experiments on various sequence data show that MGU achieves accuracy comparable to GRU, but has a simpler structure, fewer parameters, and faster training. Hence, MGU is suitable for RNN applications. Its simple architecture also means that it is easier to evaluate and tune, and in principle it is easier to study MGU's properties theoretically and empirically.
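The one-gate design is compact enough to state directly in code. The sketch below follows the published MGU equations, in which a single forget gate f_t both gates the previous state inside the candidate and interpolates between the old and candidate states; the layer names are mine.

```python
import torch
import torch.nn as nn

class MGUCell(nn.Module):
    """Minimal gated unit: one forget gate drives both the candidate state
    and the interpolation (cf. GRU's separate update and reset gates)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.cand = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        f = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))         # forget gate
        h_tilde = torch.tanh(self.cand(torch.cat([x, f * h], dim=-1)))  # candidate state
        return (1 - f) * h + f * h_tilde                                # new hidden state

cell = MGUCell(input_size=8, hidden_size=16)
h = torch.zeros(4, 16)
for x_t in torch.randn(10, 4, 8):   # 10 time steps, batch of 4
    h = cell(x_t, h)
```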
Funding: Supported by the Ningde Normal University Youth Teacher Research Program (2015Q15) and the Education Science Project for Junior Teachers of the Education Department of Fujian Province (JAT160532).
Abstract: Purpose - The purpose of this paper is to address the shortcomings of existing methods for the prediction of network security situations (NSS). Because conventional methods for NSS prediction, such as support vector machines and particle swarm optimization, lack accuracy, robustness, and efficiency, the authors propose a new NSS prediction method based on a recurrent neural network (RNN) with gated recurrent units. Design/methodology/approach - The method first extracts internal and external information features from the original time-series network data. The extracted features are then applied to the deep RNN model for training and validation. After iteration and optimization, the well-trained model produces NSS predictions, and the model is robust to unstable network data. Findings - Experiments on a benchmark data set show that the proposed method obtains more accurate and robust predictions than conventional models. Although deep RNN models consume more training time, they guarantee accuracy and robustness of prediction in return. Originality/value - In the prediction of NSS time-series data, the proposed internal and external information features describe the original data well, and the deep RNN model outperforms state-of-the-art models.
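The training setup the abstract implies, windowed time-series features fed to a GRU forecaster, can be sketched as follows; the window width, hidden size, and the synthetic series are placeholders, and the paper's internal/external feature extraction is not reproduced here.

```python
import torch
import torch.nn as nn

def make_windows(series, width):
    """Slide a window over a 1-D series: each window predicts the next value."""
    xs = [series[i:i + width] for i in range(len(series) - width)]
    ys = [series[i + width] for i in range(len(series) - width)]
    return torch.stack(xs).unsqueeze(-1), torch.stack(ys)  # (N, width, 1), (N,)

class GRUForecaster(nn.Module):
    def __init__(self, hidden=24):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.gru(x)
        return self.out(h[:, -1]).squeeze(-1)

series = torch.rand(500)                    # stand-in for an NSS score series
x, y = make_windows(series, width=10)
model = GRUForecaster()
loss = nn.functional.mse_loss(model(x), y)  # one training step's loss
loss.backward()
```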
Funding: Supported by the Support Program of the National 12th Five-Year Plan of China (2015BAK25B03).
Abstract: Accurate short-term traffic flow prediction plays a crucial role in intelligent transportation systems (ITS), because it assists both traffic authorities and individual travelers in making better decisions. Previous research has mostly focused on shallow traffic prediction models, whose performance is unsatisfactory because short-term traffic flow exhibits high nonlinearity, complexity, and chaos. Taking spatial and temporal correlations into consideration, a new traffic flow prediction method is proposed based on the road network topology and the gated recurrent unit (GRU). The method helps researchers without professional traffic knowledge extract generic traffic flow features effectively and efficiently. Experiments are conducted using real traffic flow data collected from the Caltrans Performance Measurement System (PeMS) database in San Diego and Oakland from June 15, 2017 to September 27, 2017. The results demonstrate that our method outperforms other traditional approaches in terms of mean absolute percentage error (MAPE), symmetric mean absolute percentage error (SMAPE), and root mean square error (RMSE).
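The three reported error metrics are easy to state concretely; the sketch below uses one common definition of SMAPE (definitions vary in the denominator scaling) and made-up flow values.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

def smape(y_true, y_pred):
    """Symmetric MAPE, in percent (one common variant)."""
    return 100 * np.mean(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_true = np.array([120.0, 95.0, 130.0])   # vehicles per 5-minute interval (illustrative)
y_pred = np.array([115.0, 100.0, 126.0])
print(mape(y_true, y_pred), smape(y_true, y_pred), rmse(y_true, y_pred))
```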