Funding: supported by the National Key Research and Development Program of China (No. 2018YFB2101300), the National Natural Science Foundation of China (Grant No. 61871186), and the Dean's Fund of the Engineering Research Center of Software/Hardware Co-Design Technology and Application, Ministry of Education (East China Normal University).
Abstract: Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, causality concentrates the receptive fields of the outputs on the earlier part of the input sequence, so information from recent inputs is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost in those operations. Based on SDC and DCM, we construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND reduces prediction mean squared error by 7.3% and saves runtime compared with state-of-the-art models and the vanilla TCN.
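As context for the baseline this abstract builds on, the following minimal PyTorch sketch shows a dilated causal convolution block of the kind used in vanilla TCNs; it is not the paper's SDC or DCM method, and the channel count, kernel size, and dilation are illustrative assumptions.

```python
# Minimal sketch of the dilated causal convolution that vanilla TCNs stack;
# not the paper's SDC/DCM method.
import torch
import torch.nn as nn

class DilatedCausalConv1d(nn.Module):
    """1-D convolution whose output at time t depends only on inputs at times <= t."""
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Pad on both sides, then trim the right so no future samples leak in.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=self.pad, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        out = self.conv(x)
        return out[:, :, :-self.pad] if self.pad > 0 else out

# Stacking blocks with dilations 1, 2, 4, ... grows the receptive field
# exponentially, but each output is anchored to past samples only.
x = torch.randn(8, 16, 96)                      # (batch, channels, time steps)
block = DilatedCausalConv1d(16, kernel_size=3, dilation=4)
print(block(x).shape)                           # torch.Size([8, 16, 96])
```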
Funding: supported by the National Science Foundation of China (Nos. 91738201, 61971440), the Jiangsu Province Basic Research Project (No. BK20192002), the China Postdoctoral Science Foundation (No. 2018M632347), and the Natural Science Research of Higher Education Institutions of Jiangsu Province (No. 18KJB510030).
Abstract: In this paper, we investigate a spectrum-sensing system in the presence of a satellite, where the satellite works as a sensing node. Since the conventional energy detection method is sensitive to noise uncertainty, a temporal convolutional network (TCN)-based spectrum-sensing method is designed to eliminate the effect of noise uncertainty and improve the performance of spectrum sensing, relying on an offline training stage and an online detection stage. Specifically, in the offline training stage, spectrum data captured by the satellite are sent to the TCN deployed on the gateway for training. In the online detection stage, the well-trained TCN is used to perform real-time spectrum sensing, which improves spectrum-sensing performance by exploiting temporal features. Simulation results demonstrate that the proposed method achieves a higher probability of detection than the conventional energy detection (ED), the convolutional neural network (CNN), and the deep neural network (DNN). Furthermore, the proposed method outperforms the CNN and the DNN with lower computational complexity.
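For contrast with the TCN-based detector, the sketch below implements the conventional energy detection baseline mentioned in the abstract; the Gaussian threshold approximation and the target false-alarm probability are assumptions, and the exactly known noise variance it requires is precisely what noise uncertainty undermines in practice.

```python
# Hedged sketch of conventional energy detection for spectrum sensing.
import numpy as np
from scipy.stats import norm

def energy_detect(y: np.ndarray, noise_var: float, pfa: float = 0.1) -> bool:
    """Return True if a primary-user signal is declared present."""
    n = y.size
    test_stat = np.sum(y ** 2) / n                      # average received energy
    # Threshold from the target false-alarm probability (Gaussian approximation,
    # real-valued noise with known variance).
    thresh = noise_var * (1.0 + norm.ppf(1.0 - pfa) * np.sqrt(2.0 / n))
    return bool(test_stat > thresh)

rng = np.random.default_rng(0)
noise_only = rng.normal(0.0, 1.0, 2048)                 # H0: noise only, variance 1
print(energy_detect(noise_only, noise_var=1.0))         # False about 90% of the time
```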
Funding: funded by NARI Group's Independent Project of China (Grant No. 524609230125) and the foundation of NARI-TECH Nanjing Control System Ltd. of China (Grant No. 0914202403120020).
Abstract: Time series prediction has always been an important problem in the field of machine learning. Among its applications, power load forecasting plays a crucial role in identifying the behavior of photovoltaic power plants and regulating their control strategies. Traditional power load forecasting often has poor feature extraction performance for long time series. In this paper, a new deep learning framework, Residual Stacked Temporal Long Short-Term Memory (RST-LSTM), is proposed, which combines wavelet decomposition and a temporal convolutional memory network to solve the problem of feature extraction for long sequences. The network framework of RST-LSTM consists of two parts: a stacked temporal convolutional memory unit module for global and local feature extraction, and a residual combination optimization module to reduce model redundancy. Finally, this paper demonstrates through various experimental indicators that RST-LSTM achieves significant performance improvements in both overall and local prediction accuracy compared with several state-of-the-art baseline methods.
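The sketch below illustrates only the wavelet-decomposition front end that RST-LSTM combines with its temporal convolutional memory network; the wavelet family ('db4') and the decomposition level (3) are assumptions, not values taken from the paper.

```python
# Hedged sketch of a multi-level wavelet decomposition of a load series.
import numpy as np
import pywt

t = np.linspace(0.0, 4.0, 1024)
load = 50 + 10 * np.sin(2 * np.pi * t) + np.random.default_rng(0).normal(0, 1, t.size)

# Discrete wavelet transform: one approximation band plus detail bands.
coeffs = pywt.wavedec(load, wavelet='db4', level=3)      # [cA3, cD3, cD2, cD1]
print([c.shape for c in coeffs])

# Each band could be fed to its own temporal model and the forecasts recombined;
# here we only check that the decomposition reconstructs the original series.
recon = pywt.waverec(coeffs, wavelet='db4')
print(np.allclose(recon[:load.size], load))
```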
Funding: Major Unified Construction Project of PetroChina (2019-40210-000020-02).
Abstract: Since the oil production of a single well in a water-flooding reservoir varies greatly and is hard to predict, an oil production prediction method for single wells based on a temporal convolutional network (TCN) is proposed and verified. The method starts from data processing: the correspondence between water injectors and oil producers is determined according to the influence radius of the water injectors, the influence degree of a water injector on an oil producer in the month concerned is added as a model feature, and a Random Forest (RF) model is built to fill in the dynamic water-flooding data. The single-well history is divided into four stages according to water cut, namely low, middle, high, and extra-high water cut stages. In each stage, a TCN-based prediction model is established, and its hyperparameters are optimized by the Sparrow Search Algorithm (SSA). Finally, the models of the four stages are integrated into one whole-life model of the well for production prediction. The application of this method in Daqing Oilfield, NE China shows that: (1) compared with conventional data processing methods, the data obtained by this method are closer to the actual production, and the resulting data set is more authentic and complete; (2) the TCN model has higher prediction accuracy than 11 other models such as Long Short-Term Memory (LSTM); (3) compared with conventional full-life-cycle models, the integrated staged model significantly reduces the production prediction error.
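A minimal sketch of the stage-splitting step follows, assuming illustrative water-cut cut-offs of 20%, 60%, and 90%; the paper divides the history into four stages, but these thresholds are not restated in the abstract. Each group would then be fitted with its own SSA-tuned TCN.

```python
# Hedged sketch of routing monthly well data to one of four water-cut stages.
import pandas as pd

def water_cut_stage(water_cut_pct: float) -> str:
    """Map a water-cut percentage to one of four production stages (assumed cut-offs)."""
    if water_cut_pct < 20:
        return "low"
    elif water_cut_pct < 60:
        return "middle"
    elif water_cut_pct < 90:
        return "high"
    return "extra_high"

history = pd.DataFrame({
    "month": pd.date_range("2015-01", periods=6, freq="MS"),
    "water_cut": [12.0, 35.5, 58.2, 72.4, 91.3, 95.0],   # percent
    "oil_rate": [48.1, 41.7, 36.4, 30.2, 21.5, 18.9],    # t/month, synthetic
})
history["stage"] = history["water_cut"].apply(water_cut_stage)

# In the paper, a separate TCN model is trained per stage and then integrated.
for stage, rows in history.groupby("stage"):
    print(stage, len(rows), "samples")
```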
Abstract: The ever-growing volume of available visual data (i.e., videos and pictures uploaded by internet users) has attracted the attention of the computer vision research community. Therefore, finding efficient solutions to extract knowledge from these sources is imperative. Recently, the BlazePose system was released for skeleton extraction from images, oriented to mobile devices. With this skeleton graph representation in place, a Spatial-Temporal Graph Convolutional Network (ST-GCN) can be implemented to predict the action. We hypothesize that simply changing the skeleton input data to a different set of joints that offers more information about the action of interest can increase the performance of the ST-GCN for human action recognition (HAR) tasks. Hence, in this study, we present the first implementation of the BlazePose skeleton topology on this architecture for action recognition. Moreover, we propose the Enhanced-BlazePose topology, which achieves better results than its predecessor. Additionally, we propose different skeleton detection thresholds that can improve accuracy even further. We reach a top-1 accuracy of 40.1% on the Kinetics dataset. For the NTU-RGB+D dataset, we achieve 87.59% and 92.1% accuracy for the Cross-Subject and Cross-View evaluation criteria, respectively.
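To show the graph-convolution core that an ST-GCN applies to a skeleton, the sketch below runs one spatial graph convolution on a toy five-joint graph; it is not the 33-landmark BlazePose or Enhanced-BlazePose topology, and the feature and output dimensions are illustrative assumptions.

```python
# Hedged sketch of one spatial graph convolution over a toy skeleton graph.
import numpy as np

# Toy skeleton edges: nose-shoulderL, nose-shoulderR, shoulderL-elbowL, shoulderR-elbowR
edges = [(0, 1), (0, 2), (1, 3), (2, 4)]
n_joints = 5

A = np.zeros((n_joints, n_joints))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_tilde = A + np.eye(n_joints)                         # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt              # symmetric normalisation

X = np.random.default_rng(0).normal(size=(n_joints, 3))  # per-joint (x, y, z) features
W = np.random.default_rng(1).normal(size=(3, 8))          # learnable projection
H = A_hat @ X @ W                                          # one graph convolution
print(H.shape)                                             # (5, 8)
```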
Abstract: A lightweight multi-layer residual temporal convolutional network model (RTCN) is proposed to address the highly complex kinematics and temporal correlation of human motion. RTCN uses 1-D convolution to efficiently capture the spatial structure of human motion and extract the correlations in human motion time series. A residual structure is applied to the proposed network to alleviate the vanishing gradient problem in deep networks. Experiments on the Human3.6M dataset demonstrate that the proposed method effectively reduces motion prediction errors compared with previous methods, especially for long-term prediction.
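A minimal sketch of a residual 1-D convolution block of the kind the abstract describes follows, with illustrative channel counts; the identity shortcut is what lets gradients bypass the convolutions and eases training of deeper stacks.

```python
# Hedged sketch of a residual 1-D convolution block for pose sequences.
import torch
import torch.nn as nn

class ResidualConv1dBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2                           # keep the sequence length
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + x)              # identity shortcut

# A pose sequence: (batch, joint-coordinate channels, time steps)
poses = torch.randn(4, 48, 50)
block = ResidualConv1dBlock(channels=48)
print(block(poses).shape)                                # torch.Size([4, 48, 50])
```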
Abstract: Diabetes, as a chronic disease, is caused by an increase in blood glucose concentration due to failure of pancreatic insulin production or insulin resistance in the body. Predicting the trend of blood glucose levels in advance facilitates prompt treatment, so as to keep blood glucose within the recommended range. Based on flash glucose monitoring data, we propose a method that combines Prophet with temporal convolutional networks (TCN) and achieves good experimental results in predicting patient blood glucose. The proposed model achieves high accuracy in both long-term and short-term prediction of blood glucose, and outperforms other models in adapting to non-stationary data and detecting periodic changes.
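The sketch below shows only the Prophet half of the combination, fitted to a synthetic flash-glucose-style series; the 15-minute sampling interval and seasonality settings are assumptions, and the residual that Prophet leaves behind is what a TCN could then model.

```python
# Hedged sketch of fitting Prophet to a synthetic glucose series.
import numpy as np
import pandas as pd
from prophet import Prophet   # the package is named "fbprophet" in older releases

n = 96 * 3                                               # three days at 15-minute sampling
df = pd.DataFrame({
    "ds": pd.date_range("2024-01-01", periods=n, freq="15min"),
    "y": 6.0 + 1.5 * np.sin(2 * np.pi * np.arange(n) / 96),   # synthetic daily rhythm (mmol/L)
})

model = Prophet(daily_seasonality=True)
model.fit(df)

future = model.make_future_dataframe(periods=8, freq="15min")   # next two hours
forecast = model.predict(future)
print(forecast[["ds", "yhat"]].tail(8))
# The residual df["y"] - forecast["yhat"][:n] is what a TCN could learn to predict next.
```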
Funding: supported by the National Key R&D Program of China under Grant 2018YFB1801500.
Abstract: In order to reduce the physical impairment caused by signal distortion, in this paper we investigate symbol detection with deep learning (DL) methods to improve bit-error performance in optical communication systems. Many DL-based methods have been applied to such systems to improve bit-error performance. Drawing on the speech-to-text approach of automatic speech recognition, this paper proposes a signal-to-symbol method based on DL and designs a receiver for symbol detection in single-polarized optical communication modes. To realize this detection method, we propose a non-causal temporal convolutional network-assisted receiver that detects symbols directly from the baseband signal and integrates most modules of the receiver. Meanwhile, we adopt three training approaches for different signal-to-noise ratios. We also apply a parametric rectified linear unit to enhance the noise robustness of the proposed network. According to the simulation experiments, the bit-error-rate performance of the proposed method is close to or even better than that of the conventional receiver and better than that of the recurrent neural network-based receiver.
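The sketch below illustrates two ingredients the abstract names, a non-causal 1-D convolution with symmetric padding (so each symbol decision can draw on both past and future samples of the baseband waveform) and a parametric rectified linear unit; channel counts and kernel size are illustrative assumptions, not the paper's receiver design.

```python
# Hedged sketch of a non-causal convolution block with a PReLU activation.
import torch
import torch.nn as nn

class NonCausalConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 9):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)  # symmetric: sees past and future
        self.act = nn.PReLU(out_ch)                       # learnable negative slope per channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(x))

# Baseband samples: (batch, I/Q channels, samples)
rx = torch.randn(2, 2, 4096)
block = NonCausalConvBlock(in_ch=2, out_ch=32)
print(block(rx).shape)                                   # torch.Size([2, 32, 4096])
```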
Abstract: In the field of speech bandwidth extension, it is difficult to achieve high speech quality with shallow statistical model methods. Although the application of deep learning has greatly improved the quality of the extended speech, the high model complexity makes it infeasible to run on the client. To tackle these issues, this paper proposes an end-to-end speech bandwidth extension method based on a temporal convolutional neural network, which greatly reduces the complexity of the model. In addition, a new time-frequency loss function is designed to enable narrowband speech to acquire a more accurate wideband mapping in both the time domain and the frequency domain. The experimental results show that the wideband speech reconstructed by the proposed method is superior to the traditional heuristic-rule-based approaches and conventional neural network methods in both subjective and objective evaluations.
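A hedged sketch of a combined time- and frequency-domain loss of the kind the abstract describes: an L1 waveform term plus an L1 term on STFT magnitudes. The paper's exact formulation and weighting may differ, so the STFT settings and the weight `lam` below are assumptions.

```python
# Hedged sketch of a time-frequency loss for bandwidth-extended speech.
import torch

def time_frequency_loss(pred: torch.Tensor, target: torch.Tensor,
                        n_fft: int = 512, lam: float = 0.5) -> torch.Tensor:
    """L1 waveform error plus L1 error between STFT magnitude spectrograms."""
    time_term = torch.mean(torch.abs(pred - target))
    window = torch.hann_window(n_fft, device=pred.device)
    spec_pred = torch.stft(pred, n_fft, window=window, return_complex=True).abs()
    spec_tgt = torch.stft(target, n_fft, window=window, return_complex=True).abs()
    freq_term = torch.mean(torch.abs(spec_pred - spec_tgt))
    return time_term + lam * freq_term

wideband = torch.randn(4, 16000)                 # one second of 16 kHz speech per item
estimate = wideband + 0.05 * torch.randn_like(wideband)
print(time_frequency_loss(estimate, wideband).item())
```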