Electrochemical impedance spectroscopy (EIS) is an effective technique for lithium-ion battery state-of-health diagnosis, and predicting the impedance spectrum from the battery charging curve is expected to enable battery impedance testing during vehicle operation. However, the mechanistic relationship between charging curves and the impedance spectrum remains unclear, which hinders the development and optimization of EIS-based prediction techniques. In this paper, we predict the impedance spectrum from the battery charging voltage curve and optimize the input based on electrochemical mechanistic analysis and machine learning. The internal electrochemical relationships between the charging curve, incremental capacity curve, and impedance spectrum are explored, which improves the physical interpretability of this prediction and helps define the proper partial voltage range for the input of machine learning models. Different machine learning algorithms are adopted to verify the proposed framework based on sequence-to-sequence predictions. In addition, predictions with different partial voltage ranges, at different states of charge, and with different training data ratios are evaluated to show that the proposed method has high generalization and robustness. The experimental results show that the proper partial voltage range yields high accuracy and converges to the findings of the electrochemical analysis. The prediction errors for the impedance spectrum are less than 1.9 mΩ with the proper partial voltage range selected by correlative analysis of the electrochemical reactions inside the batteries. Even with the voltage range reduced to 3.65-3.75 V, the predictions remain reliable, with most RMSEs less than 4 mΩ.
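As a toy illustration of the charging-curve-to-impedance mapping described above, the sketch below resamples charging curves over a fixed partial voltage window (the 3.65-3.75 V window is taken from the abstract) and fits a linear sequence-to-sequence map by least squares. The sigmoid charging curves, the scalar "ageing shift", and the linear impedance targets are all invented stand-ins, not the paper's data or models.

```python
import numpy as np

def extract_partial_curve(voltage, capacity, v_lo=3.65, v_hi=3.75, n_points=20):
    """Resample charged capacity onto a fixed grid inside a partial voltage
    window (window from the abstract; the grid size is our own choice)."""
    grid = np.linspace(v_lo, v_hi, n_points)
    return np.interp(grid, voltage, capacity)

# Synthetic "cells": a shifted sigmoid stands in for a real charging curve,
# and a toy linear ageing law stands in for measured impedance magnitudes.
rng = np.random.default_rng(0)
n_cells, n_freq = 30, 10
X, Y = [], []
for _ in range(n_cells):
    shift = rng.normal(0.0, 0.05)                      # stand-in for ageing
    v = np.linspace(3.0, 4.2, 200)
    q = 2.0 / (1.0 + np.exp(-20.0 * (v - 3.7 - shift)))
    X.append(extract_partial_curve(v, q))
    Y.append(0.010 + 0.002 * shift + 0.001 * np.arange(n_freq))
X, Y = np.array(X), np.array(Y)

# The simplest possible sequence-to-sequence regressor: one linear map from
# the partial-curve features to all impedance points, fitted by least squares.
A = np.c_[X, np.ones(n_cells)]
W, *_ = np.linalg.lstsq(A, Y, rcond=None)
rmse = float(np.sqrt(np.mean((A @ W - Y) ** 2)))
```

On this synthetic data the linear map fits almost exactly; the paper's point is that choosing the voltage window by electrochemical analysis keeps such a mapping well posed.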
Virtual machine (VM) consolidation is an effective way to improve resource utilization and reduce energy consumption in cloud data centers. Most existing studies have treated VM consolidation as a bin-packing problem, but current schemes commonly ignore the long-term relationship between VMs and hosts. In addition, resource optimization in VM consolidation lacks long-term consideration, which results in unnecessary VM migration and increased energy consumption. To address these limitations, a VM consolidation method based on multi-step prediction and an affinity-aware technique for energy-efficient cloud data centers (MPaAF-VMC) is proposed. The proposed method uses an improved linear regression algorithm to predict the next-moment resource utilization of hosts and VMs, and obtains the staged resource demand over a future period through multi-step prediction, realized by iterative prediction. Then, based on the multi-step prediction, an affinity model between VMs and hosts is designed using the first-order correlation coefficient and the Euclidean distance. During VM consolidation, the affinity value is used to select the VMs to migrate and the hosts to place them on. The proposed method is compared with existing consolidation algorithms on PlanetLab and Google cluster real workload data using the CloudSim simulation platform. Experimental results show that the proposed method achieves significant improvements in reducing energy consumption, VM migration costs, and service level agreement (SLA) violations.
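The affinity idea (combining a correlation coefficient with a Euclidean distance over predicted utilization sequences) can be sketched as follows. The Pearson correlation, the distance-to-closeness mapping, and the 50/50 weighting are our own illustrative choices; the paper's exact affinity model may differ.

```python
import math

def affinity(vm_util, host_util, alpha=0.5):
    """Affinity between a VM and a candidate host over predicted
    utilization steps: low correlation (complementary load) and small
    Euclidean distance both raise the score."""
    n = len(vm_util)
    mv, mh = sum(vm_util) / n, sum(host_util) / n
    cov = sum((a - mv) * (b - mh) for a, b in zip(vm_util, host_util))
    sv = math.sqrt(sum((a - mv) ** 2 for a in vm_util))
    sh = math.sqrt(sum((b - mh) ** 2 for b in host_util))
    corr = cov / (sv * sh) if sv > 0 and sh > 0 else 0.0
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(vm_util, host_util)))
    closeness = 1.0 / (1.0 + dist)              # map distance into (0, 1]
    # (1 - corr) / 2 maps correlation in [-1, 1] into [0, 1]
    return alpha * (1.0 - corr) / 2.0 + (1.0 - alpha) * closeness

vm = [0.2, 0.4, 0.6]                   # predicted multi-step utilization
a_complementary = affinity(vm, [0.6, 0.4, 0.2])   # anti-correlated host
a_correlated = affinity(vm, [0.2, 0.4, 0.6])      # host with the same load shape
```

During consolidation one would then place the VM on the host with the highest affinity; here the anti-correlated host scores higher, as intended.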
Spectrum prediction plays an important role in enabling the secondary user (SU) to utilize shared spectrum resources. However, current prediction methods are not well suited to spectrum with high burstiness, as the parameters of prediction models cannot be adjusted properly. This paper studies the prediction problem for bursty bands. Specifically, we first collect real WiFi transmission data in the 2.4 GHz Industrial, Scientific, and Medical (ISM) band, which is considered to have bursty characteristics. Feature analysis of the data indicates that its spectrum occupancy law is time-variant, which suggests that the performance of a commonly used single prediction model could be restricted. Considering that matching diverse spectrum states with multiple prediction models may essentially improve prediction performance, we then propose a deep-reinforcement-learning-based multilayer perceptron (DRL-MLP) method to address this matching problem. The state space of the method is composed of feature vectors, each containing multi-dimensional feature values. Meanwhile, the action space consists of several multilayer perceptrons (MLPs) trained on multiple classified data sets. We finally conduct experiments with the collected real data and simulations with generated data to verify the performance of the proposed method. The results demonstrate that the proposed method significantly outperforms state-of-the-art methods in terms of prediction accuracy.
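The matching of spectrum states to prediction models can be illustrated, in a heavily simplified form, as a bandit-style selector over a pool of predictors. A real DRL agent conditions on the feature-vector state; this epsilon-greedy sketch ignores the state and only learns which model pays off on average, so it is a stand-in for the idea, not the DRL-MLP method itself.

```python
import random

class PredictorSelector:
    """Epsilon-greedy selection among candidate prediction models."""
    def __init__(self, n_models, eps=0.1, seed=0):
        self.q = [0.0] * n_models      # running mean reward per model
        self.n = [0] * n_models
        self.eps = eps
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.eps:              # explore
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])  # exploit

    def update(self, i, reward):
        self.n[i] += 1
        self.q[i] += (reward - self.q[i]) / self.n[i]  # incremental mean

# Toy simulation: model 1 happens to predict the current band best, so it
# yields the highest (here: deterministic) prediction-accuracy reward.
sel = PredictorSelector(3, eps=0.1, seed=0)
rewards = [0.3, 0.9, 0.3]
for _ in range(500):
    i = sel.select()
    sel.update(i, rewards[i])
```

After a few hundred rounds the selector has learned to route predictions to the best-matching model.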
Spectrum prediction is one of the new techniques in cognitive radio that predicts changes in the spectrum state and plays a crucial role in improving spectrum sensing performance. Prediction models previously trained in a source band tend to perform poorly in a new target band because of changes in the channel. In addition, cognitive radio devices require dynamic spectrum access, which means the time available to retrain the model in the new band is minimal. To increase the amount of data in the target band, we use a GAN to convert data from the source band into the target band. First, we analyze the data differences between bands and calculate FID scores to identify the available bands with the smallest difference from the target band. The original GAN structure is unsuitable for converting spectrum data, so we propose the spectrum data conversion GAN (SDC-GAN). The generator consists of a convolutional network and an LSTM module that can integrate multiple features of the data and convert data from the source band to the target band. Finally, we use the generated target-band data to train the prediction model. The experimental results validate the effectiveness of the proposed algorithm.
Spectrum prediction is a promising technology for inferring the future spectrum state by exploiting inherent patterns in historical spectrum data. In practice, for a given spectrum band of interest, spectrum prediction based on traditional learning methods does not work well when historical data are relatively scarce. Thus, this paper proposes a cross-band spectrum prediction model based on transfer learning. Firstly, by analysing service activities and computing the distances between various frequency points based on Dynamic Time Warping, the similarity between spectrum bands is verified. Next, the features that mainly affect the performance of transfer learning in cross-band spectrum prediction are explored by leveraging transfer component analysis. Then, the effectiveness of transfer learning for cross-band spectrum prediction is demonstrated. Further, experimental results with real-world spectrum data demonstrate that the proposed model outperforms state-of-the-art models when historical spectrum data are limited.
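A minimal Dynamic Time Warping distance, of the kind usable for the band-similarity measurement mentioned above, can be written as a small dynamic program (this is the textbook O(nm) formulation; the paper's exact distance and any windowing constraints are not specified here):

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two scalar sequences.

    d[i][j] holds the minimal accumulated cost of aligning a[:i] with b[:j];
    each step may advance either sequence or both (the three-way min).
    """
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # advance a
                                 d[i][j - 1],      # advance b
                                 d[i - 1][j - 1])  # advance both
    return d[n][m]
```

Bands whose occupancy series have a small DTW distance would then be candidates as source bands for transfer.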
Video prediction is the problem of generating future frames by exploiting the spatiotemporal correlation in the past frame sequence. It is one of the crucial problems in computer vision and has many real-world applications, mainly focused on predicting future scenarios to avoid undesirable outcomes. However, modeling future image content and objects is challenging due to the dynamic evolution and complexity of the scene, such as occlusions, camera movements, delay, and illumination. Direct frame synthesis and optical-flow estimation are the common approaches, but researchers have mainly focused on one of them at a time. Both have limitations: direct frame synthesis usually yields blurry predictions due to complex pixel distributions in the scene, and optical-flow estimation usually produces artifacts due to large object displacements or obstructions in the clip. In this paper, we construct a deep neural network, the Frame Prediction Network (FPNet-OF), with multiple-branch inputs (optical flow and original frame) to predict the future video frame by adaptively fusing the future object motion with the future frame generator. The key idea is to jointly optimize direct RGB frame synthesis and dense optical flow estimation to produce a superior video prediction network. Using various real-world datasets, we experimentally verify that our proposed framework produces higher-quality video frames than other state-of-the-art frameworks.
High frequency (HF) communication is widely used owing to merits such as easy deployment and wide communication coverage. Spectrum prediction is a promising technique to facilitate working frequency selection and enhance the function of automatic link establishment. Most existing spectrum prediction algorithms focus on predicting spectrum values in a slot-by-slot manner and therefore lack timeliness. In this paper, deep learning based spectrum prediction is developed by simultaneously predicting multi-slot-ahead states of multiple spectrum points within a period of time. Specifically, we first employ supervised learning and construct samples from long-term and short-term HF spectrum data. Then, advanced residual units are introduced to build multiple residual network modules that respectively capture characteristics in these data at diverse time scales. Further, a convolutional neural network fuses the outputs of the residual network modules for temporal-spectral prediction; combined with the residual network modules, this constitutes the deep temporal-spectral residual network. Experiments demonstrate that the proposed approach has a significant advantage over the benchmark schemes.
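The "advanced residual units" stacked in such networks all share one structural idea: the block's output is its input plus a learned correction, which keeps gradients flowing through deep stacks. A dependency-light sketch (the fully-connected form and the weights are illustrative; the paper's units are convolutional):

```python
import numpy as np

def residual_unit(x, w1, w2):
    """One residual unit y = x + W2 · relu(W1 · x).

    The skip connection means that with zero (or tiny) inner weights the
    unit is simply the identity, which is what makes deep stacks trainable.
    """
    h = np.maximum(w1 @ x, 0.0)      # inner transform with ReLU
    return x + w2 @ h                # add the correction to the input

x = np.array([1.0, -2.0, 0.5])
w1 = np.zeros((4, 3))
w2 = np.zeros((3, 4))
y = residual_unit(x, w1, w2)         # zero weights -> identity mapping
```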
This paper investigates the problem of data scarcity in spectrum prediction. A cognitive radio equipment may frequently switch its target frequency as the electromagnetic environment changes. A previously trained prediction model often cannot maintain good performance when facing only a small amount of historical data for the new target frequency. Moreover, the cognitive radio equipment usually implements dynamic spectrum access in real time, which means the time available to recollect data for the new frequency band and retrain the model is very limited. To address these issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GAN) and deep transfer learning. Firstly, through similarity measurement, we pre-train a GAN model using the historical data of the frequency band most similar to the target band. Then, after data augmentation by feeding the small amount of target data into the pre-trained GAN, a temporal-spectral residual network is further trained using deep transfer learning and the high-similarity data generated by the GAN. Finally, experimental results demonstrate the effectiveness of the proposed framework.
Considering chaotic time series multi-step prediction, a multi-step direct prediction model based on partial least squares (PLS) is proposed in this article, where PLS, a method for predicting a set of dependent variables from a large set of predictors, is used to model the dynamic evolution between state-space points and the corresponding future points. The model eliminates the error accumulation of the common single-step local model algorithm and avoids the high multi-collinearity problem that arises in the reconstructed state space as the embedding dimension increases. Simulation predictions are performed on the Mackey-Glass chaotic time series with the model. Satisfactory prediction accuracy is obtained, verifying the model's efficiency. In the experiments, the number of extracted components in PLS is set with a cross-validation procedure.
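The direct multi-step idea (fit one model per horizon h mapping the embedded state to x at t+h, rather than iterating a one-step model) can be sketched as below. To keep the sketch dependency-light we substitute ordinary least squares for PLS and a sine wave for the Mackey-Glass series; the collinearity of high-dimensional embeddings is exactly what PLS would additionally handle.

```python
import numpy as np

def embed(series, dim):
    """Delay-embed a scalar series into state vectors of length `dim`."""
    return np.array([series[i:i + dim] for i in range(len(series) - dim)])

def fit_direct(series, dim, horizon):
    """One direct model per horizon h: state x_t -> value x_{t+h}."""
    X = embed(series, dim)
    models = {}
    for h in range(1, horizon + 1):
        y = series[dim + h - 1: dim + h - 1 + len(X)]
        Xh = np.c_[X[:len(y)], np.ones(len(y))]       # bias column
        models[h], *_ = np.linalg.lstsq(Xh, y, rcond=None)
    return models

t = np.arange(400)
s = np.sin(0.2 * t)                                   # toy stand-in series
models = fit_direct(s, dim=8, horizon=5)

x_last = np.r_[s[-8:], 1.0]                           # last embedded state
preds = [x_last @ models[h] for h in range(1, 6)]     # direct h-step outputs
truth = np.sin(0.2 * (t[-1] + np.arange(1, 6)))
```

Because each horizon has its own model, the 5-step prediction never feeds its own 1-step errors back in, which is the point the abstract makes.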
Traffic flow prediction is an important part of the intelligent transportation system, and accurate multi-step traffic flow prediction plays an important role in improving the operational efficiency of the traffic network. Since traffic flow data have complex spatio-temporal correlation and non-linearity, existing prediction methods mainly combine a Graph Convolutional Network (GCN) with a recurrent neural network, a strategy that performs well in traffic prediction tasks. However, multi-step prediction error accumulates with the prediction step size. Some scholars use multiple sampled sequences to achieve more accurate prediction results, but this requires high-end hardware and multiplies training time. Considering the spatiotemporal correlation of traffic flow and the influence of external factors, we propose an Attention-Based Spatio-Temporal Graph Convolutional Network considering External Factors (ABSTGCN-EF) for multi-step traffic flow prediction. The model treats traffic flow as diffusion on a digraph and extracts the spatial characteristics of traffic flow through the GCN. We add meaningful time-slot attention to the encoder-decoder to form an Attention Encoder Network (AEN) that handles temporal correlation, with the attention vector used as a competitive choice to draw the correlation between predicted states and historical states. We also consider the impact of three external factors (daytime, weekdays, and traffic accident markers) on the traffic flow prediction task. Experiments on two public data sets show that it makes sense to consider external factors: the prediction performance of our ABSTGCN-EF model is 7.2%–8.7% higher than the state-of-the-art baselines.
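The "traffic flow as diffusion on a digraph" view can be illustrated with a single random-walk-normalized diffusion step; the four-sensor graph and flow values below are made up, and a real GCN would additionally apply learned weight matrices after the mixing step.

```python
import numpy as np

# Toy road graph: 4 sensors, unweighted adjacency (illustrative values).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.array([[10.0], [20.0], [30.0], [40.0]])   # current flow per sensor

# Random-walk normalization P = D^-1 A, as used in diffusion-convolution GCNs.
P = A / A.sum(axis=1, keepdims=True)

# One diffusion step replaces each sensor's flow with the mean of its
# neighbours' flows; stacking K such steps gives a K-hop receptive field.
X_diffused = P @ X
```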
As the earliest invented and utilized communication approach, shortwave, now known as high frequency (HF) communication, experiences a deteriorating HF electromagnetic environment. Finding quality frequencies in an efficient manner has become one of the key challenges in HF communication. Spectrum prediction infers future spectrum status from historical spectrum data by exploring its inherent correlations and regularities. Investigation of HF electromagnetic environment data reveals the correlations and predictability of the HF frequency band in both the time and frequency domains. To solve this problem, we develop Spectrum Prediction-based Frequency Band Pre-selection (SP-FBP) for HF communications. The pre-selection of HF frequency bands mainly incorporates prediction of HF spectrum occupancy and prediction of HF usable frequency, which provide the frequency band ranking of spectrum occupancy and alternative frequencies for spectrum sensing, respectively. Performance evaluation on real-world HF spectrum data shows that SP-FBP significantly improves the efficiency of finding quality frequencies in HF communications.
Accurate landslide displacement prediction is an important part of a landslide warning system. Aiming at the dynamic characteristics of landslide evolution and the shortcomings of traditional static prediction models, this paper proposes a dynamic prediction model of landslide displacement based on singular spectrum analysis (SSA) and a stacked long short-term memory (SLSTM) network. The SSA is used to decompose the accumulated landslide displacement time series into trend-term and periodic-term displacement subsequences. A cubic polynomial function is used to predict the trend-term subsequence, and the SLSTM neural network is used to predict the periodic-term subsequence. Meanwhile, Bayesian optimization is used to determine that the SLSTM network input sequence length is 12 and the number of hidden layer nodes is 18. The SLSTM network is updated by adding predicted values to the training set to achieve dynamic displacement prediction. Finally, the accumulated landslide displacement is obtained by superimposing the predicted values of the displacement subsequences. The proposed model was verified on the Xintan landslide in Hubei Province, China. The results show that when predicting the periodic-term displacement, the SLSTM network has higher prediction accuracy than the support vector machine (SVM) and autoregressive integrated moving average (ARIMA): the mean relative error (MRE) is reduced by 4.099% and 3.548%, respectively, and the root mean square error (RMSE) is reduced by 5.830 mm and 3.854 mm, respectively. It is concluded that the SLSTM network model can better simulate the dynamic characteristics of landslides.
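The SSA decomposition step (embed the series into a trajectory matrix, take an SVD, and reconstruct a component from the leading singular triples by anti-diagonal averaging) can be sketched as follows. The window length and component count below are illustrative, not the paper's settings.

```python
import numpy as np

def ssa_trend(series, window, n_components=1):
    """Reconstruct the component of `series` spanned by the leading
    `n_components` singular triples of its trajectory (Hankel) matrix."""
    n = len(series)
    k = n - window + 1
    # trajectory matrix: column j is the lagged window starting at j
    traj = np.array([series[i:i + window] for i in range(k)]).T
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # anti-diagonal (Hankel) averaging maps the matrix back to a series
    recon = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            recon[i + j] += approx[i, j]
            counts[i + j] += 1
    return recon / counts
```

Using all components reconstructs the series exactly; keeping only the leading ones yields a smooth trend-like term, with the remainder serving as the periodic-term input to a model such as the SLSTM.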
In current research on spectrum leasing, the Common model and the Property-rights model are the two main approaches to dynamic spectrum sharing. However, the Common model does not consider the obligation of the Primary System (PS) and is unfair to the Secondary System (SS), while cooperation based on the Property-rights model has feasibility problems. This paper proposes a novel system model in which a Cost-Prediction scheme for Spectrum Leasing (CPSL scheme) is designed to forecast the cost the PS would pay for leasing spectrum. A Cost Function is introduced as a criterion to evaluate the potential cost of spectrum leasing for the PS. The simulation results show that, compared with the Common model based scheme, the CPSL scheme substantially improves the QoS of delay-sensitive traffic in the SS at the cost of a small degradation in PS performance.
A content-aware multi-step prediction control (CAMPC) algorithm is proposed to determine the bitrate of 360-degree videos, aiming to enhance the quality of experience (QoE) of users and reduce the cost of video content providers (VCP). The CAMPC algorithm first employs a neural network to generate the content richness and combines it with the current field of view (FOV) to accurately predict the probability distribution of tiles being viewed. Then, for the tiles in the predicted viewport, which directly affect QoE, the CAMPC algorithm utilizes multi-step prediction of future system states and accordingly selects the bitrates of multiple subsequent steps, instead of an instantaneous state. Meanwhile, it controls the buffer occupancy to eliminate the impact of prediction errors. We implement CAMPC on players by building a 360-degree video streaming platform and evaluate other advanced adaptive bitrate (ABR) rules over a real network. Experimental results show that CAMPC can save 83.5% of bandwidth resources compared with a scheme that completely transmits the tiles outside the viewport with the Dynamic Adaptive Streaming over HTTP (DASH) protocol. Besides, the proposed method improves the system utility by 62.7% and 27.6% compared with the DASH official and viewport-based rules, respectively.
In cognitive radio networks, Secondary Users (SUs) have opportunities to access a spectrum channel when the primary user is not using it, which enhances resource utilization. To avoid interference with primary users, it is essential for SUs to sense idle spectrum channels, but it is also very hard to detect all channels in a short time due to hardware restrictions. This paper proposes a novel spectrum prediction scheme based on Support Vector Machines (SVM) to save the time and energy consumed by spectrum sensing, by predicting the channels' states before detection. Besides, spectrum utilization is further improved by a cooperative mechanism in which SUs share the channels' historical state information and prediction results with neighbor nodes. The simulation results show that the algorithm has high prediction accuracy even with small training samples and can noticeably reduce the detection energy, which in turn improves spectrum utilization.
Spectrum sensing is one of the key issues in cognitive radio networks. Most previous work concentrates on sensing the spectrum in a single spectrum band. In this paper, we propose a spectrum sensing sequence prediction scheme for cognitive radio networks with multiple spectrum bands to decrease the spectrum sensing time and increase the throughput of secondary users. The scheme is based on recent advances in computational learning theory, which have shown that prediction is synonymous with data compression. A Ziv-Lempel data compression algorithm is used to design our spectrum sensing sequence prediction scheme, with the spectrum band usage history used for the prediction. Simulation results show that the proposed scheme can reduce the average sensing time and improve the system throughput significantly.
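The compression-as-prediction idea can be sketched with an LZ78-style parser: parse the usage history into phrases, count which symbol follows each phrase, and predict the most frequent continuation of the phrase currently being matched. This is a simplified stand-in for the paper's Ziv-Lempel predictor, with binary symbols ('0' idle, '1' busy) as an assumed encoding.

```python
def lz78_predict(history, default="0"):
    """Predict the next symbol of `history` via an LZ78-style parse."""
    dictionary = {""}        # set of phrases parsed so far
    counts = {}              # phrase -> {next symbol: count}
    phrase = ""
    for sym in history:
        counts.setdefault(phrase, {})
        counts[phrase][sym] = counts[phrase].get(sym, 0) + 1
        if phrase + sym in dictionary:
            phrase += sym            # keep extending the current phrase
        else:
            dictionary.add(phrase + sym)
            phrase = ""              # phrase complete, restart the match
    # predict the most frequent continuation of the unfinished phrase,
    # falling back to the empty context, then to `default`
    stats = counts.get(phrase) or counts.get("", {})
    return max(stats, key=stats.get) if stats else default
```

A band whose predicted next state is idle would be scheduled early in the sensing sequence.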
A compound neural network was constructed for identification and multi-step prediction. Under a PID-type long-range predictive cost function, the control signal was calculated with a gradient algorithm. The nonlinear controller's structure is similar to that of a conventional PID controller, and its parameters are tuned online by a local recurrent neural network. The controller performs better than the conventional PID controller; a simulation study shows its effectiveness and good performance.
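For reference, the conventional PID law that such a controller structurally resembles can be sketched as below; the gains, plant, and time step are illustrative choices, and the paper's recurrent-network tuning of the parameters is omitted.

```python
class PID:
    """Textbook discrete PID controller u = Kp·e + Ki·∫e dt + Kd·de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # integral term
        deriv = (err - self.prev_err) / self.dt        # derivative term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# drive a first-order plant x' = -x + u toward setpoint 1.0 (Euler steps)
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.step(1.0, x)
    x += (-x + u) * 0.01
```

The integral term removes the steady-state error; an adaptive scheme like the paper's would adjust kp, ki, kd online instead of fixing them.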
Predictions of averaged monthly SST anomaly series for the Niño 1-4 regions in the context of an auto-adaptive filter are made using a model combining singular spectrum analysis (SSA) and auto-regression (AR). The results show that the scheme is efficient in forward forecasting of the strong ENSO event in 1997-1998 and highly reliable in retrospective forecasting of three corresponding historical strong ENSO events. The scheme shows stable skill and high accuracy in experiments on both independent samples and real cases. With modifications, the SSA-AR scheme is expected to become an efficient model for routine ENSO prediction.
This paper focuses on potential issues related to the random selection of a sensing channel after the prediction phase in a Cognitive Radio Network (CRN). A novel approach (Approach-1) for improved selection is proposed, which relies on the probabilities with which channels are predicted idle. Further, closed-form expressions are derived for the throughput of Cognitive Users (CUs) under the conventional and proposed approaches. In addition, a fundamental approach for computing the prediction probabilities is proposed. Moreover, a new challenging issue, named "sense and stuck", was observed in the conventional approach. The proposed approach is validated by comparing its results with those of the conventional approach. However, the prediction probabilities require pre-channel-state information, which may be unavailable in particular scenarios; therefore, a modified selection method is introduced to avoid the sense-and-stuck problem. An algorithm to evaluate the throughput using the random, improved, and modified selection methods is presented together with its space and time complexities. Furthermore, for additional improvement in CU throughput, a new frame structure is introduced in which the spectrum prediction and sensing periods are exploited for simultaneous data transmission via the underlay spectrum access technique (Approach-2). The simulated results of Approach-2 are compared with the results of Approach-1, confirming a significant improvement in throughput.
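The gain from probability-aware selection over random selection can be illustrated in two lines: the chance that the sensed channel is actually idle is the maximum idle probability under the improved policy, versus the average under a uniform random pick. The per-channel probabilities below are hypothetical, and this sketch ignores the throughput expressions and the sense-and-stuck dynamics of the paper.

```python
def idle_hit_probability(idle_probs, policy="improved"):
    """Probability that the channel chosen for sensing is actually idle.

    'improved' always senses the most-probably-idle channel (Approach-1
    style); 'random' models the conventional uniform pick among the
    predicted-idle channels.
    """
    if policy == "improved":
        return max(idle_probs)
    return sum(idle_probs) / len(idle_probs)

probs = [0.9, 0.6, 0.4]                 # hypothetical idle probabilities
best = idle_hit_probability(probs)               # improved selection
rand = idle_hit_probability(probs, "random")     # conventional selection
```

Here the improved policy senses an idle channel 90% of the time versus about 63% for the random pick, which is the mechanism behind the throughput gain.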
Based on the concept of ant colony optimization and the idea of population in genetic algorithms, a novel global optimization algorithm, called hybrid ant colony optimization (HACO), is proposed in this paper to tackle continuous-space optimization problems. It was compared with other well-known stochastic methods on benchmark function optimization and was also used to efficiently select an appropriate dilation by optimizing the wavelet power spectrum of the hydrophobic sequence of a protein, which is the key step in using the continuous wavelet transform (CWT) to predict α-helices and connecting peptides.
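The flavor of such population-based continuous optimizers can be conveyed with a bare-bones sketch: keep sampling around the best solution found so far with a shrinking spread. This is emphatically not the HACO algorithm (it has no pheromone model or genetic operators), just a minimal stand-in showing the population-plus-stochastic-refinement pattern on a one-dimensional objective.

```python
import random

def population_search(f, bounds, pop_size=30, iters=200, seed=1):
    """Minimize f on [lo, hi]: resample a population around the incumbent
    best with a geometrically shrinking Gaussian spread."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=f)
    spread = (hi - lo) / 2.0
    for _ in range(iters):
        pop = [min(max(best + rng.gauss(0.0, spread), lo), hi)
               for _ in range(pop_size)]
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand                  # keep the incumbent if no improvement
        spread *= 0.95                   # shrink the search neighbourhood
    return best

# minimize a shifted parabola on [-5, 5]; optimum at x = 2
x_star = population_search(lambda x: (x - 2.0) ** 2, (-5.0, 5.0))
```

In HACO the resampling distribution would instead be shaped by pheromone information shared across the population, and the objective would be the wavelet power spectrum mentioned above.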
Funding (EIS impedance-spectrum prediction): supported by a grant from the China Scholarship Council (202006370035) and a fund from Otto Monsteds Fund (4057941073).
Funding (MPaAF-VMC VM consolidation): supported by the National Natural Science Foundation of China (62172089, 61972087, 62172090).
基金supported in part by the China National Key R&D Program(no.2020YF-B1808000)Beijing Natural Science Foundation(No.L192002)+2 种基金in part by the Fundamental Research Funds for the Central Universities(No.328202206)the National Natural Science Foundation of China(No.61971058)in part by"Advanced and sophisticated"discipline construction project of universities in Beijing(No.20210013Z0401)。
Abstract: Spectrum prediction plays an important role in enabling the secondary user (SU) to utilize shared spectrum resources. However, currently used prediction methods do not apply well to spectrum with high burstiness, as the parameters of the prediction models cannot be adjusted properly. This paper studies the prediction problem for bursty bands. Specifically, we first collect real Wi-Fi transmission data in the 2.4 GHz Industrial, Scientific, and Medical (ISM) band, which is considered to have bursty characteristics. Feature analysis of the data indicates that its spectrum occupancy law is time-variant, which suggests that the performance of a commonly used single prediction model would be restricted. Considering that matching diverse spectrum states to multiple prediction models may essentially improve prediction performance, we then propose a deep reinforcement learning based multilayer perceptron (DRL-MLP) method to address this matching problem. The state space of the method is composed of feature vectors, each containing multi-dimensional feature values. Meanwhile, the action space consists of several multilayer perceptrons (MLPs) trained on multiple classified data sets. We finally conduct experiments with the collected real data and simulations with generated data to verify the performance of the proposed method. The results demonstrate that the proposed method significantly outperforms the state-of-the-art methods in terms of prediction accuracy.
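The core idea of matching spectrum conditions to the best of several prediction models can be illustrated, in heavily simplified form, with an epsilon-greedy bandit standing in for the DRL agent (the paper's method additionally conditions the choice on feature-vector states). The per-model accuracies below are hypothetical.

```python
import random

class PredictorSelector:
    """Epsilon-greedy bandit that learns which of several prediction models
    performs best; a simplified stand-in for a DRL-based matcher."""
    def __init__(self, n_models, epsilon=0.1, seed=0):
        self.q = [0.0] * n_models       # running mean reward per model
        self.n = [0] * n_models
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda i: self.q[i])

    def update(self, model, accuracy):
        self.n[model] += 1
        self.q[model] += (accuracy - self.q[model]) / self.n[model]

sel = PredictorSelector(n_models=3)
true_acc = [0.6, 0.9, 0.7]              # hypothetical per-MLP accuracy
for _ in range(500):
    m = sel.select()
    sel.update(m, true_acc[m] + sel.rng.gauss(0, 0.05))
print("preferred model:", max(range(3), key=lambda i: sel.q[i]))
```

After enough rounds the selector concentrates on the model with the highest observed prediction accuracy, which is the behavior the DRL agent provides on a per-state basis.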
Funding: supported by the National Natural Science Foundation of China (No. 11975307) and the China National Defence Science and Technology Innovation Special Zone Project (19-H863-01-ZT-003-003-12).
Abstract: Spectrum prediction is one of the new techniques in cognitive radio that predicts changes in the spectrum state and plays a crucial role in improving spectrum sensing performance. Prediction models previously trained in a source band tend to perform poorly in a new target band because of channel changes. In addition, cognitive radio devices require dynamic spectrum access, which means that the time available to retrain the model in the new band is minimal. To increase the amount of data in the target band, we use a generative adversarial network (GAN) to convert data from the source band into the target band. First, we analyze the data differences between bands and compute FID scores to identify the available band with the slightest difference from the target band. The original GAN structure is unsuitable for converting spectrum data, so we propose the spectrum data conversion GAN (SDC-GAN). The generator consists of a convolutional network and an LSTM module that can integrate multiple features of the data and convert data from the source band to the target band. Finally, we use the generated target band data to train the prediction model. The experimental results validate the effectiveness of the proposed algorithm.
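The band-similarity step — scoring candidate source bands by Fréchet distance against the target band — can be sketched as below. The "features" here are random stand-ins for real spectrum features, and the Gaussian-fit Fréchet formula is the standard FID computation.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_score(feat_a, feat_b):
    """Frechet distance between Gaussian fits of two feature sets:
    ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2*(Ca Cb)^{1/2})."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    ca = np.cov(feat_a, rowvar=False)
    cb = np.cov(feat_b, rowvar=False)
    covmean = sqrtm(ca @ cb)
    if np.iscomplexobj(covmean):        # drop tiny numerical imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(ca + cb - 2 * covmean))

rng = np.random.default_rng(0)
band_a = rng.normal(0.0, 1.0, size=(500, 4))   # stand-in target-band features
band_b = rng.normal(0.5, 1.0, size=(500, 4))   # similar candidate band
band_c = rng.normal(3.0, 1.0, size=(500, 4))   # dissimilar candidate band
print(fid_score(band_a, band_b), fid_score(band_a, band_c))
```

The candidate with the lowest score relative to the target band is the one used to pre-train the conversion GAN.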
Funding: supported by the National Key R&D Program of China under Grants 2018AAA0102303 and 2018YFB1801103, the National Natural Science Foundation of China (Nos. 61871398 and 61931011), the Natural Science Foundation for Distinguished Young Scholars of Jiangsu Province (No. BK20190030), and the Equipment Advanced Research Field Foundation (No. 61403120304).
Abstract: Spectrum prediction is a promising technology for inferring future spectrum states by exploiting the inherent patterns of historical spectrum data. In practice, for a given spectrum band of interest, spectrum prediction based on traditional learning methods does not work well when historical data are relatively scarce. Thus, this paper proposes a cross-band spectrum prediction model based on transfer learning. Firstly, by analysing service activities and computing the distances between various frequency points based on dynamic time warping, the similarity between spectrum bands is verified. Next, the features that mainly affect the performance of transfer learning in cross-band spectrum prediction are explored by leveraging transfer component analysis. Then, the effectiveness of transfer learning for cross-band spectrum prediction is demonstrated. Further, experimental results with real-world spectrum data demonstrate that the performance of the proposed model is better than that of the state-of-the-art models when historical spectrum data are limited.
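The dynamic-time-warping distance used above for measuring similarity between frequency points can be sketched with the classic dynamic program; the toy occupancy patterns below are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-shifted copy of an occupancy pattern stays close under DTW,
# while an inverted pattern does not.
x = [0, 0, 1, 1, 1, 0, 0]
print(dtw_distance(x, [0, 1, 1, 1, 0, 0, 0]))  # shifted: small
print(dtw_distance(x, [1, 1, 0, 0, 0, 1, 1]))  # inverted: larger
```

Because DTW aligns sequences elastically in time, two bands with the same service activity pattern but different phase still score as similar, which is exactly what the cross-band analysis needs.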
Funding: supported by an Incheon National University Research Grant in 2017.
Abstract: Video prediction is the problem of generating future frames by exploiting the spatiotemporal correlation of the past frame sequence. It is one of the crucial issues in computer vision and has many real-world applications, mainly focused on predicting future scenarios to avoid undesirable outcomes. However, modeling future image content and objects is challenging due to the dynamic evolution and complexity of the scene, such as occlusions, camera movements, delay, and illumination. Direct frame synthesis and optical-flow estimation are common approaches, but researchers have mainly focused on video prediction using only one of them. Both methods have limitations: direct frame synthesis usually produces blurry predictions due to complex pixel distributions in the scene, and optical-flow estimation usually produces artifacts due to large object displacements or obstructions in the clip. In this paper, we construct a deep neural network, the Frame Prediction Network (FPNet-OF), with multiple-branch inputs (optical flow and original frame) to predict the future video frame by adaptively fusing the future object motion with the future frame generator. The key idea is to jointly optimize direct RGB frame synthesis and dense optical-flow estimation to generate a superior video prediction network. Using various real-world datasets, we experimentally verify that our proposed framework can produce higher-quality video frames than other state-of-the-art frameworks.
Funding: supported in part by the National Natural Science Foundation of China (Grants No. 61501510 and No. 61631020), the Natural Science Foundation of Jiangsu Province (Grant No. BK20150717), the China Postdoctoral Science Foundation (Grants No. 2016M590398 and No. 2018T110426), the Jiangsu Planned Projects for Postdoctoral Research Funds (Grant No. 1501009A), and the Natural Science Foundation for Distinguished Young Scholars of Jiangsu Province (Grant No. BK20160034).
Abstract: High frequency (HF) communication is widely used due to merits such as easy deployment and wide communication coverage. Spectrum prediction is a promising technique to facilitate working frequency selection and enhance the function of automatic link establishment. Most existing spectrum prediction algorithms focus on predicting spectrum values in a slot-by-slot manner and therefore lack timeliness. In this paper, deep learning based spectrum prediction is developed by simultaneously predicting multi-slot-ahead states of multiple spectrum points within a period of time. Specifically, we first employ supervised learning and construct samples based on long-term and short-term HF spectrum data. Then, advanced residual units are introduced to build multiple residual network modules that respectively capture characteristics in these data at diverse time scales. Further, a convolutional neural network fuses the outputs of the residual network modules for temporal-spectral prediction, and is combined with them to construct the deep temporal-spectral residual network. Experiments have demonstrated that the proposed approach has a significant advantage over the benchmark schemes.
Funding: supported by the Science and Technology Innovation 2030 Key Project of "New Generation Artificial Intelligence" of China under Grant 2018AAA0102303, the Natural Science Foundation for Distinguished Young Scholars of Jiangsu Province (No. BK20190030), and the National Natural Science Foundation of China (Nos. 61631020, 61871398, 61931011, and U20B2038).
Abstract: This paper investigates the problem of data scarcity in spectrum prediction. A cognitive radio device may frequently switch its target frequency as the electromagnetic environment changes. A previously trained prediction model often cannot maintain good performance when facing only a small amount of historical data for the new target frequency. Moreover, cognitive radio devices usually implement dynamic spectrum access in real time, which means that the time to recollect data for the new frequency band and retrain the model is very limited. To address these issues, we develop a cross-band data augmentation framework for spectrum prediction by leveraging recent advances in generative adversarial networks (GANs) and deep transfer learning. Firstly, through similarity measurement, we pre-train a GAN model using the historical data of the frequency band most similar to the target band. Then, with the augmented data obtained by feeding the small amount of target data into the pre-trained GAN, a temporal-spectral residual network is further trained via deep transfer learning on the generated data with high similarity from the GAN. Finally, experimental results demonstrate the effectiveness of the proposed framework.
Abstract: Considering multi-step prediction of chaotic time series, a multi-step direct prediction model based on partial least squares (PLS) is proposed in this article, where PLS, a method for predicting a set of dependent variables from a large set of predictors, is used to model the dynamic evolution between points in the reconstructed state space and the corresponding future points. The model can eliminate the error accumulation of the common single-step local model algorithm and avoid the high multi-collinearity problem that arises in the reconstructed state space as the embedding dimension increases. Simulation predictions are performed on the Mackey-Glass chaotic time series with the model. Satisfying prediction accuracy is obtained and the model's efficiency is verified. In the experiments, the number of components extracted in PLS is set with a cross-validation procedure.
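A minimal sketch of direct multi-step prediction with PLS is shown below, using a textbook NIPALS PLS1 fit on a noisy sinusoid rather than the Mackey-Glass series; the 3-lag embedding, 3-step horizon, and 2 components are illustrative choices, not the paper's settings.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS1: returns coefficients B with y ~ X @ B
    (data assumed centered)."""
    Xk, yk = X.copy(), y.copy().astype(float)
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)
        t = Xk @ w
        tt = t @ t
        p = Xk.T @ t / tt
        qk = (yk @ t) / tt
        Xk = Xk - np.outer(t, p)       # deflate X
        yk = yk - qk * t               # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# Delay-embedded series: predict x[t+3] directly from (x[t-2], x[t-1], x[t])
rng = np.random.default_rng(1)
x = np.sin(0.3 * np.arange(300)) + 0.01 * rng.normal(size=300)
emb = np.array([x[i:i + 3] for i in range(294)])
target = x[np.arange(294) + 5]          # 3 steps beyond the window end
Xc, yc = emb - emb.mean(0), target - target.mean()
B = pls1_fit(Xc, yc, n_components=2)
pred = Xc @ B + target.mean()
print("RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```

Because the horizon is predicted directly rather than by chaining one-step forecasts, no error accumulation occurs, and restricting to a few latent components is what tames the collinearity of the delay embedding.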
Funding: supported by the National Natural Science Foundation of China (NSFC) under Grants No. 61462042 and No. 61966018.
Abstract: Traffic flow prediction is an important part of intelligent transportation systems. Accurate multi-step traffic flow prediction plays an important role in improving the operational efficiency of a traffic network. Since traffic flow data exhibit complex spatio-temporal correlation and non-linearity, existing prediction methods are mainly built on a combination of a Graph Convolutional Network (GCN) and a recurrent neural network, a strategy that performs well in traffic prediction tasks. However, multi-step prediction error accumulates with the predicted step size. Some scholars use multiple sampling sequences to achieve more accurate prediction results, but this requires demanding hardware and multiplies the training time. Considering the spatio-temporal correlation of traffic flow and the influence of external factors, we propose an Attention Based Spatio-Temporal Graph Convolutional Network considering External Factors (ABSTGCN-EF) for multi-step traffic flow prediction. The model represents traffic flow as diffusion on a digraph and extracts its spatial characteristics through the GCN. We add meaningful time-slot attention to the encoder-decoder to form an Attention Encoder Network (AEN) that handles temporal correlation, with the attention vector used as a competitive choice to draw the correlation between predicted states and historical states. We also consider the impact of three external factors (daytime, weekdays, and traffic accident markers) on the traffic flow prediction task. Experiments on two public data sets show that it makes sense to consider external factors: the prediction performance of our ABSTGCN-EF model is 7.2%–8.7% higher than that of the state-of-the-art baselines.
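The attention step that weighs historical states against the current decoder state can be sketched as plain scaled dot-product attention; the traffic-state vectors below are random stand-ins, and the AEN's learned projections are omitted for brevity.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool(query, history):
    """Scaled dot-product attention: weight historical state vectors by
    similarity to the decoder query and return the blended context."""
    d = query.shape[-1]
    scores = history @ query / np.sqrt(d)
    weights = softmax(scores)
    return weights @ history, weights

rng = np.random.default_rng(0)
history = rng.normal(size=(6, 8))       # 6 historical traffic-state vectors
query = history[2] + 0.1 * rng.normal(size=8)   # query resembling state 2
context, w = attention_pool(query, history)
print("attention weights:", np.round(w, 3))
```

The historical state most similar to the predicted state receives the largest weight, which is the "competitive choice" behavior described above.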
Funding: supported by the National Natural Science Foundation of China (Grants No. 61471395, No. 61301161, and No. 61501510), and partly by the Natural Science Foundation of Jiangsu Province (Grants No. BK20161125 and No. BK20150717).
Abstract: Shortwave, the earliest invented and utilized communication approach, now known as high frequency (HF) communication, is experiencing a deterioration of the HF electromagnetic environment. Finding quality frequencies in an efficient manner has become one of the key challenges in HF communication. Spectrum prediction infers the future spectrum status from historical spectrum data by exploring its inherent correlations and regularities. Investigation of HF electromagnetic environment data reveals the correlations and predictability of the HF frequency band in both the time and frequency domains. To exploit this, we develop Spectrum Prediction-based Frequency Band Pre-selection (SP-FBP) for HF communications. The pre-selection of HF frequency bands mainly incorporates prediction of HF spectrum occupancy and prediction of HF usable frequency, which provide the frequency band ranking of spectrum occupancy and alternative frequencies for spectrum sensing, respectively. Performance evaluation on real-world HF spectrum data shows that SP-FBP significantly improves the efficiency of finding quality frequencies in HF communications.
Funding: supported by the Natural Science Foundation of Shaanxi Province under Grant 2019JQ206, in part by the Science and Technology Department of Shaanxi Province under Grant 2020CGXNG-009, and in part by the Education Department of Shaanxi Province under Grant 17JK0346.
Abstract: Accurate landslide displacement prediction is an important part of a landslide warning system. Aiming at the dynamic characteristics of landslide evolution and the shortcomings of traditional static prediction models, this paper proposes a dynamic prediction model of landslide displacement based on singular spectrum analysis (SSA) and a stacked long short-term memory (SLSTM) network. SSA is used to decompose the accumulated landslide displacement time series into trend-term and periodic-term displacement subsequences. A cubic polynomial function is used to predict the trend-term subsequence, and the SLSTM network is used to predict the periodic-term subsequence. A Bayesian optimization algorithm is used to set the SLSTM network's input sequence length to 12 and its number of hidden layer nodes to 18. The SLSTM network is updated by adding predicted values to the training set to achieve dynamic displacement prediction. Finally, the accumulated landslide displacement is obtained by superimposing the predicted values of the displacement subsequences. The proposed model was verified on the Xintan landslide in Hubei Province, China. The results show that when predicting the periodic-term displacement, the SLSTM network achieves higher prediction accuracy than the support vector machine (SVM) and autoregressive integrated moving average (ARIMA) models: the mean relative error (MRE) is reduced by 4.099% and 3.548%, and the root mean square error (RMSE) by 5.830 mm and 3.854 mm, respectively. It is concluded that the SLSTM network model can better simulate the dynamic characteristics of landslides.
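The SSA decomposition step — separating a trend term from the accumulated displacement — can be sketched as follows: embed the series into a trajectory matrix, take the SVD, keep the leading component, and map back by anti-diagonal averaging. The window length and synthetic trend-plus-periodic series are illustrative.

```python
import numpy as np

def ssa_trend(series, window, n_components=1):
    """Basic singular spectrum analysis: embed, SVD, keep leading
    components, and reconstruct by anti-diagonal averaging."""
    x = np.asarray(series, dtype=float)
    K = len(x) - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(K)])
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    approx = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Average each anti-diagonal to map the matrix back to a series
    recon = np.zeros(len(x))
    counts = np.zeros(len(x))
    for i in range(window):
        for j in range(K):
            recon[i + j] += approx[i, j]
            counts[i + j] += 1
    return recon / counts

t = np.arange(100, dtype=float)
displacement = 0.5 * t + 3 * np.sin(2 * np.pi * t / 12)   # trend + periodic term
trend = ssa_trend(displacement, window=24, n_components=1)
```

Subtracting the recovered trend from the original series leaves the periodic term, which is the part handed to the SLSTM network in the model above.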
Funding: supported by the National High Technology Research and Development Program of China ('863' Program, No. 2009AA01Z242) and the National Natural Science Foundation of China (60972080).
Abstract: In current research on spectrum leasing, the Common model and the Property-rights model are the two main approaches to dynamic spectrum sharing. However, the Common model does not consider the obligations of the Primary System (PS) and is unfair to the Secondary System (SS), while cooperation based on the Property-rights model has feasibility problems. This paper proposes a novel system model in which a Cost-Prediction scheme for Spectrum Leasing (CPSL scheme) is designed to forecast the cost that the PS would pay for leasing spectrum. A Cost Function is introduced as a criterion to evaluate the potential cost of spectrum leasing for the PS. The simulation results show that, compared with the Common-model-based scheme, the CPSL scheme substantially improves the QoS of delay-sensitive traffic in the SS at the cost of a small degradation in PS performance.
Funding: supported in part by ZTE Corporation under Grant No. 2021420118000065.
Abstract: A content-aware multi-step prediction control (CAMPC) algorithm is proposed to determine the bitrate of 360-degree videos, aiming to enhance the quality of experience (QoE) of users and reduce the cost of video content providers (VCPs). The CAMPC algorithm first employs a neural network to generate the content richness and combines it with the current field of view (FOV) to accurately predict the probability distribution of tiles being viewed. Then, for the tiles in the predicted viewport, which directly affect QoE, the CAMPC algorithm utilizes multi-step prediction of future system states and accordingly selects the bitrates of multiple subsequent steps, instead of an instantaneous state. Meanwhile, it controls the buffer occupancy to eliminate the impact of prediction errors. We implement CAMPC on players by building a 360-degree video streaming platform and evaluate it against other advanced adaptive bitrate (ABR) rules on a real network. Experimental results show that CAMPC can save 83.5% of bandwidth resources compared with a scheme that fully transmits the tiles outside the viewport with the Dynamic Adaptive Streaming over HTTP (DASH) protocol. Besides, the proposed method improves the system utility by 62.7% and 27.6% compared with the DASH official rule and viewport-based rules, respectively.
Funding: sponsored by the Youth Foundation of Beijing University of Posts and Telecommunications (Grant No. 2011RC0110), the Director Foundation of the Key Lab of Universal Wireless Communication, Ministry of Education (Grant No. ZRJJ-2010-3), and the Ministry of Industry and Information Technology of China (Grant No. 2011ZX03001-007-03).
Abstract: In cognitive radio networks, Secondary Users (SUs) have opportunities to access a spectrum channel when the primary user is not using it, which enhances resource utilization. To avoid interference with primary users, it is essential for SUs to sense idle spectrum channels, but it is also very hard to detect all channels in a short time due to hardware restrictions. This paper proposes a novel spectrum prediction scheme based on Support Vector Machines (SVM) to save the time and energy consumed by spectrum sensing by predicting the channels' states before detection. Besides, spectrum utilization is further improved through a cooperative mechanism in which SUs share channel-state history and prediction results with neighbor nodes. The simulation results show that the algorithm has high prediction accuracy even with small training samples and can obviously reduce the detection energy, which also improves spectrum utilization.
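The SVM-based channel-state prediction can be sketched on a synthetic busy/idle history. This sketch assumes scikit-learn is available; the duty cycle, window length, and 5% sensing-noise rate are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic busy/idle history: the channel is occupied on a periodic duty
# cycle, with 5% of observations flipped to mimic sensing noise.
rng = np.random.default_rng(0)
states = np.array([(t % 10) < 4 for t in range(600)], dtype=int)
flip = (rng.random(600) < 0.05).astype(int)
states = states ^ flip

W = 8                                   # history window fed to the classifier
X = np.array([states[i:i + W] for i in range(len(states) - W)])
y = states[W:]
clf = SVC(kernel="rbf").fit(X[:500], y[:500])
acc = clf.score(X[500:], y[500:])
print("hold-out accuracy:", acc)
```

A channel predicted busy can simply be skipped during sensing, which is where the time and energy savings come from.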
Funding: supported by the National Natural Science Foundation of China (No. 60832009), the Natural Science Foundation of Beijing (No. 4102044), and the National Natural Science Foundation for Young Scholars of China (No. 61001115).
Abstract: Spectrum sensing is one of the key issues in cognitive radio networks. Most previous work concentrates on sensing the spectrum in a single spectrum band. In this paper, we propose a spectrum sensing sequence prediction scheme for cognitive radio networks with multiple spectrum bands, to decrease the spectrum sensing time and increase the throughput of secondary users. The scheme is based on recent advances in computational learning theory, which have shown that prediction is synonymous with data compression. A Ziv-Lempel data compression algorithm is used to design our spectrum sensing sequence prediction scheme, with the spectrum band usage history used for the prediction. Simulation results show that the proposed scheme can reduce the average sensing time and improve system throughput significantly.
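The compression-as-prediction idea can be sketched with a minimal LZ78-style predictor: parse the busy/idle history into a phrase tree and predict the next symbol by following the current context. The phrase-tree bookkeeping below is a simplification of a full Ziv-Lempel predictor.

```python
class LZ78Predictor:
    """LZ78-style predictor: parse the history into a phrase tree and
    predict the next symbol from counts stored along the current path."""
    def __init__(self):
        self.root = {}
        self.node = self.root
        self.counts = {}                # node id -> {symbol: count}

    def update(self, symbol):
        children = self.node
        self.counts.setdefault(id(children), {}).setdefault(symbol, 0)
        self.counts[id(children)][symbol] += 1
        if symbol in children:
            self.node = children[symbol]     # extend the current phrase
        else:
            children[symbol] = {}            # new phrase ends here
            self.node = self.root

    def predict(self):
        stats = self.counts.get(id(self.node), {})
        if not stats:
            return None                      # context never seen before
        return max(stats, key=stats.get)

# Online prediction accuracy on a periodic busy/idle band-usage pattern
p = LZ78Predictor()
hits = total = 0
for s in [0, 0, 1, 1] * 100:
    guess = p.predict()
    if guess is not None:
        total += 1
        hits += (guess == s)
    p.update(s)
print("online accuracy:", hits / total)
```

As the parsed phrases lengthen, more of the predictions are made from long, unambiguous contexts, so accuracy on regular usage patterns keeps improving — the property the sensing-sequence scheme relies on.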
Funding: supported by the National Natural Science Foundation of China (No. 60174021, No. 60374037) and the Science and Technology Creativeness Foundation of Nankai University.
Abstract: A compound neural network was constructed for identification and multi-step prediction. Under a PID-type long-range predictive cost function, the control signal was calculated using a gradient algorithm. The nonlinear controller's structure is similar to that of the conventional PID controller, but its parameters are tuned online using a local recurrent neural network. The controller performs better than the conventional PID controller, and a simulation study shows its effectiveness and good performance.
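For reference, the controller structure being mimicked is the textbook discrete PID loop below; the fixed gains and first-order plant are illustrative, whereas the paper tunes the gains online with a recurrent neural network.

```python
class PID:
    """Textbook discrete PID controller with fixed gains."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a simple first-order plant y' = -y + u toward setpoint 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(2000):
    u = pid.step(1.0, y)
    y += 0.01 * (-y + u)      # forward-Euler step of the plant
print("final output:", round(y, 3))
```

Replacing the three fixed gains with outputs of an online-trained network is what turns this fixed controller into the adaptive scheme described in the abstract.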
Abstract: Predictions of monthly anomaly series of SST averaged over the Nino 1-4 regions are made in the context of auto-adaptive filtering, using a model combining singular spectrum analysis (SSA) and auto-regression (AR). The results show that the scheme is efficient in forward forecasting of the strong ENSO event of 1997-1998 and highly reliable in retrospective forecasting of three corresponding historical strong ENSO events. The scheme shows stable skill and high accuracy in experiments on both independent samples and real cases. With modifications, the SSA-AR scheme is expected to become an efficient model for routine ENSO predictions.
Abstract: This paper focuses on potential issues related to the random selection of a sensing channel that occurs after the prediction phase in a Cognitive Radio Network (CRN). A novel approach (Approach-1) for improved selection is proposed, which relies on the probabilities with which channels are predicted idle. Further, closed-form expressions are derived for the throughput of Cognitive Users (CUs) under the conventional and proposed approaches. In addition, a fundamental approach for computing the prediction probabilities is proposed. Moreover, a new challenging issue named "sense and stuck" was observed in the conventional approach. The proposed approach is validated by comparing its results with those of the conventional approach. However, obtaining the prediction probabilities requires pre-channel-state information, which may be unavailable in particular scenarios; therefore, a modified selection method is introduced to avoid the sense-and-stuck problem. An algorithm to evaluate the throughput using the random, improved, and modified selection methods is presented along with its space and time complexities. Furthermore, for additional improvement in CU throughput, a new frame structure is introduced in which the spectrum prediction and sensing periods are exploited for simultaneous data transmission via the underlay spectrum access technique (Approach-2). The simulated results of Approach-2 are compared with our pre-obtained results of Approach-1, confirming a significant improvement in throughput.
Funding: supported by the National Natural Science Foundation of China (No. 20475068) and the Guangdong Provincial Natural Science Foundation (No. 031577).
Abstract: Based on the concept of ant colony optimization and the idea of population in genetic algorithms, a novel global optimization algorithm, called hybrid ant colony optimization (HACO), is proposed in this paper to tackle continuous-space optimization problems. It was compared with other well-known stochastic methods on benchmark function optimization, and was also used to efficiently select an appropriate dilation by optimizing the wavelet power spectrum of the hydrophobic sequence of a protein, which is the key step in using the continuous wavelet transform (CWT) to predict α-helices and connecting peptides.
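A Gaussian-sampling continuous-space ant colony in the spirit of HACO can be sketched as follows; the archive size, decay rate, and sphere benchmark are illustrative choices, not the paper's exact operators.

```python
import random

def sphere(x):
    """Benchmark objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def continuous_aco(obj, dim, bounds, n_ants=20, archive=10, iters=200,
                   sigma_decay=0.95, seed=0):
    """Continuous ACO sketch: ants sample Gaussian perturbations around an
    archive of good solutions; the shrinking sigma plays the role of
    pheromone evaporation."""
    rng = random.Random(seed)
    lo, hi = bounds
    sols = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(archive)]
    sols.sort(key=obj)
    sigma = (hi - lo) / 2
    for _ in range(iters):
        ants = []
        for _ in range(n_ants):
            guide = sols[rng.randrange(len(sols) // 2)]  # bias to better half
            ants.append([g + rng.gauss(0, sigma) for g in guide])
        sols = sorted(sols + ants, key=obj)[:archive]    # keep the archive elite
        sigma *= sigma_decay
    return sols[0], obj(sols[0])

best, val = continuous_aco(sphere, dim=2, bounds=(-5.0, 5.0))
print("best value:", val)
```

The population-style archive is what HACO borrows from genetic algorithms; swapping `sphere` for a wavelet-power-spectrum objective would reproduce the dilation-selection use case described above.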