Existing research on cyber attack-defense analysis has typically adopted stochastic game theory to model the problem, but such modeling assumes complete rationality and ignores the information opacity of practical attack and defense scenarios, so the resulting models and methods lack accuracy. To address this problem, we investigate network defense policies under bounded-rationality constraints and propose a network defense policy selection algorithm based on deep reinforcement learning. Using graph-theoretical methods, we transform the decision-making problem into a path optimization problem and use a service-node-based compression method to map the network state. On this basis, we improve the A3C algorithm and design the Defense-A3C policy selection algorithm with online learning capability. Experimental results show that the proposed model and method stably converge to a better network state after training, faster and more stably than the original A3C algorithm. Comparisons with existing typical approaches verify the advancement of Defense-A3C.
Although modulation classification based on deep neural networks can achieve high modulation classification (MC) accuracy, catastrophic forgetting occurs when the neural network model continues to learn new tasks. In this paper, we simulate a dynamic wireless communication environment, focus on breaking the paradigm of isolated automatic MC, and propose an algorithm for continual automatic MC. First, a memory storing representative modulation signals from old tasks is built; it is used to constrain the gradient update direction of new tasks during the continual learning stage, ensuring that the loss on old tasks also trends downward. Second, to better simulate the dynamic wireless communication environment, we employ the mini-batch gradient algorithm, which is better suited to continual learning. Finally, the signals in the memory can be replayed to further reinforce the characteristics of the old-task signals in the model. Simulation results verify the effectiveness of the method.
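To make the memory-replay idea above concrete, here is a minimal PyTorch sketch; the toy classifier, signal shapes, and buffer contents are all hypothetical. A small buffer of representative old-task signals is mixed into every new-task mini-batch, so each update also pushes the old-task loss downward; the paper's additional gradient-direction constraint is not shown.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier for I/Q modulation signals (2 x 128 samples).
model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 64), nn.ReLU(), nn.Linear(64, 11))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Memory buffer of representative old-task signals and labels (assumed filled earlier).
mem_x, mem_y = torch.randn(32, 2, 128), torch.randint(0, 11, (32,))

def train_step(new_x, new_y, replay=8):
    """One mini-batch step: replay a few old-task samples alongside new-task data."""
    idx = torch.randperm(mem_x.size(0))[:replay]
    x = torch.cat([new_x, mem_x[idx]])   # mixed mini-batch
    y = torch.cat([new_y, mem_y[idx]])
    opt.zero_grad()
    loss_fn(model(x), y).backward()      # old-task loss also trends downward
    opt.step()

train_step(torch.randn(16, 2, 128), torch.randint(0, 11, (16,)))
```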
Due to the limited computational capability and the diversity of Internet of Things devices working in different environments, we consider few-shot learning-based automatic modulation classification (AMC) to improve its reliability. A data enhancement module (DEM) is designed with a convolutional layer to supplement frequency-domain information and to provide a nonlinear mapping that benefits AMC. A multimodal network is designed with multiple residual blocks, where each residual block has multiple convolutional kernels of different sizes for diverse feature extraction. Moreover, a deeply supervised loss function is designed to supervise all parts of the network, including the hidden layers and the DEM. Since different models may output different results, a cooperative classifier is designed to avoid the randomness of a single model and improve reliability. Simulation results show that this few-shot learning-based AMC method significantly improves AMC accuracy compared with existing methods.
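As an illustration of the multi-kernel residual blocks described above, the following PyTorch sketch is one plausible reading of the design, not the paper's exact architecture; the channel count and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class MultiKernelResBlock(nn.Module):
    """Hypothetical residual block with parallel convolutions of different
    kernel sizes for diverse feature extraction."""
    def __init__(self, ch=32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(ch, ch, k, padding=k // 2) for k in (3, 5, 7)
        )
        self.fuse = nn.Conv1d(3 * ch, ch, 1)   # 1x1 conv merges the branches
        self.act = nn.ReLU()

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.fuse(feats))  # residual (skip) connection

x = torch.randn(4, 32, 128)                    # batch of I/Q-derived features
print(MultiKernelResBlock()(x).shape)          # torch.Size([4, 32, 128])
```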
In this paper, we summarize recent progress in deep-learning-based acoustic models and the motivation and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end and emphasize feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research.
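As one concrete example from the surveyed techniques, the CTC criterion is available off-the-shelf in PyTorch; the shapes and sizes below are purely illustrative.

```python
import torch
import torch.nn as nn

# CTC aligns a length-T sequence of per-frame label distributions with a
# shorter target transcript, summing over all valid alignments; blank id = 0.
T, N, C, S = 50, 2, 28, 10        # frames, batch, classes (incl. blank), target len
log_probs = torch.randn(T, N, C).log_softmax(dim=2)
targets = torch.randint(1, C, (N, S))            # labels exclude the blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```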
The performance of deep learning (DL) networks has been increased by elaborating network structures. However, DL networks have many parameters, which strongly influence network performance. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. The method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, reducing both the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method outperforms the alternatives on the robot object recognition and grasping tasks.
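A minimal sketch of the GA-based hyperparameter search described above, in plain Python; the fitness function is a stand-in for actually training and validating a DBNN, and the bounds and GA settings are assumptions.

```python
import random

# Hypothetical GA over DBNN-style hyperparameters: (hidden units, epochs, learning rate).
BOUNDS = [(16, 512), (5, 100), (1e-4, 1e-1)]

def fitness(ind):
    """Stand-in for 'train DBNN, return validation accuracy'; replace with real training."""
    h, e, lr = ind
    return -abs(h - 128) / 512 - abs(e - 40) / 100 - abs(lr - 0.01)

def random_ind():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

pop = [random_ind() for _ in range(20)]
for _ in range(30):                               # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                              # selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(10)]

best = max(pop, key=fitness)
print(f"hidden={best[0]:.0f} epochs={best[1]:.0f} lr={best[2]:.4f}")
```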
Channel estimation is a key issue in millimeter-wave (mmWave) massive multi-input multi-output (MIMO) communication systems and becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL) based fast channel estimation method for mmWave massive MIMO systems. The proposed method directly and effectively estimates channel state information (CSI) from received data without first performing pilot-signal estimation, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN) based channel estimation network for the case of dimensional mismatch between input and output data, denoted the channel (H) neural network (HNN). It quickly estimates the channel by learning the inherent characteristics of the received data and the relationship between the received data and the channel, even though the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN achieves better channel estimation accuracy than existing schemes.
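The abstract does not give the HNN's exact layers, so the following PyTorch sketch is only one plausible shape of the idea: a CNN ingests the (smaller) received data and a final layer expands to the (larger) channel matrix. All dimensions and layers here are hypothetical.

```python
import torch
import torch.nn as nn

class ToyHNN(nn.Module):
    """Hypothetical sketch of an HNN-style estimator: received data (small)
    in, channel matrix (large) out; the paper's exact layers are not given."""
    def __init__(self, rx_len=64, nt=32, nr=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, 3, padding=1), nn.ReLU(),   # 2 = real/imag parts
            nn.Conv1d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # Expand to the (larger) channel dimension: 2 x nr x nt real numbers.
        self.head = nn.Linear(16 * rx_len, 2 * nr * nt)
        self.nr, self.nt = nr, nt

    def forward(self, y):
        z = self.features(y).flatten(1)
        return self.head(z).view(-1, 2, self.nr, self.nt)

y = torch.randn(8, 2, 64)            # batch of received samples (real/imag)
print(ToyHNN()(y).shape)             # torch.Size([8, 2, 16, 32])
```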
Deep learning offers a critical capability for environments that change constantly and require ongoing learning, which is especially relevant to network intrusion detection. In this paper, informed by deep neural network theory, we propose a newly developed Hierarchy Distributed-Agents Model for Network Risk Evaluation. We present the architecture of the distributed-agents model, the approach to analyzing network intrusion detection using deep learning, and the mechanism of sharing hyperparameters to improve learning efficiency, and we build the hierarchical evaluation framework for network risk evaluation. Furthermore, to examine the proposed model, a series of experiments was conducted on the NSL-KDD dataset. The proposed model differentiated between normal and abnormal network activities with an accuracy of 97.60% on NSL-KDD. As the experimental results indicate, the model developed in this paper is characterized by high-speed, high-accuracy processing and offers a preferable solution for network risk evaluation.
Paralytic shellfish poisoning (PSP) microalgae, as one type of harmful algal bloom, cause great damage to offshore fisheries, marine aquaculture, and the marine ecological environment. At present, there is no technique for real-time, accurate identification of toxic microalgae. By combining three-dimensional fluorescence with machine learning (ML) and deep learning (DL), we developed methods to classify PSP and non-PSP microalgae. The average classification accuracies of these two methods for microalgae are above 90%, and the accuracies for discriminating 12 microalgae species among PSP and non-PSP microalgae are above 94%. When the emission wavelength is 650-690 nm, the fluorescence characteristic bands (excitation wavelength) occur at 410-480 nm and 500-560 nm for PSP and non-PSP microalgae, respectively. The identification accuracies of the ML models (support vector machine (SVM) and k-nearest neighbor rule (k-NN)) and the DL model (convolutional neural network (CNN)) for PSP microalgae are 96.25%, 96.36%, and 95.88%, respectively, indicating that ML and DL are suitable for the classification of toxic microalgae.
This study proposed a measurement platform for continuous blood pressure estimation based on dual photoplethysmography (PPG) sensors and deep learning (DL) that can be used for continuous, rapid measurement of blood pressure and analysis of cardiovascular-related indicators. The proposed platform measured signal changes in PPG and converted them into physiological indicators such as pulse transit time (PTT), pulse wave velocity (PWV), perfusion index (PI), and heart rate (HR); these indicators were then fed into the DL model to calculate blood pressure. The hardware comprised two PPG components (a Raspberry Pi 3 Model B and an analog-to-digital converter [MCP3008]) connected over a serial peripheral interface. The DL algorithm converted the stable dual PPG signals, acquired through a strictly standardized experimental process, into the physiological indicators used as input parameters, and finally produced the systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP). To increase the robustness of the DL model, this study fed data from 100 Asian participants into the training database, including those with and without cardiovascular disease in roughly equal proportion. The experimental results revealed that the mean absolute error and standard deviation were 0.17±0.46 mmHg for SBP, 0.27±0.52 mmHg for DBP, and 0.16±0.40 mmHg for MAP.
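A hedged NumPy/SciPy sketch of the indicator-extraction step described above: peaks are detected in two synchronized PPG channels and converted into PTT and HR. The sampling rate, synthetic waveforms, and sensor delay are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 100                                            # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / FS)
ppg_a = np.sin(2 * np.pi * 1.2 * t)                 # proximal site, ~72 bpm
ppg_b = np.sin(2 * np.pi * 1.2 * (t - 0.05))        # distal site, 50 ms later

pk_a, _ = find_peaks(ppg_a, distance=FS // 2)       # one peak per beat
pk_b, _ = find_peaks(ppg_b, distance=FS // 2)

n = min(len(pk_a), len(pk_b))
ptt = np.mean((pk_b[:n] - pk_a[:n]) / FS)           # pulse transit time (s)
hr = 60 * FS / np.mean(np.diff(pk_a))               # heart rate (bpm)
print(f"PTT ~ {ptt * 1000:.0f} ms, HR ~ {hr:.0f} bpm")
# PWV would follow as (sensor separation distance) / PTT; such features
# then form the input vector to the blood-pressure regression network.
```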
For high-speed mobile MIMO-OFDM systems, a low-complexity deep learning (DL) based time-varying channel estimation scheme is proposed. To reduce the number of estimated parameters, the basis expansion model (BEM) is employed to model the time-varying channel, which converts channel estimation into estimation of the basis coefficients. Specifically, the initial basis coefficients are first used to train the neural network offline, after which high-precision channel estimates can be obtained from a small number of inputs. Moreover, the linear minimum mean square error (LMMSE) channel estimate is used in the loss function during training, which makes the proposed method more practical. Simulation results show that the proposed method achieves better performance and lower computational complexity than available schemes and is robust to fast time-varying channels in high-speed mobile scenarios.
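A worked NumPy sketch of the BEM idea: with a complex-exponential basis, a length-N time-varying channel tap is approximated as h ≈ Bc, so estimation reduces from N samples to Q basis coefficients. The basis choice and the synthetic channel below are assumptions, not the paper's exact setup.

```python
import numpy as np

N, Q = 128, 5                           # symbol length and number of basis functions
n = np.arange(N)

# Complex-exponential BEM: basis b_q(n) = exp(j*2*pi*(q - Q//2)*n/N).
B = np.exp(1j * 2 * np.pi * np.outer(n, np.arange(Q) - Q // 2) / N)  # (N, Q)

# Synthetic time-varying channel tap (stand-in for the true channel).
h = np.exp(1j * 2 * np.pi * 0.8 * n / N) + 0.3 * np.exp(-1j * 2 * np.pi * 1.1 * n / N)

# Channel estimation reduces to estimating the Q coefficients c, with h ≈ B @ c.
c, *_ = np.linalg.lstsq(B, h, rcond=None)
mse = np.mean(np.abs(B @ c - h) ** 2)
print(f"{Q} coefficients instead of {N} samples, reconstruction MSE = {mse:.2e}")
```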
Several recent successes in deep learning (DL), such as state-of-the-art performance on several image classification benchmarks, have been achieved through improved configuration. Hyperparameter (HP) tuning is a key factor affecting the performance of machine learning (ML) algorithms. Various state-of-the-art DL models use different HPs in different ways for classification tasks on different datasets. This manuscript provides a brief overview of learning parameters and configuration techniques, showing the benefits of using a large-scale hand-drawn sketch dataset for classification problems. We analyzed the impact of different learning parameters and top-layer configurations with batch normalization (BN) and dropout on the performance of the pre-trained Visual Geometry Group 19 (VGG-19) network. The analyzed learning parameters include different learning rates and momentum values of two optimizers, stochastic gradient descent (SGD) and Adam. Our analysis demonstrates that using the SGD optimizer with small learning rates and high momentum values, along with both BN and dropout in the top layers, has a good impact on sketch image classification accuracy.
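A PyTorch/torchvision sketch of this kind of configuration; the layer sizes, dropout rate, and the 250-category head are assumptions, and the exact top-layer design may differ from the paper's.

```python
import torch.nn as nn
from torch.optim import SGD
from torchvision.models import vgg19

# Pre-trained backbone; the weights argument follows current torchvision usage.
model = vgg19(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False                      # freeze convolutional layers

# Hypothetical top-layer configuration with BN and dropout, as analyzed above.
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 512), nn.BatchNorm1d(512), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(512, 250),                         # e.g., 250 sketch categories
)

# Small learning rate with high momentum, the setting the study found effective.
opt = SGD(model.classifier.parameters(), lr=1e-3, momentum=0.9)
```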
This study is designed to develop an Artificial Intelligence (AI) based analysis tool that can accurately detect COVID-19 lung infections from portable chest X-rays (CXRs). Frontline physicians and radiologists face grand challenges during the COVID-19 pandemic due to suboptimal image quality and the large volume of CXRs. In this study, AI-based analysis tools were developed that can precisely classify COVID-19 lung infection. Publicly available datasets of COVID-19 (N=1525), non-COVID-19 normal (N=1525), viral pneumonia (N=1342), and bacterial pneumonia (N=2521) images were taken from the Italian Society of Medical and Interventional Radiology (SIRM), Radiopaedia, The Cancer Imaging Archive (TCIA), and Kaggle repositories. A multi-approach pipeline utilizing deep learning ResNet101, with and without hyperparameter optimization, was employed. Additionally, the features extracted from the average pooling layer of ResNet101 were used as input to machine learning (ML) algorithms, training the learning algorithms a second time. ResNet101 with optimized parameters yielded improved performance over default parameters. Feeding the extracted ResNet101 features to the k-nearest neighbor (KNN) and support vector machine (SVM) classifiers yielded the highest 3-class classification performance, 99.86% and 99.46%, respectively. The results indicate that the proposed approach can be better utilized to improve the accuracy and diagnostic efficiency of CXRs. The proposed deep learning model has the potential to further improve the efficiency of healthcare systems for proper diagnosis and prognosis of COVID-19 lung infection.
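A sketch of the two-stage pipeline described above, assuming torchvision's ResNet101 and scikit-learn; the images and labels here are random stand-ins for the CXR datasets, and the classifier settings are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Backbone with the final fully connected layer removed, so the forward pass
# ends at the average pooling layer (2048-d features), as in the study.
backbone = resnet101(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()
backbone.eval()

# Stand-in batch of CXR images; real use needs the datasets and preprocessing.
images = torch.randn(8, 3, 224, 224)
labels = [0, 1, 2, 0, 1, 2, 0, 1]               # 3 classes, e.g. COVID/normal/pneumonia
with torch.no_grad():
    feats = backbone(images).numpy()            # shape (8, 2048)

# The extracted features train classical ML classifiers a second time.
knn = KNeighborsClassifier(n_neighbors=3).fit(feats, labels)
svm = SVC(kernel="rbf").fit(feats, labels)
print(knn.predict(feats[:2]), svm.predict(feats[:2]))
```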
Deep learning (DL) algorithms have been widely used in various security applications to enhance the performance of decision-based models. Malicious data added by an attacker can cause several security and privacy problems in the operation of DL models. The two most common active attacks are poisoning and evasion attacks, which can cause various problems, including wrong predictions and misclassification by decision-based models. Therefore, to design an efficient DL model, it is crucial to mitigate these attacks. In this regard, this study proposes a secure neural network (NN) model that provides data security during the model training and testing phases. The main idea is to use cryptographic functions, such as a hash function (SHA-512) and a homomorphic encryption (HE) scheme, to provide authenticity, integrity, and confidentiality of data. The performance of the proposed model is evaluated experimentally in terms of accuracy, precision, attack detection rate (ADR), and computational cost. The results show that the proposed model achieves an accuracy of 98%, a precision of 0.97, and an ADR of 98%, even for a large number of attacks. Hence, the proposed model can be used to detect attacks and mitigate attacker motives. The results also show that the computational cost of the proposed model does not increase with model complexity.
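The integrity half of this design can be sketched with Python's standard library alone; the HE half needs a dedicated library and is omitted here. The batch serialization shown is a stand-in, not the paper's format.

```python
import hashlib
import hmac

def sha512_digest(data: bytes) -> str:
    """Integrity tag for a training batch; any tampering changes the digest."""
    return hashlib.sha512(data).hexdigest()

batch = b"serialized training batch bytes"
tag = sha512_digest(batch)

# Verification before the batch enters training; constant-time comparison
# avoids leaking information through timing.
received = b"serialized training batch bytes"
ok = hmac.compare_digest(tag, sha512_digest(received))
print("batch accepted" if ok else "possible poisoning: batch rejected")
```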
The main function of the power communication business is to monitor, control, and manage the power communication network to ensure its normal and stable operation. Communication services related to dispatching data networks and to the transmission of fault information or feeder automation have stringent delay requirements; if processing time is prolonged, a cascade reaction across power businesses may be triggered. To solve these problems, this paper establishes an edge IoT-agent business deployment model for the power communication network that unifies the management of data collection, resource allocation, and task scheduling within the system; realizes the virtualization of IoT-agent computing resources through Docker container technology; designs target models for network latency and energy consumption; and introduces the A3C deep reinforcement learning algorithm, improved according to the scenario characteristics with corresponding optimization strategies, to minimize network delay and energy consumption. Meanwhile, to ensure that delay-sensitive power business is handled in time, this paper designs a business dispatch model and a task migration model and addresses the problem of server failure. Finally, a corresponding simulation program is designed to verify the feasibility and validity of the method and to compare it with other existing mechanisms.
In the fifth-generation new radio (5G-NR) high-speed railway (HSR) downlink, a deep learning (DL) based Doppler frequency offset (DFO) estimation scheme using a back-propagation neural network (BPNN) is proposed. The proposed method comprises pre-training, training, and estimation phases, where pre-training and training belong to the offline stage and estimation is the online stage. To reduce the performance loss caused by random initialization, pre-training is employed to acquire a desirable initialization, which serves as the initial parameters of the training phase. Moreover, the initial DFO estimate is used as input along with the received pilots to further improve estimation accuracy. Unlike in the training phase, the initial DFO estimate in the pre-training phase is obtained from the data and pilot symbols. Simulation results show that the mean squared error (MSE) performance of the proposed method is better than that of available algorithms, with acceptable computational complexity.
Background: We test a deep learning (DL) supported remote diagnosis approach to detect diabetic retinopathy (DR) and other referable retinal pathologies using ultra-wide-field (UWF) Optomap. Methods: Prospective, non-randomized study involving diabetic patients seen at endocrinology clinics. Non-expert imagers were trained to obtain non-dilated images using UWF Primary. Images were graded by two retina specialists and classified as DR or incidental retinal findings. Cohen's kappa was used to test agreement between the remote diagnosis and the gold-standard exam. A novel DL model was trained to identify the presence or absence of referable pathology, and sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were used to assess its performance. Results: A total of 265 patients were enrolled, of whom 241 were imaged (433 eyes). The mean age was 50±17 years; 45% of patients were female; 34% had a diagnosis of diabetes mellitus type 1 and 66% of type 2. The average hemoglobin A1c was 8.8±2.3%, and 81% were on insulin. Of the 433 images, 404 (93%) were gradable; 64 patients (27%) were referred to a retina specialist, and 46 (19%) were referred to a comprehensive ophthalmologist for a referable retinal pathology on remote diagnosis. Cohen's kappa was 0.58, indicating moderate agreement. Our DL algorithm achieved an accuracy of 82.8% (95% CI: 80.3-85.2%), a sensitivity of 81.0% (95% CI: 78.5-83.6%), a specificity of 73.5% (95% CI: 70.6-76.3%), and an AUROC of 81.0% (95% CI: 78.5-83.6%). Conclusions: UWF Primary can be used in the non-ophthalmology setting to screen for referable retinal pathology and can be successfully supported by an automated algorithm for image classification.
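The reported statistics can be reproduced mechanically with scikit-learn; the toy labels and scores below are invented purely to show the calls, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix, roc_auc_score

# Hypothetical toy labels: 1 = referable pathology, 0 = not referable.
gold   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]    # gold-standard exam
remote = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]    # remote diagnosis
scores = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3, 0.95, 0.25]  # DL model outputs

kappa = cohen_kappa_score(gold, remote)     # chance-corrected agreement
tn, fp, fn, tp = confusion_matrix(gold, [int(s > 0.5) for s in scores]).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auroc = roc_auc_score(gold, scores)
print(f"kappa={kappa:.2f} sens={sensitivity:.2f} spec={specificity:.2f} AUROC={auroc:.2f}")
```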
Existing specific emitter identification (SEI) methods based on hand-crafted features suffer from loss of feature information and multiple processing stages, which reduce identification accuracy and complicate the identification procedure. In this paper, we propose a deep SEI approach via multidimensional feature extraction of radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by means of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and I-Q data are combined into balanced RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme combining physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from received raw data end-to-end, it accelerates SEI implementation and simplifies the identification procedure. Moreover, because RFFsNet-SEI extracts both the temporal and the spectral features of the received signal, identification accuracy is improved. Finally, we compare RFFsNet-SEI with its counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
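The HT step of the feature extraction above can be sketched with SciPy; the test signal is synthetic, and the VMD decomposition that precedes HT in the paper requires a dedicated implementation and is omitted here.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1e6
t = np.arange(0, 1e-3, 1 / fs)
sig = np.cos(2 * np.pi * 100e3 * t) * (1 + 0.05 * np.sin(2 * np.pi * 5e3 * t))

analytic = hilbert(sig)                             # analytic signal via HT
envelope = np.abs(analytic)                         # instantaneous amplitude
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # instantaneous frequency

# Simple statistics of these trajectories can serve as physical fingerprint
# features alongside the raw I-Q data.
rff = [envelope.mean(), envelope.std(), inst_freq.mean(), inst_freq.std()]
print(np.round(rff, 3))
```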
Undeniably, deep learning (DL) has rapidly eroded traditional machine learning in remote sensing (RS) and geoscience domains, with applications such as scene understanding, material identification, extreme weather detection, and oil spill identification, among many others. Traditional machine learning algorithms receive less and less attention in the era of big data. Recently, a substantial amount of work has aimed at developing image classification approaches building on the DL model's success in computer vision; the number of relevant articles has nearly doubled every year since 2015. Advances in remote sensing technology, together with the rapidly expanding volume of publicly available satellite imagery on a worldwide scale, have opened up possibilities for a wide range of modern applications. However, challenges related to the availability of annotated data, the complex nature of the data, and model parameterization strongly impact performance. In this article, a comprehensive review of the literature is presented, encompassing a broad spectrum of pioneering work in remote sensing image classification, including network architectures (vintage convolutional neural networks, CNN; fully convolutional networks, FCN; encoder-decoders; recurrent networks; attention models; and generative adversarial models). The characteristics, capabilities, and limitations of current DL models are examined, and potential research directions are discussed.
The recent global outbreak of COVID-19 badly damaged world health systems, human health, economies, and daily life. No country was ready to face this emerging health challenge. Health professionals could not predict its rise and next move, nor the future curve and its impact on lives should a similar pandemic situation occur. This created huge chaos globally for a long period, and the world is still struggling to come up with a suitable solution. Better use of advanced technologies, such as artificial intelligence and deep learning, may aid healthcare practitioners in making reliable COVID-19 diagnoses. The proposed research provides a prediction model that uses artificial intelligence and deep learning to improve the diagnostic process by reducing unreliable diagnostic interpretation of chest CT scans, allowing clinicians to accurately discriminate between patients sick with COVID-19 or pneumonia, and empowering health professionals to distinguish the chest CT scans of healthy people. The efforts made by the Saudi government for the management and control of COVID-19 are remarkable; however, there is a need to improve the diagnostic process for better perception. We used a dataset from Saudi regions to build a prediction model that can help distinguish COVID-19 cases from regular cases in CT scans. The proposed methodology was compared with current models and found to be more accurate (93 percent) than the existing methods.
Flash floods are among the most dangerous natural disasters, especially in hilly terrain, causing loss of life, property, and infrastructure and sudden disruption of traffic. These floods are mostly associated with landslides and erosion of roads within a short time. Most of Vietnam is hilly and mountainous; thus, the flash flood problem is severe and requires systematic studies to correctly identify flood-susceptible areas for proper land-use planning and traffic management. In this study, three machine learning (ML) methods, namely a deep learning neural network (DL), correlation-based feature-weighted naive Bayes (CFWNB), and Adaboost (AB-CFWNB), were used to develop flash flood susceptibility maps for a hilly road section (115 km in length) of National Highway (NH)-6 in Hoa Binh province, Vietnam. In the proposed models, 88 past flash flood events were used together with 14 topographical and geo-environmental factors affecting flash floods. The performance of the models was evaluated using standard statistical measures, including the receiver operating characteristic (ROC) curve, area under the curve (AUC), and root mean square error (RMSE). The results revealed that all the models performed well (AUC>0.80) in predicting flash flood susceptibility zones, but the performance of the DL model was the best (AUC: 0.972, RMSE: 0.352). Therefore, the DL model can be applied to develop an accurate flash flood susceptibility map of hilly terrain, which can be used for proper planning and design of highways and other infrastructure facilities, besides land-use management of the area.