Journal Articles
194 articles found
Network Defense Decision-Making Based on Deep Reinforcement Learning and Dynamic Game Theory
1
Authors: Huang Wanwei, Yuan Bo, Wang Sunan, Ding Yi, Li Yuhua — 《China Communications》 SCIE CSCD 2024, No. 9, pp. 262-275 (14 pages)
Existing research on cyber attack-defense analysis has typically adopted stochastic game theory to model the problem, but such modeling assumes complete rationality, ignoring the information opacity of practical attack and defense scenarios, so the models and methods lack accuracy. To address this problem, we investigate network defense policy methods under finite-rationality constraints and propose a network defense policy selection algorithm based on deep reinforcement learning. Using graph-theoretical methods, we transform the decision-making problem into a path optimization problem, and use a service-node-based compression method to map the network state. On this basis, we improve the A3C algorithm and design the Defense-A3C policy selection algorithm with online learning capability. The experimental results show that the proposed model and method stably converge to a better network state after training, faster and more stably than the original A3C algorithm. Compared with existing typical approaches, the advancement of Defense-A3C is verified.
Keywords: A3C; cyber attack-defense analysis; deep reinforcement learning; stochastic game theory
Deep learning algorithm featuring continuous learning for modulation classifications in wireless networks
2
Authors: WU Nan, SUN Yu, WANG Xudong — 《太赫兹科学与电子信息学报》 2024, No. 2, pp. 209-218 (10 pages)
Although modulation classification based on deep neural networks can achieve high modulation classification (MC) accuracy, catastrophic forgetting occurs when the neural network model continues to learn new tasks. In this paper, we simulate the dynamic wireless communication environment and focus on breaking the paradigm of isolated automatic MC by proposing an algorithm for continuous automatic MC. Firstly, a memory for storing representative old-task modulation signals is built and used to constrain the gradient update direction of new tasks during the continuous learning stage, ensuring that the loss on old tasks also trends downward. Secondly, to better simulate the dynamic wireless communication environment, we employ the mini-batch gradient algorithm, which is better suited to continuous learning. Finally, the signals in the memory can be replayed to further reinforce the characteristics of old-task signals in the model. Simulation results verify the effectiveness of the method.
Keywords: deep learning (DL); modulation classification; continuous learning; catastrophic forgetting; cognitive radio communications
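The memory-rehearsal idea in the abstract above (store representative old-task signals, then replay them alongside new-task mini-batches) can be sketched as follows. The class name, reservoir-sampling policy, and sample format are illustrative assumptions, not the authors' code.

```python
import random

class RehearsalMemory:
    """Fixed-size buffer of representative old-task samples (reservoir sampling)."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            # Reservoir sampling: every sample seen so far is kept with equal probability.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def mixed_batch(self, new_batch, replay_count):
        """Mini-batch for continual training: new-task samples plus replayed old ones."""
        replay = self.rng.sample(self.buffer, min(replay_count, len(self.buffer)))
        return list(new_batch) + replay

memory = RehearsalMemory(capacity=4)
for x in range(10):               # ten old-task samples arrive over time
    memory.add(("old", x))
batch = memory.mixed_batch([("new", 0), ("new", 1)], replay_count=2)
```

In an actual continual-MC loop, the gradient on the replayed half of each batch is what keeps the old-task loss trending downward.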
A Few-Shot Learning-Based Automatic Modulation Classification Method for Internet of Things
3
Authors: Aer Sileng, Qi Chenhao — 《China Communications》 SCIE CSCD 2024, No. 8, pp. 18-29 (12 pages)
Due to the limited computational capability and the diversity of Internet of Things (IoT) devices working in different environments, we consider few-shot learning-based automatic modulation classification (AMC) to improve its reliability. A data enhancement module (DEM) is designed with a convolutional layer to supplement frequency-domain information and provide a nonlinear mapping that is beneficial for AMC. A multimodal network is designed with multiple residual blocks, where each residual block has multiple convolutional kernels of different sizes for diverse feature extraction. Moreover, a deeply supervised loss function is designed to supervise all parts of the network, including the hidden layers and the DEM. Since different models may output different results, a cooperative classifier is designed to avoid the randomness of a single model and improve reliability. Simulation results show that this few-shot learning-based AMC method can significantly improve AMC accuracy compared to existing methods.
Keywords: automatic modulation classification (AMC); deep learning (DL); few-shot learning; Internet of Things (IoT)
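The cooperative classifier mentioned above fuses several models to reduce single-model randomness; a plain majority vote is one minimal way to do this. The function below is an illustrative sketch, not the paper's exact fusion rule.

```python
from collections import Counter

def cooperative_classify(predictions):
    """Fuse per-model predictions by majority vote; ties go to the first-seen label."""
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Three hypothetical AMC models vote on one received signal.
votes = ["QPSK", "QPSK", "16QAM"]
label = cooperative_classify(votes)
```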
Recent Progresses in Deep Learning Based Acoustic Models (Cited by 9)
4
Authors: Dong Yu, Jinyu Li — 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2017, No. 3, pp. 396-409 (14 pages)
In this paper, we summarize recent progress made in deep learning based acoustic models together with the motivation and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end, with emphasis on feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research.
Keywords: attention model; convolutional neural network (CNN); connectionist temporal classification (CTC); deep learning (DL); long short-term memory (LSTM); permutation invariant training; speech adaptation; speech processing; speech recognition; speech separation
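As a concrete reminder of how the CTC criterion's output is read at inference time, the standard greedy CTC decoding rule (collapse repeated labels, then drop blanks) can be written as:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse consecutive repeats, then remove blank symbols (standard CTC rule)."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev:        # collapse consecutive repeats
            if lab != blank:   # drop blank symbols
                out.append(lab)
        prev = lab
    return out

# Frame-wise argmax labels; 0 is the blank that separates true repeats.
frames = [0, 3, 3, 0, 1, 0, 20, 20, 0]
decoded = ctc_greedy_decode(frames)
```

Note that a blank between two identical labels (e.g. `[2, 2, 0, 2]`) is what lets CTC emit a genuinely repeated character.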
Optimizing Deep Learning Parameters Using Genetic Algorithm for Object Recognition and Robot Grasping (Cited by 2)
5
Authors: Delowar Hossain, Genci Capi, Mitsuru Jindai — 《Journal of Electronic Science and Technology》 CAS CSCD 2018, No. 1, pp. 11-15 (5 pages)
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which strongly influence network performance. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. The method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, to reduce the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
Keywords: deep learning (DL); deep belief neural network (DBNN); genetic algorithm (GA); object recognition; robot grasping
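A minimal genetic-algorithm loop over real-valued hyperparameters, of the kind the abstract describes, might look like the sketch below. The toy fitness function stands in for actual DBNN training error, and the population size, operators, and bounds are illustrative assumptions.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Tiny real-valued GA: truncation selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)            # lower fitness = better
        parents = scored[: pop_size // 2]
        children = [list(scored[0])]                 # elitism: carry the best forward
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(len(child))            # mutate one randomly chosen gene
            l, h = bounds[i]
            child[i] = min(h, max(l, child[i] + rng.gauss(0, 0.05 * (h - l))))
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy stand-in for validation error as a function of (learning_rate, hidden_units_scaled);
# the true optimum is at (0.01, 0.5).
toy_error = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.5) ** 2
best = evolve(toy_error, bounds=[(0.0, 0.1), (0.0, 1.0)])
```

In the paper's setting, evaluating `fitness` would mean training the DBNN with the candidate hyperparameters, which is why GA search is costly but parallelizes well.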
Deep learning for fast channel estimation in millimeter-wave MIMO systems (Cited by 3)
6
Authors: LYU Siting, LI Xiaohui, FAN Tao, LIU Jiawen, SHI Mingli — 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2022, No. 6, pp. 1088-1095 (8 pages)
Channel estimation is a key issue in millimeter-wave (mmWave) massive multi-input multi-output (MIMO) communication systems, and it becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL) based fast channel estimation method for mmWave massive MIMO systems. The proposed method can directly and effectively estimate channel state information (CSI) from received data without performing pilot-signal estimation in advance, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN) based channel estimation network for the case of dimensional mismatch between input and output data, denoted as the channel (H) neural network (HNN). It can quickly estimate the channel by learning the inherent characteristics of the received data and the relationship between the received data and the channel, even though the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN achieves better channel estimation accuracy than existing schemes.
Keywords: millimeter-wave (mmWave); channel estimation; deep learning (DL); dimensional mismatch; channel state information (CSI)
A Hierarchy Distributed-Agents Model for Network Risk Evaluation Based on Deep Learning (Cited by 1)
7
Authors: Jin Yang, Tao Li, Gang Liang, Wenbo He, Yue Zhao — 《Computer Modeling in Engineering & Sciences》 SCIE EI 2019, No. 7, pp. 1-23 (23 pages)
Deep learning provides a critical capability for adapting to constantly changing environments through ongoing learning, which is especially relevant to network intrusion detection. In this paper, inspired by the theory of deep learning neural networks, a newly developed Hierarchy Distributed-Agents Model for Network Risk Evaluation is proposed. We present the architecture of the distributed-agents model, the approach of analyzing network intrusion detection using deep learning, and the mechanism of sharing hyper-parameters to improve learning efficiency, and we build the hierarchical evaluative framework of the proposed model for network risk evaluation. Furthermore, to examine the proposed model, a series of experiments was conducted on the NSL-KDD dataset. The proposed model was able to differentiate between normal and abnormal network activities with an accuracy of 97.60% on NSL-KDD. As the experimental results indicate, the model is characterized by high-speed and high-accuracy processing and offers a preferable solution for network risk evaluation.
Keywords: network security; deep learning (DL); intrusion detection system (IDS); distributed agents
Identification of paralytic shellfish toxin-producing microalgae using machine learning and deep learning methods (Cited by 1)
8
Authors: Wei XU, Jie NIU, Wenyu GAN, Siyu GOU, Shuai ZHANG, Han QIU, Tianjiu JIANG — 《Journal of Oceanology and Limnology》 SCIE CAS CSCD 2022, No. 6, pp. 2202-2217 (16 pages)
Paralytic shellfish poisoning (PSP) microalgae, as one of the harmful algal blooms, cause great damage to offshore fisheries, marine aquaculture, and the marine ecological environment. At present, there is no technique for real-time, accurate identification of toxic microalgae. By combining three-dimensional fluorescence with machine learning (ML) and deep learning (DL), we developed methods to classify PSP and non-PSP microalgae. The average classification accuracies of these two methods for microalgae are above 90%, and the accuracies for discriminating 12 microalgae species among PSP and non-PSP microalgae are above 94%. When the emission wavelength is 650-690 nm, the fluorescence characteristic bands (excitation wavelength) occur differently at 410-480 nm and 500-560 nm for PSP and non-PSP microalgae, respectively. The identification accuracies of the ML models (support vector machine (SVM) and k-nearest neighbor rule (k-NN)) and the DL model (convolutional neural network (CNN)) for PSP microalgae are 96.25%, 96.36%, and 95.88%, respectively, indicating that ML and DL are suitable for the classification of toxic microalgae.
Keywords: paralytic shellfish poisoning (PSP); machine learning (ML); deep learning (DL); toxic algal classification
A Deep Learning-Based Continuous Blood Pressure Measurement by Dual Photoplethysmography Signals (Cited by 1)
9
Authors: Chih-Ta Yen, Sheng-Nan Chang, Liao Jia-Xian, Yi-Kai Huang — 《Computers, Materials & Continua》 SCIE EI 2022, No. 2, pp. 2937-2952 (16 pages)
This study proposed a measurement platform for continuous blood pressure estimation based on dual photoplethysmography (PPG) sensors and a deep learning (DL) model, which can be used for continuous, rapid measurement of blood pressure and analysis of cardiovascular-related indicators. The proposed platform measured signal changes in PPG and converted them into physiological indicators such as pulse transit time (PTT), pulse wave velocity (PWV), perfusion index (PI), and heart rate (HR); these indicators were then fed into the DL model to calculate blood pressure. The hardware of the experiment comprised two PPG components (a Raspberry Pi 3 Model B and an analog-to-digital converter [MCP3008]) connected via a serial peripheral interface. The DL algorithm converted the stable dual PPG signals, acquired under a strictly standardized experimental process, into the physiological indicators as input parameters and finally obtained the systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP). To increase the robustness of the DL model, this study entered data of 100 Asian participants into the training database, including those with and without cardiovascular disease, each at a proportion of approximately 50%. The experimental results revealed a mean absolute error and standard deviation for SBP of 0.17±0.46 mmHg, for DBP of 0.27±0.52 mmHg, and for MAP of 0.16±0.40 mmHg.
Keywords: deep learning (DL); blood pressure; continuous non-invasive blood pressure measurement; photoplethysmography (PPG)
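The physiological indicators named above are simple to derive once the pulse arrival times at the two PPG sites are known; the sketch below shows the standard definitions. The sensor separation and timestamps are made-up example values, not measurements from the study.

```python
def pulse_transit_time(t_proximal, t_distal):
    """PTT: delay (s) between the same pulse arriving at two PPG sites."""
    return t_distal - t_proximal

def pulse_wave_velocity(distance_m, ptt_s):
    """PWV: arterial path length divided by pulse transit time (m/s)."""
    return distance_m / ptt_s

def heart_rate(peak_times):
    """HR in beats/min from successive pulse-peak timestamps (s)."""
    intervals = [b - a for a, b in zip(peak_times, peak_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

ptt = pulse_transit_time(t_proximal=0.120, t_distal=0.170)   # 50 ms delay
pwv = pulse_wave_velocity(distance_m=0.5, ptt_s=ptt)         # over a 0.5 m path
hr = heart_rate([0.0, 0.8, 1.6, 2.4])                        # 0.8 s beat interval
```

In the platform described, values like these (plus PI) form the input feature vector to the DL regressor for SBP/DBP/MAP.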
Deep learning-based time-varying channel estimation with basis expansion model for MIMO-OFDM system (Cited by 1)
10
Authors: HU Bo, YANG Lihua, REN Lulu, NIE Qian — 《High Technology Letters》 EI CAS 2022, No. 3, pp. 288-294 (7 pages)
For high-speed mobile MIMO-OFDM systems, a low-complexity deep learning (DL) based time-varying channel estimation scheme is proposed. To reduce the number of estimated parameters, the basis expansion model (BEM) is employed to model the time-varying channel, which converts channel estimation into estimation of the basis coefficients. Specifically, the initial basis coefficients are first used to train the neural network offline, after which high-precision channel estimates can be obtained from a small number of inputs. Moreover, the linear minimum mean square error (LMMSE) channel estimate is used in the loss function during the training phase, which makes the proposed method more practical. Simulation results show that the proposed method achieves better performance and lower computational complexity than the available schemes, and it is robust to fast time-varying channels in high-speed mobile scenarios.
Keywords: MIMO-OFDM; high-speed mobile; time-varying channel; deep learning (DL); basis expansion model (BEM)
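The point of a basis expansion model is that a time-varying gain h[n] is approximated as a short weighted sum of known basis functions, so only the few weights need estimating. A least-squares sketch with a two-term linear basis (an illustrative choice; the paper does not specify this basis) looks like:

```python
def bem_fit_linear(h, n_norm):
    """Least-squares fit of h[n] ≈ c0 + c1 * n_norm[n] (two-term basis expansion).
    Solves the 2x2 normal equations in closed form."""
    N = len(h)
    s_x = sum(n_norm)
    s_xx = sum(x * x for x in n_norm)
    s_h = sum(h)
    s_xh = sum(x * y for x, y in zip(n_norm, h))
    det = N * s_xx - s_x * s_x
    c1 = (N * s_xh - s_x * s_h) / det
    c0 = (s_h - c1 * s_x) / N
    return c0, c1

N = 64                                   # OFDM symbols in the block
n_norm = [i / N for i in range(N)]       # normalized time index
# Noiseless toy "channel gain" generated from known coefficients.
true_c0, true_c1 = 1.0, -0.5
h = [true_c0 + true_c1 * x for x in n_norm]
c0, c1 = bem_fit_linear(h, n_norm)
h_hat = [c0 + c1 * x for x in n_norm]    # reconstructed channel trajectory
```

The scheme in the abstract replaces this closed-form fit with a trained network, but the dimensionality reduction (64 samples down to 2 coefficients here) is the same idea.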
Tuning-up Learning Parameters for Deep Convolutional Neural Network: A Case Study for Hand-Drawn Sketch Images
11
Authors: Shaukat Hayat, Kun She, Muhammad Mateen, Parinya Suwansrikham, Muhammad Abdullah Ahmed Alghaili — 《Journal of Electronic Science and Technology》 CAS CSCD 2022, No. 3, pp. 305-318 (14 pages)
Several recent successes in deep learning (DL), such as state-of-the-art performance on several image classification benchmarks, have been achieved through improved configuration. Hyperparameter (HP) tuning is a key factor affecting the performance of machine learning (ML) algorithms. Various state-of-the-art DL models use different HPs in different ways for classification tasks on different datasets. This manuscript provides a brief overview of learning parameters and configuration techniques and shows the benefits of using a large-scale hand-drawn sketch dataset for classification problems. We analyzed the impact of different learning parameters and top-layer configurations, with batch normalization (BN) and dropout, on the performance of the pre-trained Visual Geometry Group 19 (VGG-19) model. The analyzed learning parameters include different learning rates and momentum values for two optimizers, stochastic gradient descent (SGD) and Adam. Our analysis demonstrates that using the SGD optimizer with small learning rates and high momentum values, along with both BN and dropout in the top layers, has a good impact on sketch image classification accuracy.
Keywords: deep learning (DL); hand-drawn sketches; learning parameters
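The SGD-with-momentum update whose settings the study tunes ("small learning rates with high momentum") has a compact standard form; a minimal sketch on a toy quadratic (the learning rate and momentum values below are illustrative, not the paper's tuned ones):

```python
def sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: v <- momentum*v - lr*grad; w <- w + v."""
    v = [momentum * vi - lr * gi for vi, gi in zip(v, grad)]
    w = [wi + vi for wi, vi in zip(w, v)]
    return w, v

# Minimize f(w) = w0^2 + w1^2 starting from (1, -1); the gradient is (2*w0, 2*w1).
w, v = [1.0, -1.0], [0.0, 0.0]
for _ in range(500):
    grad = [2 * w[0], 2 * w[1]]
    w, v = sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.9)
```

High momentum lets a small learning rate still make fast progress by accumulating velocity along consistent gradient directions, which matches the paper's observed sweet spot.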
Deep Learning ResNet101 Deep Features of Portable Chest X-Ray Accurately Classify COVID-19 Lung Infection
12
Authors: Sobia Nawaz, Sidra Rasheed, Wania Sami, Lal Hussain, Amjad Aldweesh, Elsayed Tag eldin, Umair Ahmad Salaria, Mohammad Shahbaz Khan — 《Computers, Materials & Continua》 SCIE EI 2023, No. 6, pp. 5213-5228 (16 pages)
This study is designed to develop an artificial intelligence (AI) based analysis tool that can accurately detect COVID-19 lung infection from portable chest X-rays (CXRs). Frontline physicians and radiologists face grand challenges during the COVID-19 pandemic due to suboptimal image quality and the large volume of CXRs. In this study, AI-based analysis tools were developed that can precisely classify COVID-19 lung infection. Publicly available datasets of COVID-19 (N=1525), non-COVID-19 normal (N=1525), viral pneumonia (N=1342), and bacterial pneumonia (N=2521) cases were taken from the Italian Society of Medical and Interventional Radiology (SIRM), Radiopaedia, The Cancer Imaging Archive (TCIA), and Kaggle repositories. A multi-approach utilizing deep learning ResNet101, with and without hyperparameter optimization, was employed. Additionally, the features extracted from the average pooling layer of ResNet101 were used as input to machine learning (ML) algorithms, which were then trained on these features. ResNet101 with optimized parameters yielded improved performance over default parameters. The features extracted from ResNet101 and fed to the k-nearest neighbor (KNN) and support vector machine (SVM) classifiers yielded the highest three-class classification performance of 99.86% and 99.46%, respectively. The results indicate that the proposed approach can improve the accuracy and diagnostic efficiency of CXRs. The proposed deep learning model has the potential to further improve the efficiency of healthcare systems in the diagnosis and prognosis of COVID-19 lung infection.
Keywords: COVID-19; deep learning (DL); lung infection; convolutional neural network (CNN)
Cryptographic Based Secure Model on Dataset for Deep Learning Algorithms
13
Authors: Muhammad Tayyab, Mohsen Marjani, N.Z. Jhanjhi, Ibrahim Abaker Targio Hashim, Abdulwahab Ali Almazroi, Abdulaleem Ali Almazroi — 《Computers, Materials & Continua》 SCIE EI 2021, No. 10, pp. 1183-1200 (18 pages)
Deep learning (DL) algorithms have been widely used in various security applications to enhance the performance of decision-based models. Malicious data added by an attacker can cause several security and privacy problems in the operation of DL models. The two most common active attacks are poisoning and evasion attacks, which can cause various problems, including wrong predictions and misclassification by decision-based models. Therefore, to design an efficient DL model, it is crucial to mitigate these attacks. In this regard, this study proposes a secure neural network (NN) model that provides data security during the model training and testing phases. The main idea is to use cryptographic functions, such as a hash function (SHA-512) and a homomorphic encryption (HE) scheme, to provide authenticity, integrity, and confidentiality of data. The performance of the proposed model is evaluated through experiments based on accuracy, precision, attack detection rate (ADR), and computational cost. The results show that the proposed model achieves an accuracy of 98%, a precision of 0.97, and an ADR of 98%, even for a large number of attacks. Hence, the proposed model can be used to detect attacks and mitigate attacker motives. The results also show that the computational cost of the proposed model does not increase with model complexity.
Keywords: deep learning (DL); poisoning attacks; evasion attacks; neural network; hash functions; SHA-512; homomorphic encryption scheme
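The integrity half of such a scheme, hashing the training data with SHA-512 so that tampering is detectable before training, can be sketched with the standard library. The dataset format and the single-digest check below are simplified assumptions about the paper's pipeline, not its actual protocol.

```python
import hashlib

def fingerprint(samples):
    """SHA-512 digest over a dataset's serialized samples (order-sensitive)."""
    h = hashlib.sha512()
    for s in samples:
        h.update(repr(s).encode("utf-8"))
    return h.hexdigest()

# Record the digest of the clean training set.
clean = [(0.1, 0), (0.9, 1), (0.4, 0)]
baseline = fingerprint(clean)

# A poisoning attacker flips one label; the digest no longer matches.
poisoned = list(clean)
poisoned[1] = (0.9, 0)
tampered = fingerprint(poisoned) != baseline
```

Confidentiality (the HE part) is a separate mechanism; hashing alone only detects modification, it does not hide the data.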
Reliable Scheduling Method for Sensitive Power Business Based on Deep Reinforcement Learning
14
Authors: Shen Guo, Jiaying Lin, Shuaitao Bai, Jichuan Zhang, Peng Wang — 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 7, pp. 1053-1066 (14 pages)
The main function of the power communication business is to monitor, control, and manage the power communication network to ensure its normal and stable operation. Communication services related to dispatching data networks and to the transmission of fault information or feeder automation have strict delay requirements; if processing time is prolonged, a cascade reaction in the power business may be triggered. To solve these problems, this paper establishes an edge object-linked agent business deployment model for the power communication network to unify the management of data collection, resource allocation, and task scheduling within the system. It realizes the virtualization of object-linked agent computing resources through Docker container technology, designs target models for network latency and energy consumption, and introduces the A3C algorithm from deep reinforcement learning, improving it according to scene characteristics and setting corresponding optimization strategies to minimize network delay and energy consumption. At the same time, to ensure that sensitive power business is handled in time, this paper designs a business dispatch model and a task migration model, which address the problem of server failure. Finally, a corresponding simulation program is designed to verify the feasibility and validity of the method and to compare it with other existing mechanisms.
Keywords: power communication network; dispatching data networks; resource allocation; A3C algorithm; deep reinforcement learning
Deep learning based Doppler frequency offset estimation for 5G-NR downlink in HSR scenario
15
Authors: YANG Lihua, WANG Zenghao, ZHANG Jie, JIANG Ting — 《High Technology Letters》 EI CAS 2022, No. 2, pp. 115-121 (7 pages)
For the fifth-generation new radio (5G-NR) high-speed railway (HSR) downlink, a deep learning (DL) based Doppler frequency offset (DFO) estimation scheme is proposed using the back propagation neural network (BPNN). The proposed method mainly includes pre-training, training, and estimation phases, where pre-training and training belong to the offline stage and estimation is the online stage. To reduce the performance loss caused by random initialization, the pre-training method is employed to acquire a desirable initialization, which is used as the initial parameters of the training phase. Moreover, the initial DFO estimate is used as input along with the received pilots to further improve estimation accuracy. Unlike in the training phase, the initial DFO estimate in the pre-training phase is obtained from the data and pilot symbols. Simulation results show that the mean squared error (MSE) performance of the proposed method is better than those of the available algorithms, with acceptable computational complexity.
Keywords: fifth-generation new radio (5G-NR); high-speed railway (HSR); deep learning (DL); back propagation neural network (BPNN); Doppler frequency offset (DFO) estimation
Evaluation of a deep learning supported remote diagnosis model for identification of diabetic retinopathy using wide-field Optomap
16
Authors: Terry Lee, Mingzhe Hu, Qitong Gao, Joshua Amason, Durga Borkar, David D'Alessio, Michael Canos, Afreen Shariff, Miroslav Pajic, Majda Hadziahmetovic — 《Annals of Eye Science》 2022, No. 2, pp. 93-104 (12 pages)
Background: We test a deep learning (DL) supported remote diagnosis approach to detect diabetic retinopathy (DR) and other referable retinal pathologies using ultra-wide-field (UWF) Optomap. Methods: Prospective, non-randomized study involving diabetic patients seen at endocrinology clinics. Non-expert imagers were trained to obtain non-dilated images using UWF Primary. Images were graded by two retina specialists and classified as DR or incidental retinal findings. Cohen's kappa was used to test the agreement between the remote diagnosis and the gold-standard exam. A novel DL model was trained to identify the presence or absence of referable pathology, and sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were used to assess its performance. Results: A total of 265 patients were enrolled, of which 241 were imaged (433 eyes). The mean age was 50±17 years, 45% of patients were female, 34% had a diagnosis of diabetes mellitus type 1, and 66% of type 2. The average hemoglobin A1c was 8.8±2.3%, and 81% were on insulin. Of the 433 images, 404 (93%) were gradable; 64 patients (27%) were referred to a retina specialist, and 46 (19%) were referred to a comprehensive ophthalmologist for a referable retinal pathology on remote diagnosis. Cohen's kappa was 0.58, indicating moderate agreement. Our DL algorithm achieved an accuracy of 82.8% (95% CI: 80.3-85.2%), a sensitivity of 81.0% (95% CI: 78.5-83.6%), a specificity of 73.5% (95% CI: 70.6-76.3%), and an AUROC of 81.0% (95% CI: 78.5-83.6%). Conclusions: UWF Primary can be used in the non-ophthalmology setting to screen for referable retinal pathology and can be successfully supported by an automated algorithm for image classification.
Keywords: retina; screening; imaging; deep learning (DL); diabetic retinopathy (DR)
RFFsNet-SEI:a multidimensional balanced-RFFs deep neural network framework for specific emitter identification
17
Authors: FAN Rong, SI Chengke, HAN Yi, WAN Qun — 《Journal of Systems Engineering and Electronics》 SCIE CSCD 2024, No. 3, pp. 558-574, F0002 (18 pages)
Existing specific emitter identification (SEI) methods based on hand-crafted features suffer from loss of feature information and involve multiple processing stages, which reduce identification accuracy and complicate the identification procedure. In this paper, we propose a deep SEI approach via multidimensional feature extraction of radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by means of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and I-Q data are combined into balanced RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme combining physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from received raw data end-to-end, it accelerates SEI implementation and simplifies the identification procedure. Moreover, as both the temporal and spectral features of the received signal are extracted by RFFsNet-SEI, identification accuracy is improved. Finally, we compare RFFsNet-SEI with its counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
Keywords: specific emitter identification (SEI); deep learning (DL); radio frequency fingerprint (RFF); multidimensional feature extraction (MFE); variational mode decomposition (VMD)
Deep Learning Methods Used in Remote Sensing Images: A Review
18
作者 Ekram M.Rewhel Jianqiang Li +9 位作者 Amal A.Hamed Hatem M.Keshk Amira S.Mahmoud Sayed A.Sayed Ehab Samir Hind H.Zeyada Sayed A.Mohamed Marwa S.Moustafa Ayman H.Nasr Ashraf K.Helmy 《Journal of Environmental & Earth Sciences》 2023年第1期33-64,共32页
Undeniably,Deep Learning(DL)has rapidly eroded traditional machine learning in Remote Sensing(RS)and geoscience domains with applications such as scene understanding,material identification,extreme weather detection,o... Undeniably,Deep Learning(DL)has rapidly eroded traditional machine learning in Remote Sensing(RS)and geoscience domains with applications such as scene understanding,material identification,extreme weather detection,oil spill identification,among many others.Traditional machine learning algorithms are given less and less attention in the era of big data.Recently,a substantial amount of work aimed at developing image classification approaches based on the DL model’s success in computer vision.The number of relevant articles has nearly doubled every year since 2015.Advances in remote sensing technology,as well as the rapidly expanding volume of publicly available satellite imagery on a worldwide scale,have opened up the possibilities for a wide range of modern applications.However,there are some challenges related to the availability of annotated data,the complex nature of data,and model parameterization,which strongly impact performance.In this article,a comprehensive review of the literature encompassing a broad spectrum of pioneer work in remote sensing image classification is presented including network architectures(vintage Convolutional Neural Network,CNN;Fully Convolutional Networks,FCN;encoder-decoder,recurrent networks;attention models,and generative adversarial models).The characteristics,capabilities,and limitations of current DL models were examined,and potential research directions were discussed. 展开更多
Keywords: deep learning (DL); satellite imaging; image classification; segmentation and object detection
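Among the architectures this review surveys, the vintage CNN remains the baseline for remote sensing scene classification. As a minimal sketch (not from the article; the `conv2d` helper, layer sizes, and random weights are illustrative assumptions), one convolution layer followed by ReLU, global average pooling, and a linear classifier can be written directly in NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_forward(patch, kernels, weights):
    """One conv layer -> ReLU -> global average pooling -> linear class scores."""
    feature_maps = [np.maximum(conv2d(patch, k), 0.0) for k in kernels]  # conv + ReLU
    pooled = np.array([fm.mean() for fm in feature_maps])                # GAP
    return weights @ pooled                                              # class logits

rng = np.random.default_rng(0)
patch = rng.standard_normal((16, 16))     # toy single-band image patch
kernels = rng.standard_normal((4, 3, 3))  # 4 filters of size 3x3 (random, not trained)
weights = rng.standard_normal((3, 4))     # 3 hypothetical land-cover classes
logits = cnn_forward(patch, kernels, weights)
print(logits.shape)  # (3,)
```

In a trained network the filters and weights would be learned by backpropagation; here they are random and serve only to show the forward data flow.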
Prediction Model for Coronavirus Pandemic Using Deep Learning
19
Authors: Mamoona Humayun, Ahmed Alsayat. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 3, pp. 947-961.
The recent global outbreak of COVID-19 badly damaged the world's health systems, human health, the economy, and daily life. None of the countries was ready to face this emerging health challenge. Health professionals were not able to predict its rise and next move, nor the future curve and impact on lives should a similar pandemic situation happen again. This created huge chaos globally and for longer than expected, and the world is still struggling to come up with a suitable solution. Here, the better use of advanced technologies, such as artificial intelligence and deep learning, may aid healthcare practitioners in making reliable COVID-19 diagnoses. The proposed research provides a prediction model that uses Artificial Intelligence and Deep Learning to improve the diagnostic process by reducing unreliable diagnostic interpretation of chest CT scans, allowing clinicians to accurately discriminate between patients who are sick with COVID-19 or pneumonia, and empowering health professionals to distinguish chest CT scans of healthy people. The efforts made by the Saudi government for the management and control of COVID-19 are remarkable; however, there is a need to improve the diagnostic process for better perception. We used a data set from Saudi regions to build a prediction model that can help distinguish between COVID-19 cases and regular cases from CT scans. The proposed methodology was compared to current models and found to be more accurate (93 percent) than the existing methods.
Keywords: Artificial Intelligence (AI); deep learning (DL); COVID-19 pandemic
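The abstract reports 93 percent accuracy on a three-way discrimination task (COVID-19 vs. pneumonia vs. healthy CT scans). A minimal sketch of how overall accuracy and a confusion count would be computed from a model's predictions — the labels and toy predictions below are invented for illustration, not the paper's data:

```python
from collections import Counter

def confusion_counts(y_true, y_pred):
    """Counts of (true label, predicted label) pairs."""
    return Counter(zip(y_true, y_pred))

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Toy predictions standing in for a DL model's output on six CT scans.
y_true = ["covid19", "covid19", "pneumonia", "healthy", "healthy", "pneumonia"]
y_pred = ["covid19", "pneumonia", "pneumonia", "healthy", "healthy", "pneumonia"]

cm = confusion_counts(y_true, y_pred)
print(accuracy(y_true, y_pred))      # 5 of 6 correct
print(cm[("covid19", "pneumonia")])  # one COVID-19 scan misread as pneumonia
```

The off-diagonal counts of the confusion matrix are what matter clinically: a COVID-19 scan classified as healthy is a far costlier error than the reverse.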
Prediction of Flash Flood Susceptibility of Hilly Terrain Using Deep Neural Network: A Case Study of Vietnam (Cited: 2)
20
Authors: Huong Thi Thanh Ngo, Nguyen Duc Dam, Quynh-Anh Thi Bui, Nadhir Al-Ansari, Romulus Costache, Hang Ha, Quynh Duy Bui, Sy Hung Mai, Indra Prakash, Binh Thai Pham. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 6, pp. 2219-2241.
Flash floods are one of the most dangerous natural disasters, especially in hilly terrain, causing loss of life, property, and infrastructure and sudden disruption of traffic. These types of floods are mostly associated with landslides and erosion of roads within a short time. Most of Vietnam is hilly and mountainous; thus, the flash flood problem is severe and requires systematic studies to correctly identify flood-susceptible areas for proper land use planning and traffic management. In this study, three Machine Learning (ML) methods, namely Deep Learning Neural Network (DL), Correlation-based Feature Weighted Naive Bayes (CFWNB), and Adaboost (AB-CFWNB), were used for the development of flash flood susceptibility maps for a hilly road section (115 km length) of National Highway (NH)-6 in Hoa Binh province, Vietnam. In the proposed models, 88 past flash flood events were used together with 14 topographical and geo-environmental factors affecting flash floods. The performance of the models was evaluated using standard statistical measures including the Receiver Operating Characteristic (ROC) Curve, Area Under Curve (AUC), and Root Mean Square Error (RMSE). The results revealed that all the models performed well (AUC > 0.80) in predicting flash flood susceptibility zones, but the performance of the DL model was the best (AUC: 0.972, RMSE: 0.352). Therefore, the DL model can be applied to develop an accurate flash flood susceptibility map of hilly terrain, which can be used for proper planning and design of highways and other infrastructure facilities, besides land use management of the area.
Keywords: flash flood; deep learning neural network (DL); machine learning (ML); receiver operating characteristic curve (ROC); Vietnam
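The susceptibility models above are ranked by AUC and RMSE. As a sketch of those two metrics (the scores below are toy values, not the paper's data), AUC can be computed directly as the Mann-Whitney rank statistic — the probability that a randomly chosen flood location receives a higher susceptibility score than a randomly chosen non-flood location, with ties counting half:

```python
import math

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def rmse(y_true, y_pred):
    """Root mean square error between observed labels and predicted scores."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Toy susceptibility scores: positives are past flash-flood locations.
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.4, 0.3, 0.6, 0.2]
print(auc(pos, neg))  # 0.96875 (one tied pair at 0.6)
print(round(rmse([1, 1, 0, 0], [0.9, 0.8, 0.4, 0.3]), 3))  # 0.274
```

An AUC of 0.5 would mean the map ranks flood and non-flood sites no better than chance; the paper's DL model reaches 0.972 on this scale.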