Deep neural networks (DNNs) have achieved great success in many data processing applications. However, high computational complexity and storage cost make deep learning difficult to use on resource-constrained devices, and its high power consumption is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning techniques. In the spatial domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of network parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. With a series of integrable techniques discussed, such as sparse pruning, quantization, and entropy coding, we can assemble them into an integrated framework with lower computational complexity and storage. In addition to a summary of recent technical advances, we present two findings to motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms other conventional sparsity measures such as the ℓ_1 norm for network compression. The other is a spatial and temporal balance for tensorized neural networks. To accelerate the training of tensorized neural networks, it is crucial to leverage redundancy for both model compression and subspace training.
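The low-rank compression described above can be sketched with truncated SVD on a single fully connected layer; the layer sizes and rank below are illustrative, not taken from the paper:

```python
import numpy as np

# Minimal sketch of low-rank compression: factor a weight matrix W (m x n)
# into two thin factors U_r (m x r) and V_r (r x n), so the layer stores
# r*(m+n) numbers instead of m*n. Sizes and rank are made up for illustration.
def low_rank_compress(W, rank):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))
U_r, V_r = low_rank_compress(W, rank=16)

full_params = W.size                     # 256*128 = 32768
compressed_params = U_r.size + V_r.size  # 256*16 + 16*128 = 6144
print(compressed_params < full_params)   # True: roughly 5x fewer parameters
```

At inference time the layer computes `x @ V_r.T @ U_r.T`, so the factorization also reduces multiply-accumulate operations, not just storage.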
Although modulation classification based on deep neural networks can achieve high Modulation Classification (MC) accuracy, catastrophic forgetting occurs when the neural network model continues to learn new tasks. In this paper, we simulate a dynamic wireless communication environment and focus on breaking the learning paradigm of isolated automatic MC by proposing an algorithm for continuous automatic MC. Firstly, a memory for storing representative old-task modulation signals is built; it is employed to constrain the gradient update direction of new tasks during the continuous learning stage, ensuring that the loss on old tasks also keeps decreasing. Secondly, to better simulate the dynamic wireless communication environment, we employ the mini-batch gradient algorithm, which is more suitable for continuous learning. Finally, the signals in the memory can be replayed to further reinforce the characteristics of the old-task signals in the model. Simulation results verify the effectiveness of the method.
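The replay memory described above can be sketched as follows; the class and its replacement policy are our own illustrative stand-ins, not the paper's exact design:

```python
import random

# Illustrative sketch of a replay memory for continual learning: representative
# old-task samples are stored and mixed into each new-task mini-batch so the
# old-task loss keeps decreasing. Reservoir-style replacement is an assumption.
class ReplayMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []

    def store(self, sample):
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:  # replace a random slot once full, keeping the memory varied
            self.samples[random.randrange(self.capacity)] = sample

    def replay_batch(self, k):
        return random.sample(self.samples, min(k, len(self.samples)))

memory = ReplayMemory(capacity=100)
for old_signal in range(500):        # stand-in for old-task modulation signals
    memory.store(old_signal)

new_batch = list(range(1000, 1016))  # a new-task mini-batch of 16 samples
mixed = new_batch + memory.replay_batch(16)
print(len(mixed))                    # 32: half new-task, half replayed
```

In training, the replayed half of each mixed batch is what lets the optimizer keep the old-task loss trending downward while it fits the new task.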
This study is designed to develop an Artificial Intelligence (AI)-based analysis tool that can accurately detect COVID-19 lung infections in portable chest X-rays (CXRs). Frontline physicians and radiologists face grand challenges during the COVID-19 pandemic due to suboptimal image quality and the large volume of CXRs. In this study, AI-based analysis tools were developed that can precisely classify COVID-19 lung infection. Publicly available datasets of COVID-19 (N=1525), non-COVID-19 normal (N=1525), viral pneumonia (N=1342), and bacterial pneumonia (N=2521) cases were taken from the Italian Society of Medical and Interventional Radiology (SIRM), Radiopaedia, The Cancer Imaging Archive (TCIA), and Kaggle repositories. A multi-approach utilizing deep learning ResNet101 with and without hyperparameter optimization was employed. Additionally, the features extracted from the average pooling layer of ResNet101 were used as input to machine learning (ML) algorithms, which were trained in a second stage. ResNet101 with optimized parameters yielded improved performance over default parameters. The features extracted from ResNet101 and fed to the k-nearest neighbor (KNN) and support vector machine (SVM) classifiers yielded the highest 3-class classification performance of 99.86% and 99.46%, respectively. The results indicate that the proposed approach can be utilized to improve the accuracy and diagnostic efficiency of CXRs. The proposed deep learning model has the potential to further improve the efficiency of healthcare systems for proper diagnosis and prognosis of COVID-19 lung infection.
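The second stage above, feeding CNN pooling-layer features to a k-nearest neighbor classifier, can be sketched with synthetic feature vectors standing in for the ResNet101 extractor; the clusters and dimensions below are invented for illustration:

```python
import numpy as np

# Sketch of a "CNN features -> KNN" pipeline: deep features from an average
# pooling layer are classified by majority vote over the k nearest neighbors.
# The feature extractor is mocked with two synthetic 8-dimensional clusters.
def knn_predict(train_X, train_y, query, k=5):
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

rng = np.random.default_rng(1)
# two well-separated clusters standing in for two image classes
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

pred = knn_predict(X, y, query=np.full(8, 5.2))
print(pred)  # 1: the query sits in the second cluster
```

The same feature matrix could equally be fed to an SVM; the point is that a fixed deep feature extractor turns image classification into a standard vector classification problem.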
Soil sustains countless lives on Earth, and soil quality plays a significant role in agricultural practice everywhere. Hence, the evaluation of soil quality is very important for determining the amount of nutrients the soil requires for a proper yield. In the present decade, the application of deep learning models in many fields of research has had a great impact. The increasing availability of soil data and the demand for remotely available open-source models motivate the incorporation of deep learning methods to predict soil quality. With that concern, this paper proposes a novel model called the Improved Soil Quality Prediction model using Deep Learning (ISQP-DL). The work considers the chemical, physical, and biological factors of the soil in a particular area to estimate soil quality. Firstly, the pH rating of soil samples is collected from a soil testing laboratory, the acidic range is categorized through a soil test, and this data is taken as input to a Deep Neural Network Regression (DNNR) model. Secondly, soil nutrient data is given as a second input to the DNNR model. Using this dataset for training and testing, the DNNR method evaluates the fertility rate, from which soil quality is estimated. The results show that the proposed model is effective for Soil Quality Prediction (SQP), with good fitting, enhanced generality over the input features, and a high rate of classification accuracy: the proposed model achieves an accuracy of 96.7%, outperforming existing models.
Individuals with special needs learn more slowly than their peers and need repetition for learning to become permanent. However, in crowded classrooms it is difficult for a teacher to deal with each student individually. This problem can be overcome by using supportive education applications. However, the majority of such applications are not designed for special education and are therefore not as efficient as expected. Special education students differ from their peers in terms of their development, characteristics, and educational qualifications. The handwriting skills of individuals with special needs are lower than those of their peers, which makes the task of Handwriting Recognition (HWR) more difficult. To overcome this problem, we propose a new personalized handwriting verification system that validates digits from the handwriting of special education students. The system uses a Convolutional Neural Network (CNN) created and trained from scratch. The dataset is obtained by collecting the students' handwriting with the help of a tablet: a special education center was visited, and the handwritten figures of the students were collected under the supervision of special education teachers. The system is designed as person-dependent, as every student has their own writing style. Overall, the system achieves promising results, reaching a recognition accuracy of about 94%. It can verify special education students' handwritten digits with high accuracy and is ready to be integrated with a mobile application designed to teach digits to special education students.
Undeniably, Deep Learning (DL) has rapidly displaced traditional machine learning in the Remote Sensing (RS) and geoscience domains, with applications such as scene understanding, material identification, extreme weather detection, and oil spill identification, among many others. Traditional machine learning algorithms receive less and less attention in the era of big data. Recently, a substantial amount of work has aimed at developing image classification approaches based on the success of DL models in computer vision; the number of relevant articles has nearly doubled every year since 2015. Advances in remote sensing technology, as well as the rapidly expanding volume of publicly available satellite imagery on a worldwide scale, have opened up possibilities for a wide range of modern applications. However, there are challenges related to the availability of annotated data, the complex nature of the data, and model parameterization, which strongly impact performance. In this article, a comprehensive review of the literature encompassing a broad spectrum of pioneering work in remote sensing image classification is presented, including network architectures (vintage Convolutional Neural Networks, CNN; Fully Convolutional Networks, FCN; encoder-decoder and recurrent networks; attention models; and generative adversarial models). The characteristics, capabilities, and limitations of current DL models are examined, and potential research directions are discussed.
Existing specific emitter identification (SEI) methods based on hand-crafted features have the drawbacks of losing feature information and involving multiple processing stages, which reduce the identification accuracy of emitters and complicate the identification procedure. In this paper, we propose a deep SEI approach via multidimensional feature extraction of radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by virtue of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and the I-Q data are combined into balanced-RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme comprising physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from the received raw data end-to-end, it accelerates SEI implementation and simplifies the identification procedure. Moreover, as both the temporal and spectral features of the received signal are extracted by RFFsNet-SEI, identification accuracy is improved. Finally, we compare RFFsNet-SEI with its counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
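One ingredient of the physical RFF extraction above, the Hilbert transform, can be sketched directly via the FFT; VMD is omitted, and the test tone below is an invented stand-in for a received signal:

```python
import numpy as np

# Sketch of the Hilbert-transform step: the analytic signal's magnitude gives
# the envelope and its phase gives instantaneous frequency, both usable as
# physical fingerprint features. Implemented via the FFT (one-sided spectrum).
def analytic_signal(x):
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0   # double positive frequencies
    if N % 2 == 0:
        h[N // 2] = 1.0        # keep the Nyquist bin
    return np.fft.ifft(X * h)  # negative frequencies are zeroed

t = np.linspace(0, 1, 1000, endpoint=False)
x = np.cos(2 * np.pi * 50 * t)        # toy "received" tone, 50 cycles
env = np.abs(analytic_signal(x))      # envelope of a unit tone is 1
print(round(env[500], 2))             # 1.0
```

For a real emitter signal, deviations of this envelope and of the instantaneous frequency from their nominal shapes are exactly the kind of hardware-induced imperfection a fingerprinting network can learn from.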
In this paper, we summarize recent progress in deep learning-based acoustic models and the motivations and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end, with an emphasis on feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding and discuss possible future directions in acoustic model research.
The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which strongly influence the performance of the network. We propose a genetic algorithm (GA)-based deep belief neural network (DBNN) method for robot object recognition and grasping. This method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, which reduces the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We build a database of six objects for experimental purposes. Experimental results demonstrate that our method performs well on the optimized robot object recognition and grasping tasks.
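The GA loop described above can be sketched as follows. The fitness function is a synthetic stand-in (in the paper it would be the DBNN's recognition error), and the candidate value pools are invented for illustration:

```python
import random

# Toy sketch of GA-based hyperparameter search: candidate settings
# (hidden units, epochs, learning rate) evolve by elitist selection plus
# single-gene mutation. Fitness here merely rewards closeness to a
# pretend-optimal setting of (128 units, 50 epochs, lr 0.01).
random.seed(0)
HIDDEN = [32, 64, 128, 256]
EPOCHS = [10, 20, 50]
LR = [0.1, 0.01, 0.001]

def fitness(ind):
    h, e, lr = ind
    return -(abs(h - 128) / 128 + abs(e - 50) / 50 + abs(lr - 0.01))

def mutate(ind):
    pools = (HIDDEN, EPOCHS, LR)
    i = random.randrange(3)          # pick one gene to resample
    ind = list(ind)
    ind[i] = random.choice(pools[i])
    return tuple(ind)

pop = [(random.choice(HIDDEN), random.choice(EPOCHS), random.choice(LR))
       for _ in range(20)]
for _ in range(30):                  # keep the top half, mutate copies of it
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(p) for p in pop[:10]]

print(max(pop, key=fitness))         # tends toward (128, 50, 0.01)
```

Real GA variants add crossover between parents; elitism alone already guarantees the best setting found is never lost between generations.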
Channel estimation has been considered a key issue in millimeter-wave (mmWave) massive multi-input multi-output (MIMO) communication systems, and it becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL)-based fast channel estimation method for mmWave massive MIMO systems. The proposed method can directly and effectively estimate channel state information (CSI) from the received data without performing pilot-based estimation in advance, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN)-based channel estimation network for the case of dimensional mismatch between the input and output data, subsequently denoted as the channel (H) neural network (HNN). It can quickly estimate the channel information by learning the inherent characteristics of the received data and the relationship between the received data and the channel, even though the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN achieves better channel estimation accuracy than existing schemes.
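The dimensional-mismatch idea above, learning a map from low-dimensional received data to a higher-dimensional channel representation, can be sketched with a single linear layer trained by gradient descent; the toy linear channel relation below is our assumption, standing in for the HNN's CNN:

```python
import numpy as np

# Sketch of learning a received-data -> channel mapping where the output
# (flattened channel, dim 64) is larger than the input (received data, dim 16).
# A single linear layer trained on mean-squared error stands in for the CNN.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 16))   # assumed true (toy) data-to-channel relation

X = rng.standard_normal((200, 16))  # training set of received-data vectors
Y = X @ A.T                         # corresponding channel "labels"

W = np.zeros((64, 16))
for _ in range(500):                # plain gradient descent on the MSE loss
    grad = (X @ W.T - Y).T @ X / len(X)
    W -= 0.1 * grad

print(np.allclose(W, A, atol=1e-3))  # True: the mapping is recovered
```

A real channel-data relationship is of course nonlinear and noisy, which is why the paper uses a CNN rather than a linear map, but the input/output shapes work the same way.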
Deep Learning provides a critical capability to adapt to constantly changing environments and ongoing learning dynamics, which is especially relevant in Network Intrusion Detection. In this paper, enlightened by the theory of Deep Learning Neural Networks, a newly developed Hierarchical Distributed-Agents Model for Network Risk Evaluation is proposed. The architecture of the distributed-agents model is given, along with the approach to analyzing network intrusion detection using Deep Learning; the mechanism of sharing hyper-parameters to improve learning efficiency is presented, and the hierarchical evaluative framework for network risk evaluation of the proposed model is built. Furthermore, to examine the proposed model, a series of experiments was conducted on the NSL-KDD dataset. The proposed model was able to differentiate between normal and abnormal network activities with an accuracy of 97.60% on the NSL-KDD dataset. As the experimental results indicate, the model developed in this paper is characterized by high-speed and high-accuracy processing, offering a preferable solution for network risk evaluation.
Paralytic shellfish poisoning (PSP) microalgae, as one group of harmful algal bloom species, cause great damage to offshore fisheries, marine aquaculture, and the marine ecological environment. At present, there is no technique for the real-time, accurate identification of toxic microalgae. By combining three-dimensional fluorescence with machine learning (ML) and deep learning (DL), we developed methods to classify PSP and non-PSP microalgae. The average classification accuracies of these two methods for microalgae are above 90%, and the accuracies for discriminating 12 microalgae species within the PSP and non-PSP groups are above 94%. When the emission wavelength is 650-690 nm, the fluorescence characteristic bands (excitation wavelength) occur at 410-480 nm and 500-560 nm for PSP and non-PSP microalgae, respectively. The identification accuracies of the ML models (support vector machine (SVM) and k-nearest neighbor rule (k-NN)) and the DL model (convolutional neural network (CNN)) for PSP microalgae are 96.25%, 96.36%, and 95.88%, respectively, indicating that ML and DL are suitable for the classification of toxic microalgae.
The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies as well as the low contrast of thin vessels and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address the aforementioned challenges and to reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed deep learning model consists of an encoder-decoder architecture along with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we used a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of a single 3 × 3 convolution layer as proposed in Anam-Net, to increase the receptive field and to reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters with decreasing resolution. These modifications do not compromise segmentation accuracy, but they do make the architecture significantly lighter in terms of the number of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for use in screening platforms at the point of care. We evaluated the proposed model on the open-access DRIVE, STARE, and CHASE_DB datasets. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets, whose results indicate the generalization ability and robustness of the proposed model.
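The parameter-count reasoning behind stacking small convolutions can be checked with simple arithmetic; the channel count below is illustrative, not taken from the paper:

```python
# Sketch of the receptive-field/parameter trade-off: two stacked 3x3
# convolutions cover a 5x5 receptive field with fewer weights than a single
# 5x5 convolution. Channel count (64) is an illustrative assumption.
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out  # bias terms omitted for clarity

c = 64
two_3x3 = conv_params(3, c, c) + conv_params(3, c, c)  # 73728
one_5x5 = conv_params(5, c, c)                         # 102400
print(two_3x3 < one_5x5)  # True: ~28% fewer weights for the same field
```

The same counting also shows why keeping the filter count flat at lower resolutions, as the model above does, saves the bulk of the parameters that U-Net-style doubling would spend there.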
Fault detection and isolation for high-speed train suspension systems is of critical importance to guarantee train running safety. Firstly, the existing methods concerning fault detection or isolation of train suspension systems are briefly reviewed and divided into two categories, i.e., model-based and data-driven approaches, and the advantages and disadvantages of these two categories are briefly summarized. Secondly, a 1D convolutional network-based fault diagnosis method for high-speed train suspension systems is designed. To improve the robustness of the method, a Gaussian white noise strategy (GWN-strategy) for immunity to track irregularities and an edge sample training strategy (EST-strategy) for immunity to wheel wear are proposed; the whole network is called the GWN-EST-1DCNN method. Thirdly, to show the performance of this method, a multibody dynamics simulation model of a high-speed train is built to generate the lateral acceleration of a bogie frame corresponding to different track irregularities, wheel profiles, and secondary suspension faults. The simulated signals are then input into the diagnostic network, and the results show the correctness and superiority of the GWN-EST-1DCNN method. Finally, the 1DCNN method is further validated using tracking data of a CRH3 train running on a high-speed railway line.
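The Gaussian white noise augmentation above can be sketched as follows; the SNR-based noise scaling and the toy acceleration trace are our assumptions about how such a strategy is typically implemented:

```python
import numpy as np

# Illustrative sketch of a GWN augmentation strategy: Gaussian white noise at
# a chosen signal-to-noise ratio is added to simulated bogie-frame acceleration
# so the diagnostic network becomes less sensitive to track-irregularity
# variation. The SNR parameterization is an assumption, not the paper's spec.
def add_gwn(signal, snr_db, rng):
    power = np.mean(signal ** 2)
    noise_power = power / (10 ** (snr_db / 10))
    return signal + rng.normal(0, np.sqrt(noise_power), signal.shape)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 20 * np.pi, 2000))  # toy acceleration trace
noisy = add_gwn(clean, snr_db=10, rng=rng)

measured_snr = 10 * np.log10(np.mean(clean ** 2)
                             / np.mean((noisy - clean) ** 2))
print(measured_snr)  # close to the 10 dB target
```

Training on many noisy copies of each simulated signal is what gives the network its immunity: the fault signature stays fixed while the irregularity-like noise varies.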
This study proposes a measurement platform for continuous blood pressure estimation based on dual photoplethysmography (PPG) sensors and deep learning (DL) that can be used for the continuous, rapid measurement of blood pressure and analysis of cardiovascular-related indicators. The proposed platform measures the signal changes in PPG and converts them into physiological indicators, such as pulse transit time (PTT), pulse wave velocity (PWV), perfusion index (PI), and heart rate (HR); these indicators are then fed into the DL model to calculate blood pressure. The hardware comprises two PPG components (a Raspberry Pi 3 Model B and an analog-to-digital converter [MCP3008]) connected via a serial peripheral interface. The DL algorithm converts the stable dual PPG signals, acquired through a strictly standardized experimental process, into the physiological indicators used as input parameters, and finally obtains the systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP). To increase the robustness of the DL model, data from 100 Asian participants, with and without cardiovascular disease in approximately equal proportions, were added to the training database. The experimental results revealed that the mean absolute error and standard deviation were 0.17±0.46 mmHg for SBP, 0.27±0.52 mmHg for DBP, and 0.16±0.40 mmHg for MAP.
For high-speed mobile MIMO-OFDM systems, a low-complexity deep learning (DL)-based time-varying channel estimation scheme is proposed. To reduce the number of estimated parameters, the basis expansion model (BEM) is employed to model the time-varying channel, which converts channel estimation into the estimation of the basis coefficients. Specifically, initial basis coefficients are first used to train the neural network offline, after which high-precision channel estimates can be obtained from a small number of inputs. Moreover, the linear minimum mean square error (LMMSE) channel estimate is used in the loss function during the training phase, which makes the proposed method more practical. Simulation results show that the proposed method achieves better performance with lower computational complexity than the available schemes, and that it is robust to fast time-varying channels in high-speed mobile scenarios.
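The parameter-reduction idea behind the BEM above can be sketched as follows; the complex-exponential basis and the symbol length are illustrative choices, not the paper's exact model:

```python
import numpy as np

# Sketch of a basis expansion model (BEM): a time-varying channel tap h[n]
# over one block of N samples is written as B @ c, where B holds Q+1 basis
# functions (complex exponentials here), so only Q+1 coefficients need
# estimating instead of N channel samples. N and Q are illustrative.
N, Q = 128, 2
n = np.arange(N)
B = np.exp(2j * np.pi * np.outer(n, np.arange(-(Q // 2), Q // 2 + 1)) / N)

rng = np.random.default_rng(0)
c_true = rng.standard_normal(Q + 1) + 1j * rng.standard_normal(Q + 1)
h = B @ c_true                                  # a channel lying in the basis

c_hat = np.linalg.lstsq(B, h, rcond=None)[0]    # estimate basis coefficients
print(np.allclose(c_hat, c_true))               # True: 3 numbers describe 128
```

Real channels only approximately lie in the basis, leaving a modeling error that shrinks as Q grows; the scheme above trades that error against the much smaller number of coefficients the network has to estimate.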
In recent years, intelligent data-driven prognostic methods have been successfully developed, and good machinery health assessment performance has been achieved by exploiting data from multiple sensors. However, existing data-fusion prognostic approaches generally rely on the availability of data from all sensors and are vulnerable to potential sensor malfunctions, which are likely to occur in real industrial settings, especially for machines in harsh operating environments. In this paper, a deep learning-based remaining useful life (RUL) prediction method is proposed to address the sensor malfunction problem. A global feature extraction scheme is adopted to fully exploit the information from different sensors, and adversarial learning is further introduced to extract generalized sensor-invariant features. By exploiting both global and shared features, the proposed method achieves promising and robust RUL prediction performance in testing scenarios with sensor malfunctions. The experimental results suggest the proposed approach is well suited for real industrial applications.
Flash floods are among the most dangerous natural disasters, especially in hilly terrain, causing loss of life, property, and infrastructure and sudden disruption of traffic. These floods are mostly associated with landslides and the erosion of roads within a short time. Most of Vietnam is hilly and mountainous; thus, the flash flood problem is severe and requires systematic studies to correctly identify flood-susceptible areas for proper land-use planning and traffic management. In this study, three Machine Learning (ML) methods, namely a Deep Learning Neural Network (DL), Correlation-based Feature-Weighted Naive Bayes (CFWNB), and Adaboost (AB-CFWNB), were used to develop flash flood susceptibility maps for a hilly road section (115 km in length) of National Highway (NH)-6 in Hoa Binh province, Vietnam. In the proposed models, 88 past flash flood events were used together with 14 topographical and geo-environmental factors affecting flash floods. The performance of the models was evaluated using standard statistical measures, including the Receiver Operating Characteristic (ROC) curve, Area Under the Curve (AUC), and Root Mean Square Error (RMSE). The results revealed that all the models performed well (AUC>0.80) in predicting flash flood susceptibility zones, with the DL model performing best (AUC: 0.972, RMSE: 0.352). Therefore, the DL model can be applied to develop accurate flash flood susceptibility maps of hilly terrain for the proper planning and design of highways and other infrastructure facilities, as well as land-use management of the area.
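The AUC criterion used above has a simple probabilistic reading that can be computed directly: it is the probability that the model scores a randomly chosen flood site above a randomly chosen non-flood site. The scores below are invented for illustration:

```python
# Sketch of AUC as a rank statistic: count, over all (positive, negative)
# pairs, how often the positive outscores the negative (ties count half).
# The site scores are toy values, not the study's outputs.
def auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

flood = [0.9, 0.8, 0.7, 0.4]      # model scores at flood-affected sites
no_flood = [0.6, 0.3, 0.2, 0.1]   # model scores at unaffected sites
print(auc(flood, no_flood))       # 0.9375: 15 of 16 pairs correctly ordered
```

An AUC of 0.972, as reported for the DL model above, means the model orders almost every flood/non-flood pair of locations correctly.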
Skin lesion detection and classification is a prominent issue, difficult even for extremely skilled dermatologists and pathologists. Skin disease is a common disorder triggered by fungi, viruses, bacteria, allergies, etc. Skin diseases can be dangerous and may cause serious damage, so they need to be diagnosed at an early stage; however, the diagnosis and therapy themselves are complex, requiring advanced laser and photonic treatment that involves a financial burden and other ill effects. It is therefore desirable to use artificial intelligence techniques to detect and diagnose skin disease accurately at an early stage. Several techniques have been proposed to detect skin disease at an early stage but fail to achieve sufficient accuracy. Therefore, the primary goal of this paper is to classify and detect skin diseases and provide accurate information about them. This paper addresses this issue by proposing a high-performance Convolutional Neural Network (CNN) to classify and detect skin disease at an early stage. The complete methodology unfolds in three stages: firstly, the skin disease images are pre-processed; secondly, the important features of the skin images are extracted; thirdly, the pre-processed images are analyzed at different stages using a Deep Convolutional Neural Network (DCNN). The approach proposed in this paper is simple, fast, shows accurate results of up to 98%, and is used to detect six different disease types.
Funding: supported by the National Natural Science Foundation of China (62171088, U19A2052, 62020106011) and the Medico-Engineering Cooperation Funds from the University of Electronic Science and Technology of China (ZYGX2021YGLH215, ZYGX2022YGRH005).
文摘Deep neural networks(DNNs)have achieved great success in many data processing applications.However,high computational complexity and storage cost make deep learning difficult to be used on resource-constrained devices,and it is not environmental-friendly with much power cost.In this paper,we focus on low-rank optimization for efficient deep learning techniques.In the space domain,DNNs are compressed by low rank approximation of the network parameters,which directly reduces the storage requirement with a smaller number of network parameters.In the time domain,the network parameters can be trained in a few subspaces,which enables efficient training for fast convergence.The model compression in the spatial domain is summarized into three categories as pre-train,pre-set,and compression-aware methods,respectively.With a series of integrable techniques discussed,such as sparse pruning,quantization,and entropy coding,we can ensemble them in an integration framework with lower computational complexity and storage.In addition to summary of recent technical advances,we have two findings for motivating future works.One is that the effective rank,derived from the Shannon entropy of the normalized singular values,outperforms other conventional sparse measures such as the?_1 norm for network compression.The other is a spatial and temporal balance for tensorized neural networks.For accelerating the training of tensorized neural networks,it is crucial to leverage redundancy for both model compression and subspace training.
Abstract: Although modulation classification (MC) based on deep neural networks can achieve high accuracy, catastrophic forgetting occurs when the model continues to learn new tasks. In this paper, we simulate a dynamic wireless communication environment and focus on breaking the paradigm of isolated automatic MC by proposing an algorithm for continual automatic MC. Firstly, a memory storing representative modulation signals from old tasks is built; it is used to constrain the gradient update direction on new tasks during the continual learning stage, ensuring that the loss on old tasks also keeps decreasing. Secondly, to better simulate the dynamic wireless communication environment, we employ mini-batch gradient descent, which is well suited to continual learning. Finally, the signals in the memory can be replayed to further reinforce the characteristics of old-task signals in the model. Simulation results verify the effectiveness of the method.
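The abstract does not name the algorithm used to limit the gradient update direction; one standard way to do this with an episodic memory is gradient projection in the style of A-GEM, sketched below under that assumption with toy two-dimensional gradients.

```python
import numpy as np

def project_gradient(g_new, g_mem):
    """If the new-task gradient conflicts with the gradient computed
    on replayed memory samples (negative inner product), remove the
    conflicting component so the old-task loss keeps decreasing."""
    dot = g_new @ g_mem
    if dot >= 0:            # no conflict: use the gradient as-is
        return g_new
    return g_new - (dot / (g_mem @ g_mem)) * g_mem

g_new = np.array([1.0, -2.0])   # gradient on the new-task batch
g_mem = np.array([1.0, 1.0])    # gradient on replayed memory samples
g = project_gradient(g_new, g_mem)
print(g)  # [ 1.5 -1.5], orthogonal to the memory gradient
```

The projected gradient never has a negative inner product with the memory gradient, which is exactly the "loss of old tasks is also in a downward trend" property the abstract describes.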
Abstract: This study develops artificial intelligence (AI) based analysis tools that can accurately detect COVID-19 lung infection from portable chest X-rays (CXRs). Frontline physicians and radiologists face grand challenges during the COVID-19 pandemic due to suboptimal image quality and the large volume of CXRs. Publicly available datasets of COVID-19 (N=1525), non-COVID-19 normal (N=1525), viral pneumonia (N=1342), and bacterial pneumonia (N=2521) cases were taken from the Italian Society of Medical and Interventional Radiology (SIRM), Radiopaedia, The Cancer Imaging Archive (TCIA), and Kaggle repositories. A multi-approach utilizing deep learning with ResNet101, with and without hyperparameter optimization, was employed. Additionally, the features extracted from the average-pooling layer of ResNet101 were used as input to machine learning (ML) algorithms. ResNet101 with optimized parameters yielded improved performance over the default parameters. Feeding the extracted ResNet101 features to the k-nearest neighbor (KNN) and support vector machine (SVM) classifiers yielded the highest 3-class classification performance, of 99.86% and 99.46%, respectively. The results indicate that the proposed approach can be utilized to improve the accuracy and diagnostic efficiency of CXR reading. The proposed deep learning model has the potential to further improve the efficiency of healthcare systems for proper diagnosis and prognosis of COVID-19 lung infection.
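The deep-features-plus-classical-classifier pipeline can be sketched with a minimal k-NN on stand-in feature vectors. The random clusters below take the place of the ResNet101 average-pooling features, and the three classes are only placeholders for the paper's classes.

```python
import numpy as np

def knn_predict(train_X, train_y, queries, k=3):
    """Minimal k-nearest-neighbor classifier on feature vectors
    (stand-ins for deep features from a pooling layer)."""
    # pairwise distances: (n_queries, n_train)
    d = np.linalg.norm(train_X[None, :, :] - queries[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]       # k nearest training points
    votes = train_y[idx]
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(1)
# three well-separated "feature" clusters standing in for three classes
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
train_X = np.vstack([c + 0.1 * rng.standard_normal((20, 2)) for c in centers])
train_y = np.repeat([0, 1, 2], 20)
pred = knn_predict(train_X, train_y, centers + 0.05, k=3)
print(pred)  # [0 1 2]
```

In the paper the same structure applies, only with 2048-dimensional ResNet101 features and an SVM as the alternative classifier.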
Abstract: Soil is the major source of the infinite lives on Earth, and soil quality plays a significant role in agricultural practice everywhere. Hence, evaluating soil quality is very important for determining the amount of nutrients the soil requires for a proper yield. In the present decade, the application of deep learning models in many fields of research has created great impact, and the increasing availability of soil data creates demand for remotely available open-source models, motivating the incorporation of deep learning to predict soil quality. With that concern, this paper proposes a novel model called the Improved Soil Quality Prediction Model using Deep Learning (ISQP-DL). The work considers the chemical, physical, and biological factors of the soil in a particular area to estimate its quality. Firstly, the pH ratings of soil samples are collected from a soil-testing laboratory, the acidic range is categorized through the soil test, and this data is given as the first input to a Deep Neural Network Regression (DNNR) model. Secondly, soil nutrient data is given as the second input to the DNNR model. Using this dataset, the DNNR method evaluates the fertility rate, from which the soil quality is estimated. The results show that the proposed model is effective for soil quality prediction (SQP), with good fitting and enhanced generality over the input features, and a higher rate of classification accuracy: the proposed model achieves 96.7% accuracy, higher than existing models.
Abstract: Individuals with special needs learn more slowly than their peers and need repetition for learning to be permanent. However, in crowded classrooms it is difficult for a teacher to deal with each student individually. This problem can be overcome with supportive education applications, but the majority of such applications are not designed for special education and are therefore not as efficient as expected. Special education students differ from their peers in their development, characteristics, and educational qualifications, and their handwriting skills are lower than those of their peers, which makes the task of handwriting recognition (HWR) more difficult. To overcome this problem, we propose a new personalized handwriting verification system that validates digits in the handwriting of special education students. The system uses a convolutional neural network (CNN) created and trained from scratch. The dataset was obtained by collecting the students' handwriting with the help of a tablet: a special education center was visited, and the handwritten figures of the students were collected under the supervision of special education teachers. The system is designed as person-dependent, since every student has their own writing style. Overall, the system achieves promising results, reaching a recognition accuracy of about 94%; it can verify special education students' handwritten digits with high accuracy and is ready to be integrated with a mobile application designed to teach digits to special education students.
Abstract: Undeniably, deep learning (DL) has rapidly eroded traditional machine learning in remote sensing (RS) and geoscience domains, with applications such as scene understanding, material identification, extreme weather detection, and oil spill identification, among many others; traditional machine learning algorithms are given less and less attention in the era of big data. Recently, a substantial amount of work has aimed at developing image classification approaches based on the success of DL models in computer vision, and the number of relevant articles has nearly doubled every year since 2015. Advances in remote sensing technology, as well as the rapidly expanding volume of publicly available satellite imagery on a worldwide scale, have opened up possibilities for a wide range of modern applications. However, there are challenges related to the availability of annotated data, the complex nature of the data, and model parameterization, which strongly impact performance. In this article, a comprehensive review of the literature encompassing a broad spectrum of pioneering work in remote sensing image classification is presented, including network architectures (vintage convolutional neural networks, CNN; fully convolutional networks, FCN; encoder-decoder and recurrent networks; attention models; and generative adversarial models). The characteristics, capabilities, and limitations of current DL models are examined, and potential research directions are discussed.
Funding: Supported by the National Natural Science Foundation of China (62061003), the Sichuan Science and Technology Program (2021YFG0192), and the Research Foundation of the Civil Aviation Flight University of China (ZJ2020-04, J2020-033).
Abstract: Existing specific emitter identification (SEI) methods based on hand-crafted features lose feature information and involve multiple processing stages, which reduces the identification accuracy and complicates the identification procedure. In this paper, we propose a deep SEI approach via multidimensional feature extraction of radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by virtue of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and the I-Q data are combined into balanced RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme combining physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from the raw received data end-to-end, it accelerates SEI implementation and simplifies the identification procedure. Moreover, as both the temporal and spectral features of the received signal are extracted by RFFsNet-SEI, identification accuracy is improved. Finally, we compare RFFsNet-SEI with its counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
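The Hilbert-transform step of the RFF extraction can be illustrated with the standard FFT construction of the analytic signal, from which instantaneous amplitude and phase features are read. VMD itself is omitted here, and the pure test tone is only a stand-in for a received emitter signal.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (discrete Hilbert transform):
    zero the negative frequencies and double the positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    if N % 2 == 0:
        h[N // 2] = 1
        h[1:N // 2] = 2
    else:
        h[1:(N + 1) // 2] = 2
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)          # stand-in received signal
z = analytic_signal(x)
envelope = np.abs(z)                     # instantaneous amplitude
inst_phase = np.unwrap(np.angle(z))      # instantaneous phase
print(envelope[100:900].mean())          # ~1.0 for a pure tone
```

For a real emitter, deviations of the envelope and instantaneous frequency from their nominal values are exactly the kind of physical fingerprint the paper feeds into the network.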
Abstract: In this paper, we summarize recent progress made in deep learning based acoustic models and the motivation and insights behind the surveyed techniques. We first discuss models such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) that can effectively exploit variable-length contextual information, and their various combinations with other models. We then describe models that are optimized end-to-end, with emphasis on feature representations learned jointly with the rest of the system, the connectionist temporal classification (CTC) criterion, and the attention-based sequence-to-sequence translation model. We further illustrate robustness issues in speech recognition systems, and discuss acoustic model adaptation, speech enhancement and separation, and robust training strategies. We also cover modeling techniques that lead to more efficient decoding, and discuss possible future directions in acoustic model research.
Abstract: The performance of deep learning (DL) networks has been increased by elaborating the network structures. However, DL networks have many parameters, which strongly influence network performance. We propose a genetic algorithm (GA) based deep belief neural network (DBNN) method for robot object recognition and grasping. The method optimizes the parameters of the DBNN, such as the number of hidden units, the number of epochs, and the learning rates, to reduce the error rate and the network training time of object recognition. After recognizing objects, the robot performs pick-and-place operations. We built a database of six objects for experimental purposes. Experimental results demonstrate that our method outperforms the baselines on the optimized robot object recognition and grasping tasks.
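A minimal, mutation-only GA over the three hyperparameters the abstract names (hidden units, epochs, learning rate) can be sketched as follows. The fitness function here is a cheap synthetic stand-in for the DBNN validation error, which would be far more expensive to evaluate; its optimum is placed at 128 hidden units, 30 epochs, and a learning rate of 1e-3 purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(h, e, lr):
    """Stand-in validation-error surface (the real fitness would
    train the DBNN with these hyperparameters and return its error)."""
    return (h - 128) ** 2 / 1e4 + (e - 30) ** 2 / 1e2 + (np.log10(lr) + 3) ** 2

def ga(pop=20, gens=60):
    # genome: (hidden units, epochs, log10 learning rate)
    P = np.column_stack([rng.integers(16, 512, pop),
                         rng.integers(5, 100, pop),
                         rng.uniform(-5, -1, pop)]).astype(float)
    for _ in range(gens):
        scores = np.array([fitness(h, e, 10 ** l) for h, e, l in P])
        parents = P[np.argsort(scores)[:pop // 2]]              # elitist selection
        kids = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
        kids += rng.normal(0, [8, 2, 0.1], kids.shape)          # mutation
        P = np.vstack([parents, kids])
    return P[np.argmin([fitness(h, e, 10 ** l) for h, e, l in P])]

h, e, l = ga()
print(round(h), round(e), 10 ** l)  # near 128 hidden units, 30 epochs, lr ~1e-3
```

Crossover is omitted for brevity; the paper's GA may differ in its operators, but the select-mutate-evaluate loop is the core of the approach.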
Funding: Supported by the National Key R&D Program of China (2018YFB1802004) and the 111 Project (B08038).
Abstract: Channel estimation is a key issue in millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) communication systems, and it becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL) based fast channel estimation method for mmWave massive MIMO systems. The proposed method directly and effectively estimates channel state information (CSI) from the received data without performing a pilot-based estimate in advance, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN) based channel estimation network for the case of dimensional mismatch between input and output data, denoted as the channel (H) neural network (HNN). It quickly estimates the channel by learning the inherent characteristics of the received data and the relationship between the received data and the channel, even though the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN achieves better channel estimation accuracy than existing schemes.
Funding: This work is supported by the National Key Research and Development Program of China under Grant 2016YFB0800600, the Natural Science Foundation of China under Grants 61872254 and U1736212, the Fundamental Research Funds for the Central Universities (YJ201727, A0920502051815-98), the Academic and Technical Leaders' Training Support Fund of Sichuan Province (2016), and the research projects of the Humanity and Social Science Youth Foundation of the Ministry of Education (13YJCZH021). We convey our grateful appreciation to the corresponding author of this paper, Gang Liang, who offered advice of great value at all stages of writing this essay.
Abstract: Deep learning offers a critical capability for systems operating in constantly changing environments with ongoing learning dynamics, which is especially relevant to network intrusion detection. In this paper, enlightened by the theory of deep learning neural networks, a newly developed Hierarchy Distributed-Agents Model for Network Risk Evaluation is proposed. The architecture of the distributed-agents model is given, along with the approach of analyzing network intrusion detection using deep learning; the mechanism of sharing hyper-parameters to improve learning efficiency is presented; and the hierarchical evaluative framework for network risk evaluation of the proposed model is built. Furthermore, to examine the proposed model, a series of experiments was conducted on the NSL-KDD datasets. The proposed model was able to differentiate between normal and abnormal network activities with an accuracy of 97.60% on NSL-KDD. As the experimental results indicate, the model developed in this paper is characterized by high-speed and high-accuracy processing and offers a preferable solution for network risk evaluation.
Funding: Supported by the National Natural Science Foundation of China (No. 41972244); partially supported by the Science and Technology Basic Resources Survey of the Ministry of Science and Technology (No. 2018FY100201) and the National Key Research and Development Program (No. 2019YFC1407900), to Siyu GOU, Shuai ZHANG, Wenyu GAN, and Tianjiu JIANG.
Abstract: Paralytic shellfish poisoning (PSP) microalgae, as one of the causes of harmful algal blooms, inflict great damage on offshore fisheries, marine aquaculture, and the marine ecological environment. At present, there is no technique for real-time, accurate identification of toxic microalgae. By combining three-dimensional fluorescence with machine learning (ML) and deep learning (DL), we developed methods to classify PSP and non-PSP microalgae. The average classification accuracies of these two methods for microalgae are above 90%, and the accuracies for discriminating 12 microalgae species among PSP and non-PSP microalgae are above 94%. When the emission wavelength is 650-690 nm, the fluorescence characteristic bands (excitation wavelength) occur at 410-480 nm for PSP microalgae and at 500-560 nm for non-PSP microalgae. The identification accuracies of the ML models (support vector machine (SVM) and k-nearest neighbor rule (k-NN)) and the DL model (convolutional neural network (CNN)) for PSP microalgae are 96.25%, 96.36%, and 95.88%, respectively, indicating that ML and DL are suitable for the classification of toxic microalgae.
Funding: The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number DRI-KSU-415.
Abstract: The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies, the low contrast of thin vessels, and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation at the cost of high computational complexity. To address these challenges and reduce the computational complexity, we propose a lightweight convolutional neural network (CNN) based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed model consists of an encoder-decoder architecture with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we use a stack of two 3×3 convolution layers (without spatial pooling in between) instead of the single 3×3 convolution layer proposed in Anam-Net, to increase the receptive field and reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters with decreasing resolution. These modifications do not compromise segmentation accuracy, but they make the architecture significantly lighter in terms of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features. In addition, being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for screening platforms at the point of care. We evaluated the proposed model on the open-access DRIVE, STARE, and CHASE_DB datasets. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximation network (SSANet), in terms of {dice coefficient, sensitivity (SN), accuracy (ACC), area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets, whose results indicate the generalization ability and robustness of the proposed model.
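The saving from stacking two 3×3 convolutions instead of using one larger kernel with the same receptive field can be checked by simple parameter counting. The channel count of 64 below is an arbitrary assumption for illustration, not the Anam-Net or proposed-model configuration.

```python
def conv_params(k, c_in, c_out, bias=True):
    """Trainable parameters of a 2-D convolution layer:
    k*k weights per (input channel, output channel) pair, plus biases."""
    return k * k * c_in * c_out + (c_out if bias else 0)

c = 64
two_3x3 = 2 * conv_params(3, c, c)   # stacked pair, 5x5 receptive field
one_5x5 = conv_params(5, c, c)       # single layer, same receptive field
print(two_3x3, one_5x5)  # 73856 102464
```

With equal channel widths, the stacked pair covers the same 5×5 receptive field with roughly 28% fewer parameters, and interleaves an extra nonlinearity; the further savings reported in the abstract come from also reducing the filter counts.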
Funding: Supported by the National Nature Science Foundation of China (No. 71871188), the Fundamental Research Funds for the Central Universities (No. 2682021CX051), and the China Scholarship Council (No. 201707000113).
Abstract: Fault detection and isolation for high-speed train suspension systems is of critical importance to guarantee train running safety. Firstly, the existing methods for fault detection or isolation of train suspension systems are briefly reviewed and divided into two categories, i.e., model-based and data-driven approaches, and the advantages and disadvantages of the two categories are summarized. Secondly, a 1D convolutional network based fault diagnostic method for high-speed train suspension systems is designed. To improve the robustness of the method, a Gaussian white noise strategy (GWN-strategy) for immunity to track irregularities and an edge sample training strategy (EST-strategy) for immunity to wheel wear are proposed; the whole network is called the GWN-EST-1DCNN method. Thirdly, to evaluate the performance of this method, a multibody dynamics simulation model of a high-speed train is built to generate the lateral acceleration of a bogie frame corresponding to different track irregularities, wheel profiles, and secondary suspension faults. The simulated signals are then input into the diagnostic network, and the results show the correctness and superiority of the GWN-EST-1DCNN method. Finally, the 1DCNN method is further validated using tracking data of a CRH3 train running on a high-speed railway line.
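The GWN-strategy amounts to training the 1DCNN on noise-corrupted signals. A minimal sketch of SNR-controlled Gaussian noise injection follows; the sine wave is only a stand-in for simulated bogie-frame lateral acceleration, and the 10 dB target is an arbitrary choice.

```python
import numpy as np

def gwn_augment(signal, snr_db, seed=0):
    """Add Gaussian white noise at a target signal-to-noise ratio (dB),
    mimicking training on noise-corrupted acceleration signals."""
    rng = np.random.default_rng(seed)
    p_sig = np.mean(signal ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

x = np.sin(np.linspace(0, 20 * np.pi, 2000))   # stand-in acceleration signal
x_aug = gwn_augment(x, snr_db=10)
# the measured SNR of x_aug is close to the 10 dB target
```

Training on such augmented copies alongside the clean signals is what gives the network its claimed immunity to track-irregularity variation.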
Funding: This study was supported in part by the Ministry of Science and Technology under Grant MOST 108-2221-E-150-022-MY3, and by Taiwan Ocean University.
Abstract: This study proposed a measurement platform for continuous blood pressure estimation based on dual photoplethysmography (PPG) sensors and deep learning (DL), which can be used for continuous, rapid measurement of blood pressure and analysis of cardiovascular-related indicators. The platform measures changes in the PPG signals and converts them into physiological indicators, such as pulse transit time (PTT), pulse wave velocity (PWV), perfusion index (PI), and heart rate (HR); these indicators are then fed into the DL model to calculate blood pressure. The hardware comprises two PPG components (a Raspberry Pi 3 Model B and an analog-to-digital converter [MCP3008]) connected via a serial peripheral interface. The DL algorithm converts the stable dual PPG signals, acquired through a strictly standardized experimental process, into the physiological indicators used as input parameters, and finally outputs the systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP). To increase the robustness of the DL model, data from 100 Asian participants, with and without cardiovascular disease in approximately equal proportions, were entered into the training database. The experimental results revealed mean absolute errors and standard deviations of 0.17±0.46 mmHg for SBP, 0.27±0.52 mmHg for DBP, and 0.16±0.40 mmHg for MAP.
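PTT extraction from the two PPG channels can be sketched with cross-correlation on synthetic pulse trains. The sampling rate, Gaussian pulse shape, beat times, 40 ms transit time, and the 0.5 m sensor separation used for PWV are all assumptions for illustration, not values from the paper.

```python
import numpy as np

fs = 500.0                         # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
true_delay = 0.040                 # assumed 40 ms pulse transit time

def pulse(t, t0, width=0.05):
    """Gaussian stand-in for one pulse wave arriving at time t0."""
    return np.exp(-((t - t0) ** 2) / (2 * width ** 2))

beats = (0.4, 1.0, 1.6)            # 0.6 s apart -> 100 bpm heart rate
ppg1 = sum(pulse(t, t0) for t0 in beats)                 # proximal site
ppg2 = sum(pulse(t, t0 + true_delay) for t0 in beats)    # distal site

# PTT = lag that maximizes the cross-correlation of the two channels
corr = np.correlate(ppg2, ppg1, mode="full")
lag = np.argmax(corr) - (len(t) - 1)
ptt = lag / fs
pwv = 0.5 / ptt                    # assumed 0.5 m sensor separation
print(ptt, pwv)  # 0.04 12.5
```

In the platform these indicators (PTT, PWV, HR, PI) are the inputs to the DL model rather than end results in themselves.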
Funding: Supported by the National Science Foundation Program of Jiangsu Province (No. BK20191378), the National Science Research Project of Jiangsu Higher Education Institutions (No. 18KJB510034), the China Postdoctoral Science Fund Special Funding Project (No. 2018T110530), the Key Technologies R&D Program of Jiangsu Province (No. BE2022067, BE2022067-2), and the Major Research Program Key Project (No. 92067201).
Abstract: For high-speed mobile MIMO-OFDM systems, a low-complexity deep learning (DL) based time-varying channel estimation scheme is proposed. To reduce the number of estimated parameters, the basis expansion model (BEM) is employed to model the time-varying channel, which converts channel estimation into estimation of the basis coefficients. Specifically, initial basis coefficients are first used to train the neural network in an offline manner, and then high-precision channel estimates can be obtained from a small number of inputs. Moreover, the linear minimum mean square error (LMMSE) channel estimate is used in the loss function during the training phase, which makes the proposed method more practical. Simulation results show that the proposed method achieves better performance and lower computational complexity than the available schemes, and it is robust to fast time-varying channels in high-speed mobile scenarios.
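The BEM idea of estimating a few basis coefficients instead of every channel sample can be sketched with a complex-exponential basis and least squares. The complex-exponential choice and the block length are assumptions; the abstract does not specify which basis is used.

```python
import numpy as np

N, Q = 128, 5                    # block length, number of basis functions
n = np.arange(N)
# complex-exponential (CE) basis: a common BEM choice (an assumption here)
q = np.arange(-(Q // 2), Q // 2 + 1)
B = np.exp(2j * np.pi * np.outer(n, q) / N)        # (N, Q) basis matrix

rng = np.random.default_rng(0)
c_true = (rng.standard_normal(Q) + 1j * rng.standard_normal(Q)) / np.sqrt(2)
h = B @ c_true                   # time-varying channel tap over the block

# estimate Q coefficients instead of N channel samples
c_hat, *_ = np.linalg.lstsq(B, h, rcond=None)
print(np.allclose(c_hat, c_true))  # True
```

Reducing the unknowns from N samples to Q coefficients is what makes the subsequent neural-network estimation of the coefficients low-complexity.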
Funding: Supported by the National Science Fund for Distinguished Young Scholars of China (52025056) and the Fundamental Research Funds for the Central Universities (xzy012022062).
Abstract: In recent years, intelligent data-driven prognostic methods have been successfully developed, and good machinery health assessment performance has been achieved by exploiting data from multiple sensors. However, existing data-fusion prognostic approaches generally rely on the availability of all sensors and are vulnerable to potential sensor malfunctions, which are likely to occur in real industrial settings, especially for machines in harsh operating environments. In this paper, a deep learning based remaining useful life (RUL) prediction method is proposed to address the sensor malfunction problem. A global feature extraction scheme is adopted to fully exploit the information from different sensors, and adversarial learning is further introduced to extract generalized, sensor-invariant features. By exploiting both global and shared features, the proposed method achieves promising and robust RUL prediction performance in testing scenarios with sensor malfunctions. The experimental results suggest the proposed approach is well suited to real industrial applications.
Funding: Funded by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant No. 105.08-2019.03.
Abstract: Flash floods are one of the most dangerous natural disasters, especially in hilly terrain, causing loss of life, property, and infrastructure and sudden disruption of traffic; they are mostly associated with landslides and road erosion within a short time. Most of Vietnam is hilly and mountainous; thus, the flash flood problem is severe and requires systematic studies to correctly identify flood-susceptible areas for proper land-use planning and traffic management. In this study, three machine learning (ML) methods, namely the Deep Learning Neural Network (DL), Correlation-based Feature-Weighted Naive Bayes (CFWNB), and Adaboost (AB-CFWNB), were used to develop flash flood susceptibility maps for a hilly road section (115 km in length) of National Highway (NH)-6 in Hoa Binh province, Vietnam. In the proposed models, 88 past flash flood events were used together with 14 topographical and geo-environmental factors affecting flash floods. The performance of the models was evaluated using standard statistical measures, including the Receiver Operating Characteristic (ROC) curve, Area Under the Curve (AUC), and Root Mean Square Error (RMSE). The results revealed that all the models performed well (AUC>0.80) in predicting flash flood susceptibility zones, with the DL model performing best (AUC: 0.972, RMSE: 0.352). Therefore, the DL model can be applied to develop accurate flash flood susceptibility maps of hilly terrain, which can be used for proper planning and design of highways and other infrastructure facilities, besides land-use management of the area.
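The AUC used above to compare the models can be computed directly from ranks via the Mann-Whitney formulation. This minimal sketch assumes distinct scores (ties are not handled), and the labels/scores below are illustrative only.

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formula:
    the fraction of (positive, negative) pairs ranked correctly."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([0, 0, 1, 1, 1])                  # 1 = flood event occurred
s = np.array([0.1, 0.4, 0.35, 0.8, 0.9])       # model susceptibility scores
print(auc(y, s))  # 5/6: one of the six pos/neg pairs is mis-ranked
```

An AUC above 0.80, as reported for all three models, means the model ranks a randomly chosen flood location above a randomly chosen non-flood location at least 80% of the time.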
Funding: Supported by Taif University Researchers Supporting Project Number (TURSP-2020/114), Taif University, Taif, Saudi Arabia.
Abstract: Skin lesion detection and classification is a prominent and difficult problem, even for extremely skilled dermatologists and pathologists. Skin disease is the most common disorder, triggered by fungi, viruses, bacteria, allergies, etc. Skin diseases can be dangerous and may cause serious damage; they therefore need to be diagnosed at an early stage, yet the diagnosis and therapy are themselves complex, requiring advanced laser and photonic treatment that involves financial burden and other ill effects. Artificial intelligence techniques should therefore be used to detect and diagnose skin disease accurately at an early stage. Several techniques have been proposed to detect skin disease early but fail to achieve sufficient accuracy. The primary goal of this paper is thus to classify, detect, and provide accurate information about skin diseases, by proposing a high-performance convolutional neural network (CNN) that classifies and detects skin disease at an early stage. The complete methodology unfolds in several steps: firstly, the skin disease images are pre-processed; secondly, the important features of the skin images are extracted; and thirdly, the pre-processed images are analyzed at different stages using a deep convolutional neural network (DCNN). The approach proposed in this paper is simple and fast, shows accurate results up to 98%, and is used to detect six different disease types.