Encrypted traffic plays a crucial role in safeguarding network security and user privacy. However, encrypted malicious traffic can lead to numerous security issues, making the effective classification of encrypted traffic essential. Existing methods for detecting encrypted traffic face two significant challenges. First, relying solely on the original byte information for classification fails to leverage the rich temporal relationships within network traffic. Second, machine learning and convolutional neural network methods lack sufficient network expression capabilities, hindering the full exploration of traffic's potential characteristics. To address these limitations, this study introduces a traffic classification method that utilizes temporal relationships and a higher-order graph neural network, termed HGNN-ETC. This approach fully exploits the original byte information and chronological relationships of traffic packets, transforming traffic data into a graph structure to provide the model with more comprehensive context information. HGNN-ETC employs an innovative k-dimensional graph neural network to effectively capture the multi-scale structural features of traffic graphs, enabling more accurate classification. We select the ISCXVPN and USTC-TK2016 datasets for our experiments. The results show that, compared with other state-of-the-art methods, our method achieves better classification performance on different datasets, with an accuracy of about 97.00%. In addition, by analyzing the impact of varying input specifications on classification performance, we determine the optimal network data truncation strategy and confirm the model's excellent generalization ability on different datasets.
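To make the packet-to-graph idea concrete, below is a minimal sketch (assuming PyTorch Geometric; the node features, chronological edge rule, and plain GCN layers are illustrative stand-ins rather than the HGNN-ETC design) that turns a packet sequence into a graph and classifies it:

```python
# Minimal sketch: build a session graph from raw packet bytes (hypothetical data),
# connecting chronologically adjacent packets, then classify it with a small GNN.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

def session_to_graph(packets, label, max_bytes=64):
    # packets: list of bytes objects; each node feature is the packet's first
    # max_bytes bytes, zero-padded and scaled to [0, 1].
    feats = []
    for p in packets:
        b = list(p[:max_bytes]) + [0] * (max_bytes - min(len(p), max_bytes))
        feats.append([v / 255.0 for v in b])
    x = torch.tensor(feats, dtype=torch.float)
    # Chronological edges: packet i <-> packet i+1 (both directions).
    src = list(range(len(packets) - 1)) + list(range(1, len(packets)))
    dst = list(range(1, len(packets))) + list(range(len(packets) - 1))
    edge_index = torch.tensor([src, dst], dtype=torch.long)
    return Data(x=x, edge_index=edge_index, y=torch.tensor([label]))

class TrafficGNN(torch.nn.Module):
    def __init__(self, in_dim=64, hidden=128, num_classes=6):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, num_classes)

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        h = global_mean_pool(h, data.batch)   # one vector per traffic graph
        return self.head(h)

graph = session_to_graph([b"\x16\x03\x01\x02", b"\x17\x03\x03\x00\x45"], label=0)
graph.batch = torch.zeros(graph.num_nodes, dtype=torch.long)  # single-graph batch
logits = TrafficGNN()(graph)
```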
The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through the free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT, where a diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, and the horizontal polarization direction of the input distorted beam is adopted as the feature for classification through the DDNN. The numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks. The energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy remains above 95% for various strengths of turbulence. The scheme converges faster and achieves better accuracy than one based on a convolutional neural network.
We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby proposing a novel hybrid quantum deep neural network (HQDNN) for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation (INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The output results of the parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary and three-class classification experiments on the MNIST (Modified National Institute of Standards and Technology) dataset. In the first binary classification, the accuracy for digits 0 and 4 exceeds 98%. We then compare the three-class performance with other algorithms; the results on two datasets show that the classification accuracy is higher than that of a quantum deep neural network and a general quantum convolutional neural network.
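A hybrid quantum-classical pipeline of this general shape can be sketched as follows (assuming PennyLane with its PyTorch interface; simple angle embedding stands in for INEQR, and the circuit layout is illustrative rather than the HQDNN design):

```python
# Minimal sketch of a hybrid quantum-classical classifier: classical preprocessing
# feeds a parameterized quantum circuit whose measured expectation values are
# optimized by a classical optimizer. Angle embedding stands in for INEQR.
import pennylane as qml
import torch

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))             # encode features
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))  # trainable PQC
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]   # measurements

weight_shapes = {"weights": (n_layers, n_qubits, 3)}
qlayer = qml.qnn.TorchLayer(circuit, weight_shapes)

model = torch.nn.Sequential(
    torch.nn.Linear(16, n_qubits),  # e.g. a 4x4 downsampled image flattened to 16
    qlayer,
    torch.nn.Linear(n_qubits, 2),   # binary classification head
)

x = torch.rand(8, 16)               # a batch of 8 toy samples
loss = torch.nn.functional.cross_entropy(model(x), torch.randint(0, 2, (8,)))
loss.backward()                     # gradients flow through the quantum layer
```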
Hyperspectral image classification stands as a pivotal task within the field of remote sensing, yet achieving high-precision classification remains a significant challenge. In response to this challenge, a Spectral Convolutional Neural Network model based on the Adaptive Fick's Law Algorithm (AFLA-SCNN) is proposed. The Adaptive Fick's Law Algorithm (AFLA) is a novel metaheuristic algorithm introduced herein, encompassing three new strategies: an adaptive weight factor, Gaussian mutation, and a probability update policy. With the adaptive weight factor, the algorithm can adjust the weights according to the change in the number of iterations to improve its performance. Gaussian mutation helps the algorithm avoid falling into local optimal solutions and improves its search capability. The probability update strategy helps to improve the exploitability and adaptability of the algorithm. Within the AFLA-SCNN model, AFLA is employed to optimize two hyperparameters in the SCNN model, namely "numEpochs" and "miniBatchSize", to attain their optimal values. AFLA's performance is initially validated across 28 functions in 10D, 30D, and 50D for CEC2013 and 29 functions in 10D, 30D, and 50D for CEC2017. Experimental results indicate AFLA's marked performance superiority over nine other prominent optimization algorithms. Subsequently, the AFLA-SCNN model was compared with the Spectral Convolutional Neural Network model based on the Fick's Law Algorithm (FLA-SCNN), the Spectral Convolutional Neural Network model based on Harris Hawks Optimization (HHO-SCNN), the Spectral Convolutional Neural Network model based on Differential Evolution (DE-SCNN), the Spectral Convolutional Neural Network (SCNN) model, and the Support Vector Machines (SVM) model using the Indian Pines dataset and the Pavia University dataset. The experimental results show that the AFLA-SCNN model outperforms the other models in terms of Accuracy, Precision, Recall, and F1-score on Indian Pines and Pavia University. Among them, the Accuracy of the AFLA-SCNN model on Indian Pines reached 99.875%, and the Accuracy on Pavia University reached 98.022%. In conclusion, our proposed AFLA-SCNN model significantly enhances the precision of hyperspectral image classification.
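The two named strategies can be illustrated with a generic hyperparameter-search loop (the fitness function is a placeholder and the update rule is a simplified stand-in, not the actual AFLA equations):

```python
# Generic sketch of an iteration-dependent adaptive weight and Gaussian mutation
# applied to tuning (numEpochs, miniBatchSize). The fitness function is a
# placeholder; the real objective would be SCNN validation accuracy.
import numpy as np

rng = np.random.default_rng(0)
bounds = np.array([[5, 100],      # numEpochs
                   [8, 256]])     # miniBatchSize
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(20, 2))

def fitness(candidate):
    epochs, batch = candidate
    # Placeholder: stands in for training the SCNN and measuring accuracy.
    return -((epochs - 60) ** 2 / 1e4 + (batch - 64) ** 2 / 1e5)

T = 50
for t in range(T):
    w = 0.9 - 0.5 * t / T                      # adaptive weight: explore -> exploit
    best = pop[np.argmax([fitness(c) for c in pop])]
    for i in range(len(pop)):
        step = w * (best - pop[i])             # move toward the current best
        if rng.random() < 0.2:                 # probabilistic Gaussian mutation
            step += rng.normal(0.0, 1.0, size=2) * (bounds[:, 1] - bounds[:, 0]) * 0.05
        pop[i] = np.clip(pop[i] + step, bounds[:, 0], bounds[:, 1])

num_epochs, mini_batch_size = np.round(best).astype(int)
```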
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
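A plausible reading of the DDSC layer is a depthwise convolution with dilation followed by a pointwise 1×1 convolution, as in the sketch below (channel counts and dilation rate are illustrative):

```python
# Sketch of a depthwise dilated separable convolution (DDSC) block: the depthwise
# convolution with dilation enlarges the receptive field, and the pointwise 1x1
# convolution mixes channels.
import torch
import torch.nn as nn

class DDSC(nn.Module):
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
y = DDSC(32, 64)(x)          # same spatial size, 64 output channels
```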
Locomotor intent classification has become a research hotspot due to its importance to the development of assistive robotics and wearable devices. Previous work has achieved impressive performance in classifying steady locomotion states. However, it remains challenging for these methods to attain high accuracy when facing transitions between steady locomotion states, due to the similarities between the information of the transitions and that of their adjacent steady states. Furthermore, most of these methods rely solely on data and overlook the objective laws governing physical activities, resulting in lower accuracy, particularly when encountering complex locomotion modes such as transitions. To address these deficiencies, we propose the locomotion rule embedding long short-term memory (LSTM) network with Attention (LREAL) for human locomotor intent classification, with a particular focus on transitions, using data from fewer sensors (two inertial measurement units and four goniometers). The LREAL network consists of two levels: one responsible for distinguishing between steady states and transitions, and the other for the accurate identification of locomotor intent. Each classifier in these levels is composed of multiple LSTM layers and an attention mechanism. To introduce real-world motion rules and apply constraints to the network, prior knowledge was added to the network via a rule-modulating block. The method was tested on the ENABL3S dataset, which contains continuous locomotion data for seven steady states and twelve transition states. Experimental results showed that the LREAL network could recognize locomotor intents with an average accuracy of 99.03% and 96.52% for the steady and transition states, respectively. It is worth noting that the LREAL network's accuracy for transition-state recognition improved by 0.18% compared to other state-of-the-art networks, while using data from fewer sensors.
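A single building block of this kind, stacked LSTM layers with additive attention over time steps, can be sketched as follows (the two-level hierarchy and the rule-modulating block are omitted; the channel count is a placeholder, while the 19 classes correspond to the seven steady and twelve transition states):

```python
# Sketch of one LREAL-style building block: stacked LSTM layers followed by a
# simple attention pooling over time steps.
import torch
import torch.nn as nn

class LSTMAttentionClassifier(nn.Module):
    def __init__(self, in_dim=10, hidden=64, num_classes=19):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # one score per time step
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                         # x: (batch, time, sensor channels)
        h, _ = self.lstm(x)                       # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)    # attention weights over time
        context = (w * h).sum(dim=1)              # weighted sum of hidden states
        return self.head(context)

signals = torch.randn(4, 200, 10)   # 4 windows, 200 samples, 10 sensor channels
logits = LSTMAttentionClassifier()(signals)
```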
With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, in the existing GNN-based methods, a single graph neural network or graph filter is mainly used to extract HSI features, which does not take full advantage of various graph neural networks (graph filters). Moreover, traditional GNNs suffer from the problem of oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former can well extract the spectral features of the nodes, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of different graph filters. In addition, to address the problem of oversmoothing, a dense network is proposed, in which the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that the DHMG dramatically outperforms the state-of-the-art models.
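The two-branch idea can be sketched as follows (assuming PyTorch Geometric; GCNConv stands in for the spectral filter, and the dense connections are omitted):

```python
# Sketch of a two-branch hybrid graph model: one branch uses a spectral-type
# filter (GCNConv as a stand-in), the other an ARMA filter, and a GraphSAGE
# layer refines the fused features. Sizes are illustrative.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, ARMAConv, SAGEConv

class TwoBranchGNN(nn.Module):
    def __init__(self, in_dim, hidden=64, num_classes=16):
        super().__init__()
        self.spectral = GCNConv(in_dim, hidden)
        self.arma = ARMAConv(in_dim, hidden)
        self.refine = SAGEConv(2 * hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index):
        a = self.spectral(x, edge_index).relu()
        b = self.arma(x, edge_index).relu()
        h = torch.cat([a, b], dim=-1)          # fuse the two branches
        h = self.refine(h, edge_index).relu()  # GraphSAGE-based refinement
        return self.head(h)                    # per-node class scores

x = torch.randn(100, 30)                       # 100 graph nodes, 30 spectral bands
edge_index = torch.randint(0, 100, (2, 400))   # random toy adjacency
out = TwoBranchGNN(30)(x, edge_index)
```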
Automatic modulation classification (AMC) aims at identifying the modulation of received signals, which is a significant approach to identifying targets in military and civil applications. In this paper, a novel data-driven framework named the convolutional and transformer-based deep neural network (CTDNN) is proposed to improve classification performance. CTDNN can be divided into four modules, i.e., a convolutional neural network (CNN) backbone, a transition module, a transformer module, and a final classifier. In the CNN backbone, a wide and deep convolution structure is designed, which consists of 1×15 convolution kernels and intensive cross-layer connections instead of traditional 1×3 kernels and sequential connections. In the transition module, a 1×1 convolution layer is utilized to compress the channels of the previous multi-scale CNN features. In the transformer module, three self-attention layers are designed for extracting global features and generating the classification vector. In the classifier, the final decision is made based on the maximum a posteriori probability. Extensive simulations are conducted, and the results show that our proposed CTDNN achieves superior classification performance compared with traditional deep models.
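Following the module order described above, a simplified sketch might look like this (widths, depths, and the number of modulation classes are illustrative, and the dense cross-layer connections are not reproduced):

```python
# Sketch in the described module order: wide 1x15 convolutions over I/Q samples,
# a 1x1 transition convolution, a 3-layer self-attention encoder, and a softmax
# classifier whose argmax gives a MAP-style decision.
import torch
import torch.nn as nn

class CTDNNSketch(nn.Module):
    def __init__(self, num_classes=11):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=15, padding=7), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=15, padding=7), nn.ReLU(),
        )
        self.transition = nn.Conv1d(128, 64, kernel_size=1)       # channel compression
        layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=3)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                         # x: (batch, 2, samples) I/Q signal
        h = self.transition(self.backbone(x))
        h = self.transformer(h.transpose(1, 2))   # (batch, samples, 64)
        return self.head(h.mean(dim=1))           # pooled classification vector

iq = torch.randn(8, 2, 128)
pred = CTDNNSketch()(iq).softmax(dim=-1).argmax(dim=-1)
```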
Achieving accurate classification of colorectal polyps during colonoscopy can avoid unnecessary endoscopic biopsy or resection. This study aimed to develop a deep learning model that can automatically classify colorectal polyps histologically on white-light and narrow-band imaging (NBI) colonoscopy images based on the World Health Organization (WHO) and Workgroup serrAted polypS and Polyposis (WASP) classification criteria for colorectal polyps. White-light and NBI colonoscopy images of colorectal polyps with pathological results were first collected and classified into four categories: conventional adenoma, hyperplastic polyp, sessile serrated adenoma/polyp (SSAP) and normal, among which conventional adenoma could be further divided into three sub-categories of tubular adenoma, villous adenoma and villous-tubular adenoma; subsequently, the images were re-classified into six categories. In this paper, we propose a novel convolutional neural network termed Polyp-DedNet for the four- and six-category classification tasks of colorectal polyps. Based on the existing classification network ResNet50, Polyp-DedNet adopts dilated convolution to retain more high-dimensional spatial information and an Efficient Channel Attention (ECA) module to further improve the classification performance. To eliminate gridding artifacts caused by dilated convolutions, traditional convolutional layers were used instead of the max pooling layer, and two convolutional layers with progressively decreasing dilation were added at the end of the network. Due to the inevitable imbalance of medical image data, a regularization method, DropBlock, and a Class-Balanced (CB) Loss were employed to prevent network overfitting. Furthermore, 5-fold cross-validation was adopted to estimate the performance of Polyp-DedNet for the multi-class classification of colorectal polyps. Mean accuracies of the proposed Polyp-DedNet for the four- and six-category classifications of colorectal polyps were 89.91%±0.92% and 85.13%±1.10%, respectively. The metrics of precision, recall and F1-score were also improved by 1%–2% compared to the baseline ResNet50. The proposed Polyp-DedNet presents state-of-the-art performance for colorectal polyp classification on white-light and NBI colonoscopy images, highlighting its considerable potential as an AI-assisted system for accurate colorectal polyp diagnosis in colonoscopy.
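The ECA module mentioned above is a small channel-attention block; a typical implementation looks like the following (the kernel size is fixed here rather than derived adaptively from the channel count):

```python
# Sketch of an Efficient Channel Attention (ECA) block: global average pooling
# gives one descriptor per channel, a small 1-D convolution models local
# cross-channel interaction, and a sigmoid gate rescales the channels.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                         # x: (batch, channels, H, W)
        y = x.mean(dim=(2, 3))                    # channel descriptors
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # local cross-channel interaction
        return x * self.sigmoid(y)[:, :, None, None]

feat = torch.randn(2, 256, 14, 14)
out = ECA()(feat)                                 # same shape, channel-reweighted
```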
Graphs help to define the relationships between entities in the data. These relationships, represented by edges, often provide additional context information which can be utilised to discover patterns in the data. Graph Neural Networks (GNNs) employ the inductive bias of the graph structure to learn and predict on various tasks. The primary operation of graph neural networks is the feature aggregation step performed over neighbours of the node based on the structure of the graph. In addition to its own features, for each hop, the node gets additional combined features from its neighbours. These aggregated features help define the similarity or dissimilarity of the nodes with respect to the labels and are useful for tasks like node classification. However, in real-world data, features of neighbours at different hops may not correlate with the node's features. Thus, any indiscriminate feature aggregation by a GNN might cause the addition of noisy features, leading to degradation in the model's performance. In this work, we show that selective aggregation of node features from various hops leads to better performance than default aggregation on the node classification task. Furthermore, we propose a Dual-Net GNN architecture with a classifier model and a selector model. The classifier model trains over a subset of input node features to predict node labels, while the selector model learns to provide the optimal input subset to the classifier for the best performance. These two models are trained jointly to learn the best subset of features that gives higher accuracy in node label predictions. With extensive experiments, we show that our proposed model outperforms both feature selection methods and state-of-the-art GNN models, with remarkable improvements of up to 27.8%.
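The selector/classifier pairing can be sketched with soft feature gates standing in for the paper's subset selection (the joint loss and gate design here are assumptions):

```python
# Sketch of the selector/classifier idea: a selector network emits per-feature
# gates, the classifier sees only the gated features, and both are trained jointly.
import torch
import torch.nn as nn

class DualNet(nn.Module):
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.selector = nn.Sequential(nn.Linear(in_dim, in_dim), nn.Sigmoid())
        self.classifier = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, x):
        gates = self.selector(x)          # soft feature-selection mask in [0, 1]
        return self.classifier(x * gates), gates

model = DualNet(in_dim=500, num_classes=7)
x, y = torch.randn(32, 500), torch.randint(0, 7, (32,))
logits, gates = model(x)
# Joint objective: classification loss plus a sparsity penalty on the gates.
loss = nn.functional.cross_entropy(logits, y) + 1e-3 * gates.mean()
loss.backward()
```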
Time series classification (TSC) has attracted a lot of attention for time series data mining tasks and has been applied in various fields. With the success of deep learning (DL) in computer vision recognition, people are starting to use deep learning to tackle TSC tasks. Quantum neural networks (QNNs) have recently demonstrated their superiority over traditional machine learning in areas such as image processing and natural language processing, but research using quantum neural networks to handle TSC tasks has not received enough attention. Therefore, we propose a learning framework based on multiple imaging and a hybrid QNN (MIHQNN) for TSC tasks. We investigate the possibility of converting 1D time series to 2D images and classifying the converted images using a hybrid QNN. We explore the differences between MIHQNN based on single time series imaging and MIHQNN based on the fusion of multiple time series imaging. Four quantum circuits were also selected and designed to study the impact of quantum circuits on TSC tasks. We tested our method on several standard datasets and achieved significant results compared to several current TSC methods, demonstrating the effectiveness of MIHQNN. This research highlights the potential of applying quantum computing to TSC and provides the theoretical and experimental background for future research.
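One common way to turn a 1D series into a 2D image is the Gramian Angular Summation Field; a minimal NumPy version is shown below (the framework above fuses several imaging methods and adds a hybrid quantum classifier, which are not reproduced here):

```python
# Sketch of one common time-series-to-image transform, the Gramian Angular
# Summation Field (GASF).
import numpy as np

def gasf(series):
    # Rescale to [-1, 1], map each value to an angle, and build cos(phi_i + phi_j).
    s = np.asarray(series, dtype=float)
    s = 2 * (s - s.min()) / (s.max() - s.min() + 1e-12) - 1
    phi = np.arccos(np.clip(s, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])     # (T, T) image

ts = np.sin(np.linspace(0, 6 * np.pi, 64))
image = gasf(ts)            # 64x64 matrix ready for a 2-D classifier
```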
The use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique. The increase in retinal diseases is alarming, as they may lead to permanent blindness if left untreated. Automation of the diagnosis process for retinal diseases not only assists ophthalmologists in correct decision-making but also saves time. Several researchers have worked on automated retinal disease classification but were restricted either to hand-crafted feature selection or binary classification. This paper presents a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images. For this research, the data have been collected and combined from three distinct sources. The images are preprocessed to enhance the details. Six layers of a convolutional neural network (CNN) are used for automated feature extraction and the classification of 20 retinal diseases. It is observed that the results depend on the number of classes. For binary classification (healthy vs. unhealthy), up to 100% accuracy has been achieved. When 16 classes are used (treating stages of a disease as a single class), 93.3% accuracy, 92% sensitivity and 93% specificity have been obtained. For 20 classes (treating stages of the disease as separate classes), the accuracy, sensitivity and specificity drop to 92.4%, 92% and 92%, respectively.
The quality of maize seeds affects the outcome of planting and harvesting, so seed quality inspection has become very important. Traditional seed quality detection methods are labor-intensive and time-consuming, whereas seed quality detection using computer vision techniques is efficient and accurate. In this study, we conducted transfer learning training on the AlexNet, VGG11 and ShuffleNetV2 network models and found, by comparing various metrics, that ShuffleNetV2 has a high accuracy rate for maize seed classification and recognition. The features of the seed images were extracted through image pre-processing methods, and then the AlexNet, VGG11 and ShuffleNetV2 models were used for training and classification. A total of 2081 seed images containing four varieties were used for training and testing. The experimental results showed that ShuffleNetV2 could efficiently distinguish different varieties of maize seeds with the highest classification accuracy of 100%, with a model parameter size of 20.65 MB and a response time of 0.45 s for a single image. Therefore, the method is highly practical and has considerable extension value.
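The transfer-learning setup described above can be sketched with torchvision (only the classifier head is replaced for the four seed varieties; preprocessing and the full training loop are omitted):

```python
# Sketch: load an ImageNet-pretrained ShuffleNetV2, replace the final fully
# connected layer for four seed varieties, and fine-tune on seed images.
import torch
import torch.nn as nn
from torchvision import models

model = models.shufflenet_v2_x1_0(weights=models.ShuffleNet_V2_X1_0_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)     # four maize seed varieties

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)              # stand-in for a preprocessed batch
labels = torch.randint(0, 4, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```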
Phishing attacks pose a significant security threat by masquerading as trustworthy entities to steal sensitive information, a problem that persists despite user awareness. This study addresses the pressing issue of phishing attacks on websites and assesses the performance of three prominent Machine Learning (ML) models, namely Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM), utilizing authentic datasets sourced from the Kaggle and Mendeley repositories. Extensive experimentation and analysis reveal that the CNN model achieves the best accuracy, 98%, while LSTM shows the lowest accuracy, 96%. These findings underscore the potential of ML techniques in enhancing phishing detection systems and bolstering cybersecurity measures against evolving phishing tactics, offering a promising avenue for safeguarding sensitive information and online security.
A rough set based corner classification neural network, the Rough-CC4, is presented to solve document classification problems such as document representation of different document sizes, document feature selection and document feature encoding. In the Rough-CC4, the documents are described by the equivalent classes of the approximate words. By this method, the dimensions representing the documents can be reduced, which can solve the precision problems caused by the different document sizes and also blur the differences caused by the approximate words. In the Rough-CC4, a binary encoding method is introduced, through which the importance of documents relative to each equivalent class is encoded. By this encoding method, the precision of the Rough-CC4 is improved greatly and the space complexity of the Rough-CC4 is reduced. The Rough-CC4 can be used in automatic classification of documents.
The classification of infrasound events has considerable importance for improving the capability to identify types of natural disasters. Traditional infrasound classification mainly relies on machine learning algorithms applied after artificial feature extraction. However, guaranteeing the effectiveness of the extracted features is difficult. The current trend focuses on using a convolutional neural network to automatically extract features for classification. This method can extract signal spatial features automatically through a convolution kernel; however, infrasound signals contain not only spatial information but also temporal information when treated as a time series. These temporal features are also crucial. If only a convolutional neural network is used, the time dependence of the infrasound sequence will be missed. Using long short-term memory networks can compensate for the missing time-series features but induces a loss of spatial feature information of the infrasound signal. A multiscale squeeze excitation–convolutional neural network–bidirectional long short-term memory network infrasound event classification fusion model is proposed in this study to address these problems. This model automatically extracts temporal and spatial features, adaptively selects features, and realizes the fusion of the two types of features. Experimental results showed that the classification accuracy of the model was more than 98%, verifying the effectiveness and superiority of the proposed model.
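One plausible arrangement of the named components for a 1D infrasound signal is sketched below (the multiscale branches of the full model are omitted, and the class count is illustrative):

```python
# Sketch of the fusion idea: a squeeze-excitation-gated convolutional branch
# extracts spatial features, a bidirectional LSTM captures temporal dependence,
# and the two pooled feature vectors are concatenated for classification.
import torch
import torch.nn as nn

class SE1d(nn.Module):
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // r), nn.ReLU(),
                                nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, ch, length)
        w = self.fc(x.mean(dim=2))              # squeeze -> excitation weights
        return x * w[:, :, None]

class SECnnBiLstm(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(), SE1d(32))
        self.lstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(32 + 2 * 64, num_classes)

    def forward(self, x):                       # x: (batch, 1, length)
        h = self.cnn(x)                         # spatial features
        seq, _ = self.lstm(h.transpose(1, 2))   # temporal features
        fused = torch.cat([h.mean(dim=2), seq.mean(dim=1)], dim=1)
        return self.head(fused)

out = SECnnBiLstm()(torch.randn(4, 1, 1024))
```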
In the coal mining industry, the gangue separation phase poses a key challenge due to the high visual similarity between coal and gangue. Recently, separation methods have become more intelligent and efficient, using new technologies and applying different features for recognition. One such method exploits the difference in substance density, leading to excellent coal/gangue recognition. Therefore, this study uses density differences to distinguish coal from gangue by performing volume prediction on the samples. Our training samples maintain a record of 3-side images as input, with volume and weight as the ground truth for the classification. The prediction process relies on a convolutional neural network (CGVP-CNN) model that receives an input of a 3-side image and then extracts the needed features to estimate an approximation of the volume. The classification was comparatively performed via ten different classifiers, namely, K-Nearest Neighbors (KNN), Linear Support Vector Machines (Linear SVM), Radial Basis Function (RBF) SVM, Gaussian Process, Decision Tree, Random Forest, Multi-Layer Perceptron (MLP), Adaptive Boosting (AdaBoost), Naive Bayes, and Quadratic Discriminant Analysis (QDA). After several experiments on testing and training data, the results yield classification accuracies of 100%, 92%, 95%, 96%, 100%, 100%, 100%, 96%, 81%, and 92%, respectively. The tests reveal the best timing with KNN, which maintained an accuracy level of 100%. Assessing the model's generalization capability to new data is essential to ensure the efficiency of the model, so the generalization was measured by applying a cross-validation experiment. The used dataset was isolated based on the volume values to ensure the model generalizes not only to new images of the same volume but also to volumes outside the trained range. Then, the predicted volume values were passed to the classifier group, where the reported classification accuracies were (100%, 100%, 100%, 98%, 88%, 87%, 100%, 87%, 97%, 100%), respectively. Although obtaining a classification with high accuracy is the main motive, this work also achieves a remarkable reduction in data preprocessing time compared to related works. The CGVP-CNN model managed to reduce the data preprocessing time of previous works to 0.017 s while maintaining high classification accuracy using the estimated volume value.
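The density-based pipeline can be sketched as follows (assuming scikit-learn; the CGVP-CNN volume regressor is stubbed out, and the data and density threshold are synthetic placeholders):

```python
# Sketch of the density-based classification pipeline: a regressor (stubbed here)
# estimates sample volume from its images, density is computed from measured
# weight and predicted volume, and a KNN classifier separates coal from gangue.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def predict_volume(three_side_images):
    # Placeholder for the CGVP-CNN volume regression on 3-side images.
    return rng.uniform(50.0, 200.0, size=len(three_side_images))

weights = rng.uniform(80.0, 500.0, size=200)          # measured sample weights (g)
volumes = predict_volume([None] * 200)                # predicted volumes (cm^3)
density = (weights / volumes).reshape(-1, 1)          # feature used for recognition
# Synthetic labels around a plausible density threshold, with some noise.
labels = (density.ravel() + rng.normal(0, 0.1, 200) > 1.8).astype(int)

clf = KNeighborsClassifier(n_neighbors=5).fit(density, labels)
print(clf.score(density, labels))
```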
With the increasing proportion of encrypted traffic in cyberspace, the classification of encrypted traffic has become a core technology in network supervision. In recent years, many different solutions have emerged in this field. Most methods identify and classify traffic by extracting the spatiotemporal characteristics of data flows or the byte-level features of packets. However, due to changes in data transmission mediums, such as fiber optics and satellites, temporal features can exhibit significant variations caused by changes in communication links and transmission quality. Additionally, some spatial features can change due to data reordering and retransmission. Faced with these challenges, identifying encrypted traffic solely based on packet byte-level features is significantly difficult. To address this, we propose a universal packet-level encrypted traffic identification method, ComboPacket. This method utilizes convolutional neural networks to extract deep features of the current packet and its contextual information, and employs spatial and channel attention mechanisms to select and locate effective features. Experimental data show that ComboPacket can effectively distinguish between encrypted traffic service categories (e.g., File Transfer Protocol, FTP, and Peer-to-Peer, P2P) and encrypted traffic application categories (e.g., BitTorrent and Skype). Validated on the ISCX VPN-non VPN dataset, it achieves classification accuracies of 97.0% and 97.1% for service and application categories, respectively. It also provides shorter training times and higher recognition speeds. The performance and recognition capabilities of ComboPacket are significantly superior to those of the existing classification methods discussed.
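A packet-byte classifier with channel and spatial attention, in the spirit of the description above, might be sketched like this (the byte-matrix layout and the CBAM-style attention design are assumptions, not the ComboPacket architecture):

```python
# Sketch of packet-byte classification with channel and spatial attention: each
# packet (plus context) is laid out as a 2-D byte matrix, convolutions extract
# features, and channel/spatial attention reweights them.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.channel = nn.Sequential(nn.Linear(ch, ch // 4), nn.ReLU(),
                                     nn.Linear(ch // 4, ch), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(1, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):                                  # (batch, ch, H, W)
        x = x * self.channel(x.mean(dim=(2, 3)))[:, :, None, None]
        return x * self.spatial(x.mean(dim=1, keepdim=True))

class PacketClassifier(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                      ChannelSpatialAttention(32),
                                      nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, num_classes)

    def forward(self, bytes_img):                          # (batch, 1, 32, 32)
        return self.head(self.features(bytes_img).flatten(1))

packets = torch.rand(16, 1, 32, 32)    # 1024 normalized bytes per packet + context
logits = PacketClassifier()(packets)
```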
The ocean plays an important role in maintaining the equilibrium of Earth's ecology and providing humans access to a wealth of resources. To obtain a high-precision underwater image classification model, we propose a classification model that combines an EfficientNetB0 neural network and a two-hidden-layer random vector functional link network (EfficientNetB0-TRVFL). The features of underwater images were extracted using the EfficientNetB0 neural network pretrained on ImageNet, and a new fully connected layer was trained on the underwater image dataset using the transfer learning method. Transfer learning ensures the initial performance of the network and helps in the development of a high-precision classification model. Subsequently, a TRVFL was proposed to improve the classification performance of the model. The two-hidden-layer network structure exhibited high accuracy when the same number of hidden-layer nodes was used. The parameters of the second hidden layer were obtained using a novel calculation method, which reduces the output error and mitigates the performance instability caused by the random generation of RVFL parameters. Finally, the TRVFL classifier was used to classify the features and obtain the classification results. The proposed EfficientNetB0-TRVFL classification model achieved 87.28%, 74.06%, and 99.59% accuracy on the MLC2008, MLC2009, and Fish-gres datasets, respectively. The model was compared against the best convolutional neural networks and existing methods through box plots and Kolmogorov-Smirnov tests, respectively, and the improvements indicate better consistency in underwater image classification tasks. The image classification model offers important performance advantages and better stability compared with existing methods.
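An RVFL-style classifier on top of pretrained deep features can be sketched as follows (random hidden weights with a ridge-regression output layer; the paper's specific calculation for the second hidden layer is not reproduced, so a second random layer is used here for illustration):

```python
# Sketch of a two-hidden-layer RVFL-style classifier on deep features: hidden
# weights are random and fixed, and the output weights come from a
# ridge-regression closed form over the direct link plus both hidden layers.
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, Y, hidden=256, lam=1e-2):
    W1 = rng.normal(size=(X.shape[1], hidden)); b1 = rng.normal(size=hidden)
    H1 = np.tanh(X @ W1 + b1)
    W2 = rng.normal(size=(hidden, hidden)); b2 = rng.normal(size=hidden)
    H2 = np.tanh(H1 @ W2 + b2)
    D = np.hstack([X, H1, H2])                  # direct link + both hidden layers
    beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ Y)
    return (W1, b1, W2, b2, beta)

def rvfl_predict(model, X):
    W1, b1, W2, b2, beta = model
    H1 = np.tanh(X @ W1 + b1)
    H2 = np.tanh(H1 @ W2 + b2)
    return np.argmax(np.hstack([X, H1, H2]) @ beta, axis=1)

feats = rng.normal(size=(100, 1280))            # e.g. EfficientNet-B0 features
labels = np.eye(9)[rng.integers(0, 9, 100)]     # one-hot targets
model = rvfl_fit(feats, labels)
pred = rvfl_predict(model, feats)
```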
The application of deep learning techniques in the medical field, specifically for Atrial Fibrillation (AFib) detection through Electrocardiogram (ECG) signals, has witnessed significant interest. Accurate and timely diagnosis increases the patient's chances of recovery. However, issues like overfitting and inconsistent accuracy across datasets remain challenges. In a quest to address these challenges, this study presents two prominent deep learning architectures, ResNet-50 and DenseNet-121, and evaluates their effectiveness in AFib detection. The aim was to create a robust detection mechanism that consistently performs well. Metrics such as loss, accuracy, precision, sensitivity, and Area Under the Curve (AUC) were utilized for evaluation. The findings revealed that ResNet-50 surpassed DenseNet-121 in all evaluated categories. It demonstrated lower loss rates of 0.0315 and 0.0305, superior accuracy of 98.77% and 98.88%, precision of 98.78% and 98.89%, and sensitivity of 98.76% and 98.86% for training and validation, respectively, hinting at its advanced capability for AFib detection. These insights offer a substantial contribution to the existing literature on deep learning applications for AFib detection from ECG signals. The comparative performance data assist future researchers in selecting suitable deep-learning architectures for AFib detection. Moreover, the outcomes of this study are anticipated to stimulate the development of more advanced and efficient ECG-based AFib detection methodologies for more accurate and earlier detection of AFib, thereby fostering improved patient care and outcomes.
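A minimal ResNet-50 setup for binary AFib detection might look like the following (assuming torchvision; representing each ECG record as a single-channel 2D image such as a spectrogram is an assumption of this sketch):

```python
# Sketch: load the torchvision ResNet-50 backbone, adapt the first convolution to
# single-channel input, and replace the final layer with a two-class head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)       # AFib vs. normal rhythm

spectrograms = torch.randn(4, 1, 224, 224)          # stand-in ECG representations
labels = torch.tensor([0, 1, 0, 1])
loss = nn.functional.cross_entropy(model(spectrograms), labels)
loss.backward()
```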
Funding (HGNN-ETC): Supported in part by the National Key Research and Development Program of China (No. 2022YFB4500800) and the National Science Foundation of China (No. 42071431).
Funding (DDNN-based VVB classification): Project supported by the National Natural Science Foundation of China (Grant Nos. 62375140 and 62001249) and the Open Research Fund of the National Laboratory of Solid State Microstructures (Grant No. M36055).
Funding (HQDNN): Project supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Funding (AFLA-SCNN): Natural Science Foundation of Shandong Province, China (Grant No. ZR202111230202).
Funding (LREAL): Funded by the National Natural Science Foundation of China (Nos. 62072212, 62302218), the Development Project of Jilin Province of China (Nos. 20220508125RC, 20230201065GX, 20240101364JC), the National Key R&D Program (No. 2018YFC2001302), and the Jilin Provincial Key Laboratory of Big Data Intelligent Cognition (No. 20210504003GH).
Funding (CTDNN): Supported in part by the National Natural Science Foundation of China under Grants 62171045 and 62201090, and in part by the National Key Research and Development Program of China under Grants 2020YFB1807602 and 2019YFB1804404.
Funding (Polyp-DedNet): Funded by the Research Fund for the Foundation of Hebei University (DXK201914), the President of Hebei University (XZJJ201914), the Post-graduate's Innovation Fund Project of Hebei University (HBU2022SS003), the Special Project for Cultivating College Students' Scientific and Technological Innovation Ability in Hebei Province (22E50041D), the Guangdong Basic and Applied Basic Research Foundation (2021A1515011654), and the Fundamental Research Funds for the Central Universities of China (20720210117).
Funding (Dual-Net GNN): New Energy and Industrial Technology Development Organization, Grant/Award Number: JPNP20006; JSPS Grant-in-Aid for Scientific Research, Grant/Award Numbers: 21K12042, 17H01785.
Funding (MIHQNN): Project supported by the National Natural Science Foundation of China (Grant Nos. 61772295 and 61572270), the PhD Foundation of Chongqing Normal University (Grant No. 19XLB003), and the Chongqing Technology Foresight and Institutional Innovation Project (Grant No. cstc2021jsyjyzysbAX0011).
文摘Time series classification(TSC)has attracted a lot of attention for time series data mining tasks and has been applied in various fields.With the success of deep learning(DL)in computer vision recognition,people are starting to use deep learning to tackle TSC tasks.Quantum neural networks(QNN)have recently demonstrated their superiority over traditional machine learning in methods such as image processing and natural language processing,but research using quantum neural networks to handle TSC tasks has not received enough attention.Therefore,we proposed a learning framework based on multiple imaging and hybrid QNN(MIHQNN)for TSC tasks.We investigate the possibility of converting 1D time series to 2D images and classifying the converted images using hybrid QNN.We explored the differences between MIHQNN based on single time series imaging and MIHQNN based on the fusion of multiple time series imaging.Four quantum circuits were also selected and designed to study the impact of quantum circuits on TSC tasks.We tested our method on several standard datasets and achieved significant results compared to several current TSC methods,demonstrating the effectiveness of MIHQNN.This research highlights the potential of applying quantum computing to TSC and provides the theoretical and experimental background for future research.
文摘Use of deep learning algorithms for the investigation and analysis of medical images has emerged as a powerful technique.The increase in retinal dis-eases is alarming as it may lead to permanent blindness if left untreated.Automa-tion of the diagnosis process of retinal diseases not only assists ophthalmologists in correct decision-making but saves time also.Several researchers have worked on automated retinal disease classification but restricted either to hand-crafted fea-ture selection or binary classification.This paper presents a deep learning-based approach for the automated classification of multiple retinal diseases using fundus images.For this research,the data has been collected and combined from three distinct sources.The images are preprocessed for enhancing the details.Six layers of the convolutional neural network(CNN)are used for the automated feature extraction and classification of 20 retinal diseases.It is observed that the results are reliant on the number of classes.For binary classification(healthy vs.unhealthy),up to 100%accuracy has been achieved.When 16 classes are used(treating stages of a disease as a single class),93.3%accuracy,92%sensitivity and 93%specificity have been obtained respectively.For 20 classes(treating stages of the disease as separate classes),the accuracy,sensitivity and specificity have dropped to 92.4%,92%and 92%respectively.
Abstract: The quality of maize seeds affects the outcome of planting and harvesting, so seed quality inspection has become very important. Traditional seed quality detection methods are labor-intensive and time-consuming, whereas seed quality detection using computer vision techniques is efficient and accurate. In this study, we performed transfer learning with the AlexNet, VGG11 and ShuffleNetV2 network models and, by comparing various metrics, found that ShuffleNetV2 achieves a high accuracy rate for maize seed classification and recognition. The features of the seed images were extracted through image pre-processing methods, and then the AlexNet, VGG11 and ShuffleNetV2 models were used for training and classification. A total of 2081 seed images containing four varieties were used for training and testing. The experimental results showed that ShuffleNetV2 could efficiently distinguish different varieties of maize seeds, with the highest classification accuracy of 100%, a model parameter size of 20.65 MB and a single-image response time of 0.45 s. Therefore, the method has high practical and extension value.
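A hedged sketch of this kind of transfer-learning setup: load an ImageNet-pretrained ShuffleNetV2 from torchvision and replace its classifier head for the four seed varieties (the head size and fine-tuning details are assumptions):

import torch.nn as nn
from torchvision import models

model = models.shufflenet_v2_x1_0(weights=models.ShuffleNet_V2_X1_0_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)   # 4 maize varieties (dataset-specific choice)
# Fine-tune with a standard cross-entropy training loop on the preprocessed seed images.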
Abstract: Phishing attacks pose a significant security threat by masquerading as trustworthy entities to steal sensitive information, a problem that persists despite user awareness. This study addresses the pressing issue of phishing attacks on websites and assesses the performance of three prominent Machine Learning (ML) models, namely Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), utilizing authentic datasets sourced from the Kaggle and Mendeley repositories. Extensive experimentation and analysis reveal that the CNN model achieves the best accuracy of 98%, while LSTM shows the lowest accuracy of 96%. These findings underscore the potential of ML techniques in enhancing phishing detection systems and bolstering cybersecurity measures against evolving phishing tactics, offering a promising avenue for safeguarding sensitive information and online security.
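An illustrative sketch (not the paper's exact setup, whose input features are dataset-specific) of the CNN family of models being compared: a character-level 1-D CNN that scores a URL as phishing or legitimate:

import torch
import torch.nn as nn

class UrlCNN(nn.Module):
    def __init__(self, vocab_size=128, emb=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv = nn.Sequential(nn.Conv1d(emb, 64, kernel_size=5, padding=2), nn.ReLU(),
                                  nn.AdaptiveMaxPool1d(1))
        self.fc = nn.Linear(64, 2)                # phishing vs. legitimate

    def forward(self, char_ids):                  # (batch, max_len) integer-encoded characters
        x = self.embed(char_ids).transpose(1, 2)  # -> (batch, emb, max_len) for Conv1d
        return self.fc(self.conv(x).squeeze(-1))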
Funding: The National Natural Science Foundation of China (Nos. 60503020, 60373066, 60403016, 60425206), the Natural Science Foundation of Jiangsu Higher Education Institutions (No. 04KJB520096), and the Doctoral Foundation of Nanjing University of Posts and Telecommunications (No. 0302).
Abstract: A rough set based corner classification neural network, the Rough-CC4, is presented to solve document classification problems such as the representation of documents of different sizes, document feature selection and document feature encoding. In the Rough-CC4, documents are described by the equivalence classes of approximate words. By this method, the number of dimensions representing a document can be reduced, which alleviates the precision problems caused by differing document sizes and also blurs the differences caused by approximate words. In the Rough-CC4, a binary encoding method is introduced, through which the importance of a document relative to each equivalence class is encoded. This encoding method greatly improves the precision of the Rough-CC4 and reduces its space complexity. The Rough-CC4 can be used for the automatic classification of documents.
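A loose illustration of the encoding idea, assuming words are grouped into equivalence classes of approximate (near-synonymous) terms and a document becomes a binary vector marking which classes it touches strongly enough; the class map and threshold below are invented for the example:

from collections import Counter

def encode(document_tokens, word_to_class, num_classes, threshold=1):
    counts = Counter(word_to_class[w] for w in document_tokens if w in word_to_class)
    return [1 if counts[c] >= threshold else 0 for c in range(num_classes)]

word_to_class = {"car": 0, "automobile": 0, "vehicle": 0, "loan": 1, "credit": 1}
print(encode(["car", "loan", "credit", "automobile"], word_to_class, num_classes=3))
# -> [1, 1, 0]: the document activates equivalence classes 0 and 1, but not 2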
Funding: Supported by the Shaanxi Province Natural Science Basic Research Plan Project (2023-JC-YB-244).
Abstract: The classification of infrasound events is of considerable importance for improving the capability to identify types of natural disasters. Traditional infrasound classification mainly relies on machine learning algorithms applied after artificial feature extraction; however, guaranteeing the effectiveness of the extracted features is difficult. The current trend is to use a convolutional neural network to extract features automatically for classification. A convolution kernel can extract the spatial features of a signal automatically, but an infrasound signal, treated as a time series, contains not only spatial information but also temporal information, and these temporal features are also crucial. If only a convolutional neural network is used, the time dependence of the infrasound sequence will be missed; using long short-term memory networks can compensate for the missing time-series features but loses spatial feature information of the infrasound signal. To address these problems, this study proposes an infrasound event classification fusion model combining a multiscale squeeze-and-excitation convolutional neural network with a bidirectional long short-term memory network. The model automatically extracts temporal and spatial features, adaptively selects features, and fuses the two types of features. Experimental results show that the classification accuracy of the model exceeds 98%, verifying the effectiveness and superiority of the proposed model.
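A minimal sketch of the fusion idea (dimensions, a single scale, and the class count are assumptions): a 1-D CNN with squeeze-and-excitation extracts spatial features, a bidirectional LSTM extracts temporal features, and the two are concatenated before classification:

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))                 # squeeze over time, excite per channel
        return x * w.unsqueeze(-1)

class SECnnBiLstm(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(), SEBlock(32))
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(32 + 64, num_classes)

    def forward(self, x):                          # x: (batch, 1, time) infrasound waveform
        spatial = self.cnn(x).mean(dim=2)          # (batch, 32) spatial branch
        _, (h, _) = self.lstm(x.transpose(1, 2))
        temporal = torch.cat([h[0], h[1]], dim=1)  # (batch, 64) from both LSTM directions
        return self.fc(torch.cat([spatial, temporal], dim=1))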
Funding: The National Natural Science Foundation of China under Grant No. 52274159 and Grant No. 52374165, both received by E. Hu (https://www.nsfc.gov.cn/), and the China National Coal Group Key Technology Project, Grant No. 20221CY001, received by Z. Guan and E. Hu (https://www.chinacoal.com/).
Abstract: In the coal mining industry, the gangue separation phase poses a key challenge because of the high visual similarity between coal and gangue. Recently, separation methods have become more intelligent and efficient, using new technologies and applying different features for recognition. One such approach exploits the difference in substance density, leading to excellent coal/gangue recognition. Therefore, this study uses density differences to distinguish coal from gangue by performing volume prediction on the samples. Each training sample consists of 3-side images as input, with volume and weight as the ground truth for classification. The prediction process relies on a convolutional neural network (CGVP-CNN) model that receives a 3-side image as input and extracts the features needed to estimate an approximate volume. Classification was then performed with ten different classifiers, namely K-Nearest Neighbors (KNN), Linear Support Vector Machines (Linear SVM), Radial Basis Function (RBF) SVM, Gaussian Process, Decision Tree, Random Forest, Multi-Layer Perceptron (MLP), Adaptive Boosting (AdaBoost), Naive Bayes and Quadratic Discriminant Analysis (QDA). After several experiments on the training and testing data, these classifiers yielded accuracies of 100%, 92%, 95%, 96%, 100%, 100%, 100%, 96%, 81% and 92%, respectively. KNN gave the best timing while maintaining an accuracy of 100%. Assessing the model's generalization capability on new data is essential to ensure its efficiency, so model generalization was measured with a cross-validation experiment. The dataset was split based on the volume values to test generalization not only to new images of the same volume but also to volumes outside the trained range. The predicted volume values were then passed to the group of classifiers, whose reported accuracies were 100%, 100%, 100%, 98%, 88%, 87%, 100%, 87%, 97% and 100%, respectively. Although obtaining a classification with high accuracy is the main motive, this work also achieves a remarkable reduction in data preprocessing time compared with related works: the CGVP-CNN model reduces the data preprocessing time of previous works to 0.017 s while maintaining high classification accuracy using the estimated volume value.
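A hedged sketch of the density step described above: once a network has estimated the sample volume from its 3-side images, density = weight / volume separates coal from gangue; here a scikit-learn KNN stands in for the ten compared classifiers, and all numbers are invented for the example:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

weights = np.array([120.0, 95.0, 150.0, 80.0])            # grams (illustrative values)
predicted_volumes = np.array([90.0, 40.0, 110.0, 33.0])   # cm^3 from the volume-prediction model
labels = np.array([0, 1, 0, 1])                           # 0 = coal, 1 = gangue (denser)

density = (weights / predicted_volumes).reshape(-1, 1)    # density is the discriminative feature
clf = KNeighborsClassifier(n_neighbors=1).fit(density, labels)
print(clf.predict(np.array([[100.0 / 75.0]])))            # classify a new sample by its density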
Funding: The National Natural Science Foundation of China Youth Project (62302520).
Abstract: With the increasing proportion of encrypted traffic in cyberspace, the classification of encrypted traffic has become a core key technology in network supervision. In recent years, many different solutions have emerged in this field. Most methods identify and classify traffic by extracting the spatiotemporal characteristics of data flows or the byte-level features of packets. However, owing to changes in data transmission media, such as fiber optics and satellites, temporal features can exhibit significant variations caused by changes in communication links and transmission quality. Additionally, some spatial features can change because of data reordering and retransmission. Faced with these challenges, identifying encrypted traffic solely from packet byte-level features is significantly difficult. To address this, we propose a universal packet-level encrypted traffic identification method, Combo Packet. This method utilizes convolutional neural networks to extract deep features of the current packet and its contextual information, and employs spatial and channel attention mechanisms to select and locate effective features. Experimental data show that Combo Packet can effectively distinguish between encrypted traffic service categories (e.g., File Transfer Protocol, FTP, and Peer-to-Peer, P2P) and encrypted traffic application categories (e.g., BitTorrent and Skype). Validated on the ISCX VPN-nonVPN dataset, it achieves classification accuracies of 97.0% and 97.1% for the service and application categories, respectively, while providing shorter training times and higher recognition speeds. The performance and recognition capabilities of Combo Packet are significantly superior to those of the existing classification methods mentioned.
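An illustrative sketch (not the authors' implementation; layer sizes and the byte budget are assumptions) of packet-level classification with channel attention: raw packet bytes are embedded, a 1-D CNN extracts features, and a channel-attention gate re-weights them before the service or application class is predicted:

import torch
import torch.nn as nn

class PacketClassifier(nn.Module):
    def __init__(self, num_classes, max_bytes=784):
        super().__init__()
        self.embed = nn.Embedding(256, 16)                   # one embedding per possible byte value
        self.conv = nn.Sequential(nn.Conv1d(16, 64, 5, padding=2), nn.ReLU())
        self.channel_attn = nn.Sequential(nn.Linear(64, 16), nn.ReLU(),
                                          nn.Linear(16, 64), nn.Sigmoid())
        self.fc = nn.Linear(64, num_classes)

    def forward(self, byte_ids):                             # (batch, max_bytes), values in [0, 255]
        x = self.conv(self.embed(byte_ids).transpose(1, 2))  # (batch, 64, max_bytes)
        w = self.channel_attn(x.mean(dim=2))                 # channel-attention weights
        return self.fc((x * w.unsqueeze(-1)).mean(dim=2))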
Funding: Supported by the National Key R&D Program of China (No. 2022YFC2803903), the Key R&D Program of Zhejiang Province (No. 2021C03013), and the Zhejiang Provincial Natural Science Foundation of China (No. LZ20F020003).
Abstract: The ocean plays an important role in maintaining the equilibrium of Earth's ecology and providing humans with access to a wealth of resources. To obtain a high-precision underwater image classification model, we propose a classification model that combines an EfficientnetB0 neural network and a two-hidden-layer random vector functional link network (EfficientnetB0-TRVFL). The features of underwater images were extracted using an EfficientnetB0 neural network pretrained on ImageNet, and a new fully connected layer was trained on the underwater image dataset using the transfer learning method. Transfer learning ensures the initial performance of the network and helps in the development of a high-precision classification model. Subsequently, the TRVFL was proposed to improve the classification performance of the model. The two-hidden-layer network construction exhibited high accuracy when the same number of hidden-layer nodes was used. The parameters of the second hidden layer were obtained using a novel calculation method, which reduced the output error and mitigated the performance instability caused by the random generation of RVFL parameters. Finally, the TRVFL classifier was used to classify the features and obtain the classification results. The proposed EfficientnetB0-TRVFL classification model achieved 87.28%, 74.06% and 99.59% accuracy on the MLC2008, MLC2009 and Fish-gres datasets, respectively. It was compared against the best convolutional neural networks and existing methods using box plots and Kolmogorov-Smirnov tests, respectively. The improvements indicate better systematization properties in underwater image classification tasks, and the model offers important performance advantages and better stability compared with existing methods.
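A hedged sketch of the RVFL building block only (not the paper's TRVFL variant or its second-hidden-layer calculation): hidden weights are drawn at random, and only the output weights are solved in closed form with ridge regression over the concatenated hidden and direct-link features:

import numpy as np

def rvfl_fit(X, Y, n_hidden=256, ridge=1e-3, seed=0):
    # X: (N, d) feature matrix (e.g., EfficientNet embeddings); Y: (N, C) one-hot labels.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                       # random, untrained hidden layer
    D = np.hstack([H, X])                        # direct link from input to output
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([np.tanh(X @ W + b), X])
    return D @ beta                              # take argmax over columns for the predicted class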
Abstract: The application of deep learning techniques in the medical field, specifically for Atrial Fibrillation (AFib) detection from Electrocardiogram (ECG) signals, has attracted significant interest. Accurate and timely diagnosis increases a patient's chances of recovery; however, issues such as overfitting and inconsistent accuracy across datasets remain challenges. To address these challenges, this study evaluates two prominent deep learning architectures, ResNet-50 and DenseNet-121, for their effectiveness in AFib detection, with the aim of creating a robust detection mechanism that performs consistently well. Metrics such as loss, accuracy, precision, sensitivity and Area Under the Curve (AUC) were used for evaluation. The findings revealed that ResNet-50 surpassed DenseNet-121 in all evaluated categories, demonstrating lower loss (0.0315 and 0.0305), superior accuracy (98.77% and 98.88%), precision (98.78% and 98.89%) and sensitivity (98.76% and 98.86%) for training and validation, respectively, hinting at its advanced capability for AFib detection. These insights offer a substantial contribution to the existing literature on deep learning applications for AFib detection from ECG signals. The comparative performance data assist future researchers in selecting suitable deep learning architectures for AFib detection. Moreover, the outcomes of this study are expected to stimulate the development of more advanced and efficient ECG-based AFib detection methodologies for more accurate and earlier detection of AFib, thereby fostering improved patient care and outcomes.
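A hedged sketch of this kind of two-architecture comparison: both torchvision backbones are given a two-class head (AFib vs. normal) so they can be trained and evaluated under the same loop; the assumption here is that the ECG input has already been rendered as 3-channel images:

import torch.nn as nn
from torchvision import models

def build(name, num_classes=2):
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    else:
        m = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        m.classifier = nn.Linear(m.classifier.in_features, num_classes)
    return m

candidates = {n: build(n) for n in ["resnet50", "densenet121"]}
# Train and evaluate each candidate identically, then compare loss, accuracy, precision,
# sensitivity and AUC to decide which backbone to keep.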