Intrusion detection is a predominant task that monitors and protects the network infrastructure. Therefore, many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection. In particular, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) dataset is an extensively used benchmark for evaluating intrusion detection systems (IDSs), as it incorporates various network traffic attacks. A large number of studies have tackled the problem of intrusion detection using machine learning models, but the performance of these models often decreases when they are evaluated on new attacks. This has led to the adoption of deep learning techniques, which have shown significant potential for processing large datasets and thereby improving detection accuracy. For that reason, this paper focuses on stacking deep learning models, including a convolutional neural network (CNN) and a deep neural network (DNN), to improve the intrusion detection rate on the NSL-KDD dataset. Each base model is trained on the NSL-KDD dataset to extract significant features. Once the base models have been trained, the stacking process proceeds to the second stage, where a simple meta-model is trained on the predictions generated by the base models. Combining the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate. Our experimental evaluations on the NSL-KDD dataset show the efficacy of stacking deep learning models for intrusion detection. The ensemble of base models, combined with the meta-model, exceeds the performance of the individual models. Our stacking model attains an accuracy of 99% and an average F1-score of 93% in the multi-classification scenario. In addition, the training time of the proposed ensemble model is lower than that of the benchmark techniques, demonstrating its efficiency and robustness.
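A minimal sketch of the two-stage stacking idea described above, assuming preprocessed NSL-KDD-style feature arrays and integer attack labels; the layer sizes, epoch counts, and the logistic-regression meta-model are illustrative choices rather than the paper's exact configuration.

```python
# Two-stage stacking sketch: CNN and DNN base models feed a simple meta-model.
# Assumes tabular NSL-KDD-style arrays; all hyperparameters here are illustrative.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

def build_cnn(n_features, n_classes):
    return tf.keras.Sequential([
        tf.keras.layers.Reshape((n_features, 1), input_shape=(n_features,)),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def build_dnn(n_features, n_classes):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def stack_predict(X_train, y_train, X_test, n_classes):
    bases = [build_cnn(X_train.shape[1], n_classes), build_dnn(X_train.shape[1], n_classes)]
    for m in bases:
        m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        m.fit(X_train, y_train, epochs=5, batch_size=256, verbose=0)
    # Stage 2: the meta-model learns from the concatenated class-probability predictions.
    meta_train = np.hstack([m.predict(X_train, verbose=0) for m in bases])
    meta_test = np.hstack([m.predict(X_test, verbose=0) for m in bases])
    meta = LogisticRegression(max_iter=1000).fit(meta_train, y_train)
    return meta.predict(meta_test)
```

In practice the meta-model is usually trained on out-of-fold base-model predictions rather than on predictions for the same training split, to limit leakage; the simplified version above only illustrates the data flow between the two stages.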
Purpose: Many science, technology and innovation (STI) resources are tagged with several different labels. To automatically assign the appropriate labels to an instance of interest, many approaches with good performance on benchmark datasets have been proposed for the multi-label classification task in the literature. Furthermore, several open-source tools implementing these approaches have also been developed. However, the characteristics of real-world multi-label patent and publication datasets are not completely in line with those of benchmark ones. Therefore, the main purpose of this paper is to comprehensively evaluate seven multi-label classification methods on real-world datasets. Research limitations: The three real-world datasets differ in the following aspects: statement, data quality, and purposes. Additionally, open-source tools designed for multi-label classification also have intrinsic differences in their approaches to data processing and feature selection, which in turn impacts the performance of a multi-label classification approach. In the near future, we will enhance experimental precision and reinforce the validity of the conclusions by employing more rigorous control over variables through expanded parameter settings. Practical implications: The observed Macro F1 and Micro F1 scores on real-world datasets typically fall short of those achieved on benchmark datasets, underscoring the complexity of real-world multi-label classification tasks. Approaches leveraging deep learning techniques offer promising solutions by accommodating the hierarchical relationships and interdependencies among labels. With ongoing enhancements in deep learning algorithms and large-scale models, it is expected that the efficacy of multi-label classification tasks will be significantly improved, reaching a level of practical utility in the foreseeable future. Originality/value: (1) Seven multi-label classification methods are comprehensively compared on three real-world datasets. (2) The TextCNN and TextRCNN models perform better on small-scale datasets with a more complex hierarchical label structure and a more balanced document-label distribution. (3) The MLkNN method works better on the larger-scale dataset with a more unbalanced document-label distribution.
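For readers unfamiliar with the two evaluation measures mentioned above, the following sketch shows how Macro F1 and Micro F1 are typically computed for a multi-label task with scikit-learn; the indicator matrices are toy data, not the paper's datasets.

```python
# Macro F1 vs. Micro F1 for multi-label classification (toy example).
# y_true / y_pred are binary indicator matrices of shape (n_samples, n_labels).
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 1, 1]])

macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-label F1
micro_f1 = f1_score(y_true, y_pred, average="micro")  # F1 over all pooled label decisions
print(f"Macro F1 = {macro_f1:.3f}, Micro F1 = {micro_f1:.3f}")
```

Macro F1 treats every label equally and is therefore sensitive to rare labels, while Micro F1 is dominated by frequent labels; the gap between the two is one way the papers' unbalanced document-label distributions show up in the scores.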
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent vanishing gradients and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve the classification performance. On the brain glioma risk grading dataset, the results of the ablation and comparison experiments show that the proposed HMAC-Net performs best in both the qualitative analysis of heat maps and the quantitative analysis of evaluation indicators. On a skin cancer classification dataset, the generalization experiment results show that the proposed HMAC-Net generalizes well.
Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs face challenges in describing long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multi-scale fusion and a transformer network for hyperspectral image classification. Firstly, low-level spatial-spectral features are extracted by a multi-scale residual structure. Secondly, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land cover classification accuracy of hyperspectral images.
●AIM: To establish a classification for congenital cataracts that can facilitate individualized treatment and help identify individuals with a high likelihood of different visual outcomes.
●METHODS: Consecutive patients diagnosed with congenital cataracts and undergoing surgery between January 2005 and November 2021 were recruited. Data on visual outcomes and the phenotypic characteristics of ocular biometry and the anterior and posterior segments were extracted from the patients' medical records. A hierarchical cluster analysis was performed. The main outcome measure was the identification of distinct clusters of eyes with congenital cataracts.
●RESULTS: A total of 164 children (299 eyes) were divided into two clusters based on their ocular features. Cluster 1 (96 eyes) had a shorter axial length (mean±SD, 19.44±1.68 mm), a low prevalence of macular abnormalities (1.04%), and no retinal abnormalities or posterior cataracts. Cluster 2 (203 eyes) had a greater axial length (mean±SD, 20.42±2.10 mm) and a higher prevalence of macular abnormalities (8.37%), retinal abnormalities (98.52%), and posterior cataracts (4.93%). Compared with the eyes in Cluster 2 (57.14%), those in Cluster 1 (71.88%) had a 2.2 times higher chance of good best-corrected visual acuity [<0.7 logMAR; OR (95%CI), 2.20 (1.25–3.81); P=0.006].
●CONCLUSION: This retrospective study categorizes congenital cataracts into two distinct clusters, each associated with a different likelihood of visual outcomes. This innovative classification may enable the personalization and prioritization of early interventions for patients who may gain the greatest benefit, thereby making strides toward precision medicine in the field of congenital cataracts.
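A minimal sketch of the hierarchical clustering step underlying the above classification, assuming one row per eye with the ocular features named in the abstract; the toy values, the scaling choice, and the Ward linkage are assumptions, not the study's exact protocol.

```python
# Hierarchical (agglomerative) clustering of eyes into two clusters, as a rough analogue
# of the analysis described above. Rows and preprocessing choices are illustrative only.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

eyes = pd.DataFrame({
    "axial_length_mm":     [19.1, 19.6, 20.8, 21.3],
    "macular_abnormality": [0, 0, 1, 0],
    "retinal_abnormality": [0, 0, 1, 1],
    "posterior_cataract":  [0, 0, 0, 1],
})  # toy rows; the study analyzed 299 eyes

X = StandardScaler().fit_transform(eyes.values)
eyes["cluster"] = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)
print(eyes.groupby("cluster")["axial_length_mm"].mean())  # per-cluster mean axial length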
Locomotor intent classification has become a research hotspot due to its importance to the development of assistive robotics and wearable devices. Previous works have achieved impressive performance in classifying steady locomotion states. However, it remains challenging for these methods to attain high accuracy when facing transitions between steady locomotion states, due to the similarities between the information of the transitions and that of their adjacent steady states. Furthermore, most of these methods rely solely on data and overlook the objective laws governing physical activities, resulting in lower accuracy, particularly when encountering complex locomotion modes such as transitions. To address these deficiencies, we propose the locomotion rule embedding long short-term memory (LSTM) network with attention (LREAL) for human locomotor intent classification, with a particular focus on transitions, using data from fewer sensors (two inertial measurement units and four goniometers). The LREAL network consists of two levels: one responsible for distinguishing between steady states and transitions, and the other for the accurate identification of locomotor intent. Each classifier in these levels is composed of multiple LSTM layers and an attention mechanism. To introduce real-world motion rules and apply constraints to the network, prior knowledge was added to the network via a rule-modulating block. The method was tested on the ENABL3S dataset, which contains continuous locomotion data for seven steady states and twelve transition states. Experimental results showed that the LREAL network could recognize locomotor intents with an average accuracy of 99.03% and 96.52% for the steady and transition states, respectively. It is worth noting that the LREAL network's accuracy for transition-state recognition improved by 0.18% compared to other state-of-the-art networks, while using data from fewer sensors.
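A sketch of one classifier level in the spirit described above (stacked LSTM layers followed by an attention mechanism over the time axis); the window length, channel count, layer widths, and the use of Keras multi-head self-attention are assumptions, and the paper's rule-modulating block is not reproduced.

```python
# One LREAL-style classifier level: stacked LSTMs with self-attention pooling over time.
# Shapes and layer sizes are illustrative; the rule-modulating block is omitted.
import tensorflow as tf

def build_intent_classifier(window_len=100, n_channels=10, n_classes=19):
    inp = tf.keras.Input(shape=(window_len, n_channels))   # IMU + goniometer channels
    x = tf.keras.layers.LSTM(64, return_sequences=True)(inp)
    x = tf.keras.layers.LSTM(64, return_sequences=True)(x)
    attn = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)(x, x)
    x = tf.keras.layers.GlobalAveragePooling1D()(attn)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)  # 7 steady + 12 transition classes
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

build_intent_classifier().summary()
```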
Bitcoin is widely used as the most classic electronic currency for various electronic services such as exchanges, gambling, and marketplaces, as well as scams such as high-yield investment projects. Identifying the services operated by a Bitcoin address can help determine the risk level of that address and build an alert model accordingly. Feature engineering can also be used to flesh out labeled addresses and to analyze the current state of Bitcoin in a small way. In this paper, we address the problem of identifying multiple classes of Bitcoin services, and, for individual addresses that lack significant features and are therefore poorly classified, we propose a Bitcoin address identification scheme based on joint multi-model prediction using the mapping relationship between addresses and entities. The innovations of the method are to (1) extract as many valuable features as possible when an address is given, to facilitate the multi-class service identification task, and (2) unlike the general supervised-model approach, propose a joint prediction scheme for multiple learners based on address-entity mapping relationships. Specifically, after obtaining the overall features, the address classification and entity clustering tasks are performed separately, and the results are subjected to graph-based maximization consensus. The final result is made to baseline the individual address classification results while satisfying, as far as possible, the constraint that addresses within the same entity behave similarly. By testing and evaluating over 26,000 Bitcoin addresses, we show that our feature extraction method captures more useful features. In addition, the combined multi-learner model obtained results that exceeded the baseline classifier, reaching an accuracy of 77.4%.
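A simplified sketch of how per-address predictions can be reconciled with address-entity clusters: within each entity, addresses adopt the majority predicted service class. This plain majority vote is only a stand-in for the paper's graph-based maximization consensus, and the addresses, entities, and class names are hypothetical.

```python
# Entity-level consensus over per-address service predictions (simplified majority vote).
from collections import Counter

def entity_consensus(address_preds, entity_of):
    """address_preds: {address: predicted class}; entity_of: {address: entity id}."""
    by_entity = {}
    for addr, cls in address_preds.items():
        by_entity.setdefault(entity_of[addr], []).append(cls)
    majority = {e: Counter(c).most_common(1)[0][0] for e, c in by_entity.items()}
    return {addr: majority[entity_of[addr]] for addr in address_preds}

preds = {"a1": "exchange", "a2": "gambling", "a3": "exchange"}   # hypothetical addresses
entities = {"a1": "e1", "a2": "e1", "a3": "e1"}                  # all map to one entity
print(entity_consensus(preds, entities))   # all three resolve to 'exchange'
```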
Phononic crystals, as artificial composite materials, have sparked significant interest due to their novel characteristics that emerge upon the introduction of nonlinearity. Among these properties, second-harmonic features exhibit potential applications in acoustic frequency conversion, non-reciprocal wave propagation, and non-destructive testing. Precisely manipulating the harmonic band structure presents a major challenge in the design of nonlinear phononic crystals. Traditional design approaches based on parameter adjustments to meet specific application requirements are inefficient and often yield suboptimal performance. Therefore, this paper develops a design methodology using Softmax logistic regression and multi-label classification learning to inversely design the material distribution of nonlinear phononic crystals by exploiting information from harmonic transmission spectra. The results demonstrate that the neural network-based inverse design method can effectively tailor nonlinear phononic crystals with desired functionalities. This work establishes a mapping relationship between the band structure and the material distribution within phononic crystals, providing valuable insights into the inverse design of metamaterials.
Effective development and utilization of wood resources is critical. Wood modification research has become an integral dimension of wood science research; however, the similarities between modified wood and original wood make accurate identification and classification challenging for conventional image classification techniques. The development of efficient and accurate wood classification techniques is therefore essential. This paper presents a one-dimensional convolutional neural network (BACNN) that combines near-infrared spectroscopy and deep learning techniques to classify poplar, tung, and balsa woods, as well as PVA, nano-silica-sol and PVA-nano-silica-sol modified woods of poplar. The results show that BACNN achieves an accuracy of 99.3% on the test set, higher than the 52.9% of the BP neural network and the 98.7% of the support vector machine among traditional machine learning methods, and also higher than the 97.6% of LeNet, 98.7% of AlexNet, and 99.1% of VGGNet-11 among deep learning-based methods. The proposed classification method therefore offers potential applications in wood classification, especially for homogeneous modified wood, and provides a basis for subsequent studies of wood properties.
The direction-of-arrival (DoA) estimation is one of the hot research areas in signal processing. To overcome the DoA estimation challenge without prior information about the number of signal sources and the number of multipath components in a millimeter wave system, the multi-task deep residual shrinkage network (MTDRSN) and transfer learning-based convolutional neural network (TCNN), collectively named MDTCNet, are proposed. The sampling covariance matrix based on the received signal is used as the input to the proposed network. A DRSN-based multi-task classification model is first introduced to estimate the number of signal sources and the number of multipath components simultaneously. Then, the DoAs with multiple signals and multipath are estimated by the regression model. The proposed CNN is applied for DoA estimation with the predicted numbers of signal sources and paths. Furthermore, model-based transfer learning is also introduced into the regression model. The TCNN inherits part of the network parameters of the already formed optimization model obtained by the CNN. A series of experimental results shows that the MDTCNet-based DoA estimation method can accurately predict the number of signal sources and the number of multipath components under a range of signal-to-noise ratios. Remarkably, the proposed method achieves a lower root mean square error compared with some existing deep learning-based and traditional methods.
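A short sketch of how the network input described above, the sampling covariance matrix of the received array signal, is typically formed; the array size, snapshot count, and the real/imaginary channel stacking are illustrative assumptions.

```python
# Forming the sampling covariance matrix of a received array signal (toy example).
import numpy as np

def sample_covariance(X):
    """X: complex array of shape (n_antennas, n_snapshots) -> (n_antennas, n_antennas)."""
    n_snapshots = X.shape[1]
    return X @ X.conj().T / n_snapshots

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 256)) + 1j * rng.standard_normal((8, 256))  # 8 antennas, 256 snapshots
R = sample_covariance(X)
# Real and imaginary parts can be stacked as two channels before feeding a CNN.
net_input = np.stack([R.real, R.imag], axis=-1)
print(net_input.shape)  # (8, 8, 2)
```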
As digital technologies have advanced more rapidly, the number of paper documents recently converted into a digital format has increased exponentially. To respond to the urgent need to categorize the growing number of digitized documents, the classification of digitized documents in real time has been identified as the primary goal of our study. Paper classification is the first stage in automating document control and efficient knowledge discovery with little or no human involvement. Artificial intelligence methods such as deep learning are now combined with segmentation to study and interpret traits that were not conceivable ten years ago. Deep learning aids in comprehending input patterns so that object classes may be predicted. The segmentation process divides the input image into separate segments for a more thorough image study. This study proposes a deep learning-enabled framework for automated document classification, which can be implemented in higher education. To further this goal, a dataset was developed that includes seven categories: Diplomas, Personal documents, Journal of Accounting of higher education diplomas, Service letters, Orders, Production orders, and Student orders. Subsequently, a deep learning model based on Conv2D layers is proposed for the document classification process. In the final part of this research, the proposed model is evaluated and compared with other machine learning techniques. The results demonstrate that the proposed deep learning model performs well in document categorization, overtaking the other machine learning models by reaching 94.84%, 94.79%, 94.62%, 94.43%, and 94.07% in accuracy, precision, recall, F-score, and AUC-ROC, respectively. The achieved results prove that the proposed deep model is acceptable for use in practice as an assistant to an office worker.
Hyperspectral (HS) image classification plays a crucial role in numerous areas including remote sensing (RS), agriculture, and the monitoring of the environment. Optimal band selection in HS images is crucial for improving the efficiency and accuracy of image classification. This process involves selecting the most informative spectral bands, which leads to a reduction in data volume. Focusing on these key bands also enhances the accuracy of classification algorithms, as redundant or irrelevant bands, which can introduce noise and lower model performance, are excluded. In this paper, we propose an approach for HS image classification using deep Q learning (DQL) and a novel multi-objective binary grey wolf optimizer (MOBGWO). We investigate the MOBGWO for optimal band selection to further enhance the accuracy of HS image classification. In the suggested MOBGWO, a new sigmoid function is introduced as a transfer function to modify the wolves' position. The primary objective of this classification is to reduce the number of bands while maximizing classification accuracy. To evaluate the effectiveness of our approach, we conducted experiments on publicly available HS image datasets, including the Pavia University, Washington Mall, and Indian Pines datasets. We compared the performance of our proposed method with several state-of-the-art deep learning (DL) and machine learning (ML) algorithms, including long short-term memory (LSTM), deep neural network (DNN), recurrent neural network (RNN), support vector machine (SVM), and random forest (RF). Our experimental results demonstrate that the hybrid MOBGWO-DQL significantly improves classification accuracy compared to traditional optimization and DL techniques. MOBGWO-DQL shows greater accuracy in classifying most categories in both datasets used. For the Indian Pines dataset, the MOBGWO-DQL architecture achieved a kappa coefficient (KC) of 97.68% and an overall accuracy (OA) of 94.32%. This was accompanied by the lowest root mean square error (RMSE) of 0.94, indicating very precise predictions with minimal error. In the case of the Pavia University dataset, the MOBGWO-DQL model demonstrated outstanding performance with the highest KC of 98.72% and an impressive OA of 96.01%. It also recorded the lowest RMSE at 0.63, reinforcing its accuracy in predictions. The results clearly demonstrate that the proposed MOBGWO-DQL architecture not only reaches a highly accurate model more quickly but also maintains superior performance throughout the training process.
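A brief sketch of the role a sigmoid transfer function plays in a binary grey wolf optimizer: it maps each dimension of a wolf's continuous position update to a probability of keeping the corresponding spectral band. The standard logistic form is used below as an illustration; the paper's modified sigmoid may differ.

```python
# Sigmoid transfer function turning a continuous wolf position into a binary band mask.
import numpy as np

rng = np.random.default_rng(42)

def binarize_position(continuous_position):
    prob = 1.0 / (1.0 + np.exp(-continuous_position))    # map each dimension to (0, 1)
    return (rng.random(prob.shape) < prob).astype(int)    # 1 = band kept, 0 = band dropped

position = rng.normal(size=200)        # one wolf, 200 candidate spectral bands (illustrative)
band_mask = binarize_position(position)
print("bands selected:", band_mask.sum())
```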
Objective: To explore the feasibility of remotely obtaining complex information on traditional Chinese medicine (TCM) pulse conditions through voice signals. Methods: We used multi-label pulse conditions as the entry point and modeled and analyzed TCM pulse diagnosis by combining voice analysis and machine learning. Audio features were extracted from voice recordings in the TCM pulse condition dataset. The obtained features were combined with information from tongue and facial diagnoses. A multi-label pulse condition voice classification DNN model was built using 10-fold cross-validation, and the modeling methods were validated using publicly available datasets. Results: The analysis showed that the proposed method achieved an accuracy of 92.59% on the public dataset. The accuracies of the three single-label pulse manifestation models on the test set were 94.27%, 96.35%, and 95.39%. The absolute accuracy of the multi-label model was 92.74%. Conclusion: Voice data analysis may serve as a remote adjunct to the TCM diagnostic method for pulse condition assessment.
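A minimal sketch of the 10-fold cross-validation protocol and the "absolute accuracy" (exact-match) metric mentioned above, with a generic multi-label classifier standing in for the paper's DNN; the feature and label arrays are toy data.

```python
# 10-fold cross-validation of a multi-label classifier with exact-match ("absolute") accuracy.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((200, 40))                       # e.g., audio features per recording (toy)
Y = (rng.random((200, 3)) > 0.5).astype(int)    # three pulse-condition labels (toy)

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    clf = MultiOutputClassifier(LogisticRegression(max_iter=1000))
    clf.fit(X[train_idx], Y[train_idx])
    # accuracy_score on indicator matrices = subset accuracy: every label must be correct.
    scores.append(accuracy_score(Y[test_idx], clf.predict(X[test_idx])))
print(f"mean absolute accuracy over 10 folds: {np.mean(scores):.3f}")
```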
The classification of infrasound events is of considerable importance for improving the capability to identify types of natural disasters. Traditional infrasound classification mainly relies on machine learning algorithms applied after hand-crafted feature extraction; however, guaranteeing the effectiveness of the extracted features is difficult. The current trend focuses on using a convolutional neural network to extract features automatically for classification. This method can extract spatial features of the signal automatically through convolution kernels; however, infrasound signals contain not only spatial information but also temporal information when treated as a time series, and these temporal features are also crucial. If only a convolutional neural network is used, the time dependence of the infrasound sequence will be missed, whereas using long short-term memory networks alone can compensate for the missing time-series features but loses the spatial feature information of the infrasound signal. A multiscale squeeze excitation–convolutional neural network–bidirectional long short-term memory network infrasound event classification fusion model is proposed in this study to address these problems. This model automatically extracts temporal and spatial features, adaptively selects features, and realizes the fusion of the two types of features. Experimental results showed that the classification accuracy of the model was more than 98%, verifying the effectiveness and superiority of the proposed model.
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
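A sketch of a depthwise dilated separable convolution block in the sense described above: a dilated depthwise convolution widens the receptive field, and a 1x1 pointwise convolution mixes channels. The kernel size, dilation rate, filter count, and normalization/activation choices are illustrative assumptions, not the paper's exact DDSC layer.

```python
# Depthwise dilated separable convolution block (illustrative configuration).
import tensorflow as tf

def ddsc_block(x, filters, dilation_rate=2):
    x = tf.keras.layers.DepthwiseConv2D(kernel_size=3, padding="same",
                                        dilation_rate=dilation_rate, use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(filters, kernel_size=1, use_bias=False)(x)  # pointwise mixing
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

inp = tf.keras.Input(shape=(32, 32, 3))          # e.g., a CIFAR-10 image
out = ddsc_block(inp, filters=64, dilation_rate=2)
print(tf.keras.Model(inp, out).output_shape)      # (None, 32, 32, 64)
```

Compared with a standard 3x3 convolution, the depthwise + pointwise factorization cuts parameters and multiply-adds, and the dilation enlarges the receptive field without adding parameters, which is the trade-off the abstract describes.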
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods still need to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. Magnetic Resonance Imaging is a vital component of medical diagnosis, and it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training the models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from the MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
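A minimal sketch of the feature-fusion idea: deep features from pretrained VGG16 and ResNet50 backbones are concatenated and fed to a softmax classifier. The input size, the frozen backbones, and the single dense head are assumptions for illustration; image preprocessing is assumed to be handled beforehand, and the deep belief network branch mentioned above is omitted.

```python
# Fusing VGG16 and ResNet50 deep features for classification (illustrative sketch).
import tensorflow as tf

def build_fusion_model(n_classes=4, input_shape=(224, 224, 3)):
    inp = tf.keras.Input(shape=input_shape)          # images assumed already preprocessed
    vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet", pooling="avg")
    resnet = tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
    vgg.trainable = False                             # both backbones used as frozen feature extractors
    resnet.trainable = False
    fused = tf.keras.layers.Concatenate()([vgg(inp), resnet(inp)])   # 512 + 2048 features
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(fused)
    return tf.keras.Model(inp, out)

model = build_fusion_model()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```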
Thoracic diseases pose significant risks to an individual's chest health and are among the most perilous medical diseases. They can impact either one or both lungs, which leads to a severe impairment of a person's ability to breathe normally. Some notable examples of such diseases encompass pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during the diagnostic process. Traditionally, the primary methods employed for detection involve the use of X-ray imaging or computed tomography (CT) scans. Nevertheless, due to the scarcity of proficient radiologists and the inherent similarities between these diseases, the accuracy of detection can be compromised, leading to imprecise or erroneous results. To address this challenge, scientists have turned to computer-based solutions, aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, utilizing single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. Consequently, this research adopts a multi-task deep learning approach that leverages CNNs to achieve improved classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. To assess the effectiveness of the trained model, it is rigorously validated using three different real-world datasets for training and testing.
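A sketch of hard parameter sharing as contrasted above: a single shared CNN backbone feeds two task-specific output heads (pneumonia and COVID-19), so both tasks update the same backbone weights. The layer sizes, input shape, and binary-output heads are illustrative assumptions, not the study's exact architecture.

```python
# Hard parameter sharing: one shared backbone, two task-specific heads.
import tensorflow as tf

def build_multitask_model(input_shape=(224, 224, 1)):
    inp = tf.keras.Input(shape=input_shape)
    # Shared backbone: both tasks reuse these convolutional weights.
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    # Task-specific heads.
    pneumonia = tf.keras.layers.Dense(1, activation="sigmoid", name="pneumonia")(x)
    covid = tf.keras.layers.Dense(1, activation="sigmoid", name="covid19")(x)
    model = tf.keras.Model(inp, [pneumonia, covid])
    model.compile(optimizer="adam",
                  loss={"pneumonia": "binary_crossentropy", "covid19": "binary_crossentropy"})
    return model

build_multitask_model().summary()
```

In soft parameter sharing, by contrast, each task keeps its own backbone and the backbones are only encouraged to stay similar through regularization, which the sketch above does not attempt to show.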
Recently, there have been some attempts to apply Transformers to 3D point cloud classification. In order to reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based), clustering the sampled points with similar features into the same class and computing the self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method exhibits 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. The source code of this paper is available at https://github.com/yahuiliu99/PointConT.
When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance causes the trained classification model to favor the majority class (usually defined as the negative class), which may harm the accuracy of the minority class (usually defined as the positive class) and then lead to poor overall performance of the model. A method called MSHR-FCSSVM for solving imbalanced data classification is proposed in this article, which is based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its silhouette value calculated by the Mahalanobis distance between samples; based on this, the so-called pseudo-negative samples are screened out to generate new positive samples (over-sampling step) through linear interpolation and are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with the generated new positive samples one by one to clear up the inter-class overlap on the borderline, without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM. It simultaneously considers the influences of both the imbalance in sample number and the class distribution on classification, and it finely tunes the class cost weights using an efficient optimization algorithm based on the physical phenomenon of rime ice (the RIME algorithm), with cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments is carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced datasets. The experimental results show that the MSHR-FCSSVM method performs better than the comparison methods in most cases, and both the MSHR and the FCSSVM play significant roles.
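A minimal sketch of the cost-sensitive SVM idea: misclassifying the minority (positive) class is made more expensive through class weights. The fixed weights below stand in for the RIME-tuned fine weights, the toy data stands in for the benchmark datasets, and the MSHR resampling step is not reproduced.

```python
# Cost-sensitive SVM on imbalanced toy data: heavier cost for minority-class errors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.1).astype(int)          # ~10% positive: imbalanced toy labels
X[y == 1] += 1.0                                   # shift positives so they are learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 9.0})   # illustrative fixed cost weights
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```

Setting `class_weight="balanced"` would derive the weights from the class frequencies automatically; the paper's contribution is to tune such weights finely with the RIME optimizer rather than fixing them by rule.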
The complex sand-casting process, combined with the interactions between process parameters, makes it difficult to control casting quality, resulting in a high scrap rate. A strategy based on a data-driven model was proposed to reduce casting defects and improve production efficiency; it includes a random forest (RF) classification model, feature importance analysis, and process parameter optimization with Monte Carlo simulation. The collected data, which includes four types of defects and the corresponding process parameters, was used to construct the RF model. Classification results show a recall rate above 90% for all categories. The Gini index was used to assess the importance of the process parameters in the formation of various defects in the RF model. Finally, the classification model was applied to different production conditions for quality prediction. In the case of process parameter optimization for gas porosity defects, this model serves as an experimental process in the Monte Carlo method to estimate a better temperature distribution. The prediction model, when applied in the factory, greatly improved the efficiency of defect detection. Results show that the scrap rate decreased from 10.16% to 6.68%.
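A minimal sketch of the defect-classification and Gini-importance steps described above: a random forest is fit on process parameters and its impurity-based importances are ranked. The column names, toy data, and the single binary defect label are illustrative assumptions, not the paper's dataset.

```python
# Random forest defect classifier with Gini-based feature importance ranking (toy data).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
params = pd.DataFrame({
    "pouring_temperature": rng.normal(1400, 20, 500),   # hypothetical process parameters
    "pouring_time":        rng.normal(12, 2, 500),
    "sand_moisture":       rng.normal(3.5, 0.4, 500),
})
defect = (params["pouring_temperature"] < 1390).astype(int)   # toy gas-porosity label

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(params, defect)
importances = pd.Series(rf.feature_importances_, index=params.columns).sort_values(ascending=False)
print(importances)   # Gini importance ranking of the process parameters
```

In the strategy described above, the fitted classifier would then be evaluated inside a Monte Carlo loop over sampled process-parameter settings to find regions with a low predicted defect probability.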
文摘Intrusion detection is a predominant task that monitors and protects the network infrastructure.Therefore,many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection.In particular,the Network Security Laboratory-Knowledge Discovery in Databases(NSL-KDD)is an extensively used benchmark dataset for evaluating intrusion detection systems(IDSs)as it incorporates various network traffic attacks.It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models,but the performance of these models often decreases when evaluated on new attacks.This has led to the utilization of deep learning techniques,which have showcased significant potential for processing large datasets and therefore improving detection accuracy.For that reason,this paper focuses on the role of stacking deep learning models,including convolution neural network(CNN)and deep neural network(DNN)for improving the intrusion detection rate of the NSL-KDD dataset.Each base model is trained on the NSL-KDD dataset to extract significant features.Once the base models have been trained,the stacking process proceeds to the second stage,where a simple meta-model has been trained on the predictions generated from the proposed base models.The combination of the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate.Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection.The performance of the ensemble of base models,combined with the meta-model,exceeds the performance of individual models.Our stacking model has attained an accuracy of 99%and an average F1-score of 93%for the multi-classification scenario.Besides,the training time of the proposed ensemble model is lower than the training time of benchmark techniques,demonstrating its efficiency and robustness.
基金the Natural Science Foundation of China(Grant Numbers 72074014 and 72004012).
文摘Purpose:Many science,technology and innovation(STI)resources are attached with several different labels.To assign automatically the resulting labels to an interested instance,many approaches with good performance on the benchmark datasets have been proposed for multi-label classification task in the literature.Furthermore,several open-source tools implementing these approaches have also been developed.However,the characteristics of real-world multi-label patent and publication datasets are not completely in line with those of benchmark ones.Therefore,the main purpose of this paper is to evaluate comprehensively seven multi-label classification methods on real-world datasets.Research limitations:Three real-world datasets differ in the following aspects:statement,data quality,and purposes.Additionally,open-source tools designed for multi-label classification also have intrinsic differences in their approaches for data processing and feature selection,which in turn impacts the performance of a multi-label classification approach.In the near future,we will enhance experimental precision and reinforce the validity of conclusions by employing more rigorous control over variables through introducing expanded parameter settings.Practical implications:The observed Macro F1 and Micro F1 scores on real-world datasets typically fall short of those achieved on benchmark datasets,underscoring the complexity of real-world multi-label classification tasks.Approaches leveraging deep learning techniques offer promising solutions by accommodating the hierarchical relationships and interdependencies among labels.With ongoing enhancements in deep learning algorithms and large-scale models,it is expected that the efficacy of multi-label classification tasks will be significantly improved,reaching a level of practical utility in the foreseeable future.Originality/value:(1)Seven multi-label classification methods are comprehensively compared on three real-world datasets.(2)The TextCNN and TextRCNN models perform better on small-scale datasets with more complex hierarchical structure of labels and more balanced document-label distribution.(3)The MLkNN method works better on the larger-scale dataset with more unbalanced document-label distribution.
基金Major Program of National Natural Science Foundation of China(NSFC12292980,NSFC12292984)National Key R&D Program of China(2023YFA1009000,2023YFA1009004,2020YFA0712203,2020YFA0712201)+2 种基金Major Program of National Natural Science Foundation of China(NSFC12031016)Beijing Natural Science Foundation(BNSFZ210003)Department of Science,Technology and Information of the Ministry of Education(8091B042240).
文摘Gliomas have the highest mortality rate of all brain tumors.Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients’survival rates.This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network(HMAC-Net),which effectively combines global features and local features.The network framework consists of three parallel layers:The global feature extraction layer,the local feature extraction layer,and the multi-scale feature fusion layer.A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy.In the local feature extraction layer,a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices.In the multi-scale feature fusion layer,a channel fusion block combining convolutional attention mechanism and residual inverse multi-layer perceptron is proposed to prevent gradient disappearance and network degradation and improve feature representation capability.The double-branch iterative multi-scale classification block is used to improve the classification performance.On the brain glioma risk grading dataset,the results of the ablation experiment and comparison experiment show that the proposed HMAC-Net has the best performance in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators.On the dataset of skin cancer classification,the generalization experiment results show that the proposed HMAC-Net has a good generalization effect.
基金National Natural Science Foundation of China(No.62201457)Natural Science Foundation of Shaanxi Province(Nos.2022JQ-668,2022JQ-588)。
文摘Convolutional neural network(CNN)has excellent ability to model locally contextual information.However,CNNs face challenges for descripting long-range semantic features,which will lead to relatively low classification accuracy of hyperspectral images.To address this problem,this article proposes an algorithm based on multiscale fusion and transformer network for hyperspectral image classification.Firstly,the low-level spatial-spectral features are extracted by multi-scale residual structure.Secondly,an attention module is introduced to focus on the more important spatialspectral information.Finally,high-level semantic features are represented and learned by a token learner and an improved transformer encoder.The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images.The experimental results show that the proposed algorithm effectively improves the land cover classification accuracy of hyperspectral images.
基金Supported by the Municipal Government and School(Hospital)Joint Funding Programme of Guangzhou(No.2023A03J0174,No.2023A03J0188)the State Key Laboratories’Youth Program of China(No.83000-32030003).
文摘●AIM:To establish a classification for congenital cataracts that can facilitate individualized treatment and help identify individuals with a high likelihood of different visual outcomes.●METHODS:Consecutive patients diagnosed with congenital cataracts and undergoing surgery between January 2005 and November 2021 were recruited.Data on visual outcomes and the phenotypic characteristics of ocular biometry and the anterior and posterior segments were extracted from the patients’medical records.A hierarchical cluster analysis was performed.The main outcome measure was the identification of distinct clusters of eyes with congenital cataracts.●RESULTS:A total of 164 children(299 eyes)were divided into two clusters based on their ocular features.Cluster 1(96 eyes)had a shorter axial length(mean±SD,19.44±1.68 mm),a low prevalence of macular abnormalities(1.04%),and no retinal abnormalities or posterior cataracts.Cluster 2(203 eyes)had a greater axial length(mean±SD,20.42±2.10 mm)and a higher prevalence of macular abnormalities(8.37%),retinal abnormalities(98.52%),and posterior cataracts(4.93%).Compared with the eyes in Cluster 2(57.14%),those in Cluster 1(71.88%)had a 2.2 times higher chance of good best-corrected visual acuity[<0.7 logMAR;OR(95%CI),2.20(1.25–3.81);P=0.006].●CONCLUSION:This retrospective study categorizes congenital cataracts into two distinct clusters,each associated with a different likelihood of visual outcomes.This innovative classification may enable the personalization and prioritization of early interventions for patients who may gain the greatest benefit,thereby making strides toward precision medicine in the field of congenital cataracts.
基金funded by the National Natural Science Foundation of China(Nos.62072212,62302218)the Development Project of Jilin Province of China(Nos.20220508125RC,20230201065GX,20240101364JC)+1 种基金National Key R&D Program(No.2018YFC2001302)the Jilin Provincial Key Laboratory of Big Data Intelligent Cognition(No.20210504003GH).
文摘Locomotor intent classification has become a research hotspot due to its importance to the development of assistive robotics and wearable devices.Previous work have achieved impressive performance in classifying steady locomotion states.However,it remains challenging for these methods to attain high accuracy when facing transitions between steady locomotion states.Due to the similarities between the information of the transitions and their adjacent steady states.Furthermore,most of these methods rely solely on data and overlook the objective laws between physical activities,resulting in lower accuracy,particularly when encountering complex locomotion modes such as transitions.To address the existing deficiencies,we propose the locomotion rule embedding long short-term memory(LSTM)network with Attention(LREAL)for human locomotor intent classification,with a particular focus on transitions,using data from fewer sensors(two inertial measurement units and four goniometers).The LREAL network consists of two levels:One responsible for distinguishing between steady states and transitions,and the other for the accurate identification of locomotor intent.Each classifier in these levels is composed of multiple-LSTM layers and an attention mechanism.To introduce real-world motion rules and apply constraints to the network,a prior knowledge was added to the network via a rule-modulating block.The method was tested on the ENABL3S dataset,which contains continuous locomotion date for seven steady and twelve transitions states.Experimental results showed that the LREAL network could recognize locomotor intents with an average accuracy of 99.03%and 96.52%for the steady and transitions states,respectively.It is worth noting that the LREAL network accuracy for transition-state recognition improved by 0.18%compared to other state-of-the-art network,while using data from fewer sensors.
基金sponsored by the National Natural Science Foundation of China Nos.62172353,62302114 and U20B2046Future Network Scientific Research Fund Project No.FNSRFP-2021-YB-48Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of Ministry of Education No.1221045。
文摘Bitcoin is widely used as the most classic electronic currency for various electronic services such as exchanges,gambling,marketplaces,and also scams such as high-yield investment projects.Identifying the services operated by a Bitcoin address can help determine the risk level of that address and build an alert model accordingly.Feature engineering can also be used to flesh out labeled addresses and to analyze the current state of Bitcoin in a small way.In this paper,we address the problem of identifying multiple classes of Bitcoin services,and for the poor classification of individual addresses that do not have significant features,we propose a Bitcoin address identification scheme based on joint multi-model prediction using the mapping relationship between addresses and entities.The innovation of the method is to(1)Extract as many valuable features as possible when an address is given to facilitate the multi-class service identification task.(2)Unlike the general supervised model approach,this paper proposes a joint prediction scheme for multiple learners based on address-entity mapping relationships.Specifically,after obtaining the overall features,the address classification and entity clustering tasks are performed separately,and the results are subjected to graph-basedmaximization consensus.The final result ismade to baseline the individual address classification results while satisfying the constraint of having similarly behaving entities as far as possible.By testing and evaluating over 26,000 Bitcoin addresses,our feature extraction method captures more useful features.In addition,the combined multi-learner model obtained results that exceeded the baseline classifier reaching an accuracy of 77.4%.
基金supported by the National Key Research and Development Program of China(Grant No.2020YFA0211400)the State Key Program of the National Natural Science of China(Grant No.11834008)+2 种基金the National Natural Science Foundation of China(Grant Nos.12174192,12174188,and 11974176)the State Key Laboratory of Acoustics,Chinese Academy of Sciences(Grant No.SKLA202410)the Fund from the Key Laboratory of Underwater Acoustic Environment,Chinese Academy of Sciences(Grant No.SSHJ-KFKT-1701).
文摘Phononic crystals,as artificial composite materials,have sparked significant interest due to their novel characteristics that emerge upon the introduction of nonlinearity.Among these properties,second-harmonic features exhibit potential applications in acoustic frequency conversion,non-reciprocal wave propagation,and non-destructive testing.Precisely manipulating the harmonic band structure presents a major challenge in the design of nonlinear phononic crystals.Traditional design approaches based on parameter adjustments to meet specific application requirements are inefficient and often yield suboptimal performance.Therefore,this paper develops a design methodology using Softmax logistic regression and multi-label classification learning to inversely design the material distribution of nonlinear phononic crystals by exploiting information from harmonic transmission spectra.The results demonstrate that the neural network-based inverse design method can effectively tailor nonlinear phononic crystals with desired functionalities.This work establishes a mapping relationship between the band structure and the material distribution within phononic crystals,providing valuable insights into the inverse design of metamaterials.
基金This study was supported by the Fundamental Research Funds for the Central Universities(No.2572023DJ02).
文摘Effective development and utilization of wood resources is critical.Wood modification research has become an integral dimension of wood science research,however,the similarities between modified wood and original wood render it challenging for accurate identification and classification using conventional image classification techniques.So,the development of efficient and accurate wood classification techniques is inevitable.This paper presents a one-dimensional,convolutional neural network(i.e.,BACNN)that combines near-infrared spectroscopy and deep learning techniques to classify poplar,tung,and balsa woods,and PVA,nano-silica-sol and PVA-nano silica sol modified woods of poplar.The results show that BACNN achieves an accuracy of 99.3%on the test set,higher than the 52.9%of the BP neural network and 98.7%of Support Vector Machine compared with traditional machine learning methods and deep learning based methods;it is also higher than the 97.6%of LeNet,98.7%of AlexNet and 99.1%of VGGNet-11.Therefore,the classification method proposed offers potential applications in wood classification,especially with homogeneous modified wood,and it also provides a basis for subsequent wood properties studies.
基金funded by Beijing University of Posts and Telecommunications-China Mobile Research Institute Joint Innovation Center。
文摘The direction-of-arrival(DoA) estimation is one of the hot research areas in signal processing. To overcome the DoA estimation challenge without the prior information about signal sources number and multipath number in millimeter wave system,the multi-task deep residual shrinkage network(MTDRSN) and transfer learning-based convolutional neural network(TCNN), namely MDTCNet, are proposed. The sampling covariance matrix based on the received signal is used as the input to the proposed network. A DRSN-based multi-task classifications model is first introduced to estimate signal sources number and multipath number simultaneously. Then, the DoAs with multi-signal and multipath are estimated by the regression model. The proposed CNN is applied for DoAs estimation with the predicted number of signal sources and paths. Furthermore, the modelbased transfer learning is also introduced into the regression model. The TCNN inherits the partial network parameters of the already formed optimization model obtained by the CNN. A series of experimental results show that the MDTCNet-based DoAs estimation method can accurately predict the signal sources number and multipath number under a range of signal-to-noise ratios. Remarkably, the proposed method achieves the lower root mean square error compared with some existing deep learning-based and traditional methods.
文摘As digital technologies have advanced more rapidly,the number of paper documents recently converted into a digital format has exponentially increased.To respond to the urgent need to categorize the growing number of digitized documents,the classification of digitized documents in real time has been identified as the primary goal of our study.A paper classification is the first stage in automating document control and efficient knowledge discovery with no or little human involvement.Artificial intelligence methods such as Deep Learning are now combined with segmentation to study and interpret those traits,which were not conceivable ten years ago.Deep learning aids in comprehending input patterns so that object classes may be predicted.The segmentation process divides the input image into separate segments for a more thorough image study.This study proposes a deep learning-enabled framework for automated document classification,which can be implemented in higher education.To further this goal,a dataset was developed that includes seven categories:Diplomas,Personal documents,Journal of Accounting of higher education diplomas,Service letters,Orders,Production orders,and Student orders.Subsequently,a deep learning model based on Conv2D layers is proposed for the document classification process.In the final part of this research,the proposed model is evaluated and compared with other machine-learning techniques.The results demonstrate that the proposed deep learning model shows high results in document categorization overtaking the other machine learning models by reaching 94.84%,94.79%,94.62%,94.43%,94.07%in accuracy,precision,recall,F-score,and AUC-ROC,respectively.The achieved results prove that the proposed deep model is acceptable to use in practice as an assistant to an office worker.
Abstract: Hyperspectral (HS) image classification plays a crucial role in numerous areas including remote sensing (RS), agriculture, and environmental monitoring. Optimal band selection in HS images is crucial for improving the efficiency and accuracy of image classification. This process selects the most informative spectral bands, which reduces data volume; focusing on these key bands also improves classification accuracy, since redundant or irrelevant bands that introduce noise and lower model performance are excluded. In this paper, we propose an approach for HS image classification using deep Q-learning (DQL) and a novel multi-objective binary grey wolf optimizer (MOBGWO). We investigate the MOBGWO for optimal band selection to further enhance the accuracy of HS image classification. In the suggested MOBGWO, a new sigmoid function is introduced as a transfer function to modify the wolves' positions. The primary objective is to reduce the number of bands while maximizing classification accuracy. To evaluate the effectiveness of our approach, we conducted experiments on publicly available HS image datasets, including the Pavia University, Washington Mall, and Indian Pines datasets. We compared the performance of our proposed method with several state-of-the-art deep learning (DL) and machine learning (ML) algorithms, including long short-term memory (LSTM), deep neural network (DNN), recurrent neural network (RNN), support vector machine (SVM), and random forest (RF). Our experimental results demonstrate that the hybrid MOBGWO-DQL significantly improves classification accuracy compared to traditional optimization and DL techniques, showing greater accuracy in classifying most categories on both reported datasets. For the Indian Pines dataset, the MOBGWO-DQL architecture achieved a kappa coefficient (KC) of 97.68% and an overall accuracy (OA) of 94.32%, accompanied by the lowest root mean square error (RMSE) of 0.94, indicating very precise predictions with minimal error. On the Pavia University dataset, the MOBGWO-DQL model demonstrated outstanding performance with the highest KC of 98.72% and an impressive OA of 96.01%, and it also recorded the lowest RMSE at 0.63, reinforcing its predictive accuracy. The results clearly demonstrate that the proposed MOBGWO-DQL architecture not only converges to a highly accurate model more quickly but also maintains superior performance throughout the training process.
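The abstract describes a sigmoid transfer function used to map the continuous wolf positions to a binary band-selection mask. The sketch below shows the standard S-shaped transfer step commonly used in binary metaheuristics; the specific novel sigmoid proposed in the paper is not reproduced, and the steepness parameter and band count are assumptions.

```python
import numpy as np

def binary_position_update(continuous_step: np.ndarray,
                           rng: np.random.Generator,
                           steepness: float = 1.0) -> np.ndarray:
    """Map a continuous grey-wolf step to a binary band-selection mask.

    A sigmoid transfer function converts each component of the continuous update
    into a selection probability, and a Bernoulli draw decides whether the band
    is kept (1) or dropped (0).
    """
    prob = 1.0 / (1.0 + np.exp(-steepness * continuous_step))
    return (rng.random(prob.shape) < prob).astype(int)

rng = np.random.default_rng(0)
step = rng.normal(size=103)                  # e.g. 103 hypothetical candidate bands
mask = binary_position_update(step, rng)
print(mask.sum(), "bands selected out of", mask.size)
```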
Funding: Supported by the Fundamental Research Funds of the Beijing University of Chinese Medicine (2023-JYB-KYPT-13) and the Developmental Fund of the Beijing University of Chinese Medicine (2020-ZXFZJJ-083).
Abstract: Objective: To explore the feasibility of remotely obtaining complex information on traditional Chinese medicine (TCM) pulse conditions through voice signals. Methods: We used multi-label pulse conditions as the entry point and modeled and analyzed TCM pulse diagnosis by combining voice analysis and machine learning. Audio features were extracted from voice recordings in the TCM pulse condition dataset, and the obtained features were combined with information from tongue and facial diagnoses. A multi-label pulse condition voice classification DNN model was built using 10-fold cross-validation, and the modeling approach was validated on publicly available datasets. Results: The proposed method achieved an accuracy of 92.59% on the public dataset. The accuracies of the three single-label pulse manifestation models on the test set were 94.27%, 96.35%, and 95.39%, and the absolute accuracy of the multi-label model was 92.74%. Conclusion: Voice data analysis may serve as a remote adjunct to the TCM diagnostic method for pulse condition assessment.
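A minimal sketch of a multi-label DNN of the kind described is given below, with one sigmoid-activated output per pulse condition. The input dimensionality, hidden-layer sizes, and three-label output are assumptions; the actual feature set (audio plus tongue/facial diagnosis information) and network configuration are not reproduced from the paper.

```python
import torch
import torch.nn as nn

class PulseMultiLabelDNN(nn.Module):
    """Illustrative multi-label classifier: one output logit per pulse condition."""
    def __init__(self, num_features: int = 128, num_labels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_labels),
        )

    def forward(self, x):
        return self.net(x)                    # raw logits, one per label

model = PulseMultiLabelDNN()
criterion = nn.BCEWithLogitsLoss()            # standard loss for multi-label targets
x = torch.randn(16, 128)                      # toy batch of feature vectors
y = torch.randint(0, 2, (16, 3)).float()      # toy multi-label targets
print(criterion(model(x), y).item())
```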
Funding: Supported by the Shaanxi Province Natural Science Basic Research Plan Project (2023-JC-YB-244).
Abstract: The classification of infrasound events is important for improving the ability to identify types of natural disasters. Traditional infrasound classification relies mainly on machine learning algorithms applied to hand-crafted features, but guaranteeing the effectiveness of the extracted features is difficult. The current trend is to use a convolutional neural network to extract features automatically through convolution kernels; however, infrasound signals, as time series, contain temporal as well as spatial information, and these temporal features are equally crucial. Using only a convolutional neural network misses the time dependence of the infrasound sequence, while using only long short-term memory networks recovers the time-series features at the cost of losing spatial feature information. To address these problems, this study proposes a fusion model for infrasound event classification that combines a multiscale squeeze-excitation module, a convolutional neural network, and a bidirectional long short-term memory network. The model automatically extracts temporal and spatial features, adaptively selects features, and fuses the two types of features. Experimental results show a classification accuracy of more than 98%, verifying the effectiveness and superiority of the proposed model.
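The following is a simplified sketch of the CNN-plus-BiLSTM fusion idea: a convolutional branch captures local patterns, a bidirectional LSTM models temporal dependence, and the two feature vectors are concatenated before classification. The multiscale squeeze-excitation blocks are omitted, and all sizes, the four-class output, and the window length are assumptions.

```python
import torch
import torch.nn as nn

class CNNBiLSTMFusion(nn.Module):
    """Simplified CNN + BiLSTM fusion for 1D infrasound windows (didactic sketch)."""
    def __init__(self, num_classes: int = 4, seq_len: int = 512):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1)
        )
        self.bilstm = nn.LSTM(input_size=1, hidden_size=32,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(16 + 64, num_classes)

    def forward(self, x):                           # x: (batch, 1, seq_len)
        spatial = self.cnn(x).squeeze(-1)           # (batch, 16)
        _, (h, _) = self.bilstm(x.transpose(1, 2))  # h: (2, batch, 32)
        temporal = torch.cat([h[0], h[1]], dim=1)   # (batch, 64)
        return self.head(torch.cat([spatial, temporal], dim=1))

print(CNNBiLSTMFusion()(torch.randn(2, 1, 512)).shape)   # torch.Size([2, 4])
```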
Abstract: Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on MobileNetV1 is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, resulting in a lightweight and computationally inexpensive network; the dilated depthwise convolution in the DDSC layer also effectively expands the receptive field of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract its multi-scale feature information. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance relative to the MobileNetV1 baseline.
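A minimal sketch of a depthwise dilated separable convolution block of the kind the abstract describes is shown below: a dilated depthwise convolution enlarges the receptive field without extra parameters, followed by a pointwise convolution that mixes channels. The dilation rate, channel counts, and normalization/activation choices are assumptions, not the authors' exact DDSC layer.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Sketch of a DDSC-style block: dilated depthwise conv followed by 1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseDilatedSeparableConv(32, 64, dilation=2)
print(block(torch.randn(1, 32, 56, 56)).shape)   # torch.Size([1, 64, 56, 56])
```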
Funding: Ministry of Education, Youth and Sports of the Czech Republic, Grant/Award Numbers: SP2023/039, SP2023/042; the European Union under the REFRESH project, Grant/Award Number: CZ.10.03.01/00/22_003/0000048.
Abstract: Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. Accurate detection and segmentation of brain tumours would be highly beneficial, but despite the numerous available approaches, current methods have yet to fully solve this problem. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics; MRI is a vital component of medical diagnosis and requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Because DL models require large amounts of training data to achieve good results, data augmentation techniques were used to increase the dataset size for training. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images, with softmax used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
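As a minimal sketch of the two-backbone feature-fusion idea described above, the block below concatenates pooled VGG16 and ResNet50 features before a final classifier. The class count, pooling, and fusion rule are assumptions, and the deep-belief-network branch mentioned in the abstract is omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    """Illustrative two-backbone feature fusion for MRI brain-tumour classification."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.vgg = models.vgg16(weights=None).features            # (B, 512, 7, 7)
        self.resnet = nn.Sequential(                               # drop the FC head
            *list(models.resnet50(weights=None).children())[:-1]   # (B, 2048, 1, 1)
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512 + 2048, num_classes)

    def forward(self, x):                                          # x: (B, 3, 224, 224)
        f1 = self.pool(self.vgg(x)).flatten(1)                     # VGG16 features
        f2 = self.resnet(x).flatten(1)                             # ResNet50 features
        return self.head(torch.cat([f1, f2], dim=1))               # fused prediction

print(FusionClassifier()(torch.randn(1, 3, 224, 224)).shape)       # torch.Size([1, 2])
```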
Abstract: Thoracic diseases pose significant risks to an individual's chest health and are among the most perilous medical conditions. They can affect one or both lungs, severely impairing a person's ability to breathe normally. Notable examples include pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during diagnosis. Traditionally, detection relies on X-ray imaging or computed tomography (CT) scans; however, owing to the scarcity of proficient radiologists and the inherent similarities between these diseases, detection accuracy can be compromised, leading to imprecise or erroneous results. To address this challenge, researchers have turned to computer-based solutions aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, using single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. This research therefore adopts a multi-task deep learning approach that leverages CNNs to improve classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. To assess the effectiveness of the trained model, it is rigorously validated using three different real-world datasets for training and testing.
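A minimal sketch of a hard-parameter-sharing multi-task CNN of the kind described follows: one convolutional trunk is shared by two task-specific heads (here, a pneumonia head and a COVID-19 head). All layer sizes, the grayscale 224x224 input, and the binary class counts are assumptions.

```python
import torch
import torch.nn as nn

class HardSharingMultiTaskCNN(nn.Module):
    """Minimal hard-parameter-sharing multi-task CNN sketch (not the authors' model)."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(                       # trunk shared by both tasks
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.pneumonia_head = nn.Linear(32, 2)             # normal vs. pneumonia
        self.covid_head = nn.Linear(32, 2)                 # normal vs. COVID-19

    def forward(self, x):
        z = self.shared(x)
        return self.pneumonia_head(z), self.covid_head(z)

out1, out2 = HardSharingMultiTaskCNN()(torch.randn(4, 1, 224, 224))
print(out1.shape, out2.shape)                              # (4, 2) for each task
```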
Funding: Supported in part by the National Natural Science Foundation of China (61876011), the National Key Research and Development Program of China (2022YFB4703700), the Key Research and Development Program 2020 of Guangzhou (202007050002), and the Key-Area Research and Development Program of Guangdong Province (2020B090921003).
Abstract: Recently, there have been some attempts to apply Transformers to 3D point cloud classification. To reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome this limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based), clustering sampled points with similar features into the same class and computing self-attention within each class, which enables an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in separate branches. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification; in particular, it reaches 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. The source code of this paper is available at https://github.com/yahuiliu99/PointConT.
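To make the content-based attention idea concrete, the toy sketch below groups points by a crude feature-space assignment and computes self-attention only within each group, so attention cost scales with cluster size rather than with the full point count. This is a didactic simplification under assumed shapes and a naive centroid initialization; it is not the PointConT implementation (see the repository linked above for that).

```python
import torch
import torch.nn as nn

def content_based_self_attention(feats: torch.Tensor, num_clusters: int = 4):
    """Toy illustration of content-based (feature-space) local attention."""
    n, d = feats.shape
    attn = nn.MultiheadAttention(embed_dim=d, num_heads=1, batch_first=True)
    centroids = feats[torch.randperm(n)[:num_clusters]]          # naive centroid init
    assign = torch.cdist(feats, centroids).argmin(dim=1)          # nearest centroid
    out = torch.zeros_like(feats)
    for c in range(num_clusters):
        idx = (assign == c).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        group = feats[idx].unsqueeze(0)                           # (1, m, d)
        attended, _ = attn(group, group, group)                   # attention within cluster
        out[idx] = attended.squeeze(0)
    return out

print(content_based_self_attention(torch.randn(128, 32)).shape)  # torch.Size([128, 32])
```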
Funding: Supported by the Yunnan Major Scientific and Technological Projects (Grant No. 202302AD080001) and the National Natural Science Foundation of China (No. 52065033).
Abstract: When building a classification model, the scenario in which the samples of one class greatly outnumber those of the other class is called data imbalance. Data imbalance biases the trained classification model toward the majority class (usually defined as the negative class), which may harm accuracy on the minority class (usually defined as the positive class) and thus lead to poor overall performance. This article proposes a method for imbalanced data classification called MSHR-FCSSVM, which is based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its Silhouette value, computed from the Mahalanobis distance between samples; on this basis, so-called pseudo-negative samples are screened out and used to generate new positive samples through linear interpolation (the over-sampling step) before finally being deleted (the under-sampling step). The approach replaces pseudo-negative samples with newly generated positive samples one by one to clear the inter-class overlap on the borderline without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM. It simultaneously considers the influence of both the sample-number imbalance and the class distribution on classification, and it finely tunes the class cost weights with the rime-ice (RIME) optimization algorithm, using cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments is carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that MSHR-FCSSVM outperforms the comparison methods in most cases, and that both the MSHR and the FCSSVM play significant roles.
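Two of the building blocks named above, the Mahalanobis distance and linear-interpolation over-sampling, are sketched below on toy data. This is a SMOTE-like simplification under assumed data shapes; the screening of pseudo-negative samples via Silhouette values and the one-by-one replacement scheme of the actual MSHR procedure are not reproduced.

```python
import numpy as np

def mahalanobis(x: np.ndarray, y: np.ndarray, cov_inv: np.ndarray) -> float:
    """Mahalanobis distance between two samples, given the inverse covariance."""
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

def interpolate_positive(p1: np.ndarray, p2: np.ndarray,
                         rng: np.random.Generator) -> np.ndarray:
    """Generate a synthetic positive sample on the segment between two positives
    (the linear-interpolation over-sampling step, simplified)."""
    lam = rng.random()
    return p1 + lam * (p2 - p1)

rng = np.random.default_rng(42)
positives = rng.normal(loc=1.0, size=(20, 5))                 # toy minority class
cov_inv = np.linalg.inv(np.cov(positives, rowvar=False))
print(mahalanobis(positives[0], positives[1], cov_inv))
print(interpolate_positive(positives[0], positives[1], rng))
```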
Funding: Financially supported by the National Key Research and Development Program of China (2022YFB3706800, 2020YFB1710100) and the National Natural Science Foundation of China (51821001, 52090042, 52074183).
Abstract: The complexity of the sand-casting process and the interactions between process parameters make casting quality difficult to control, resulting in a high scrap rate. A strategy based on a data-driven model is proposed to reduce casting defects and improve production efficiency; it comprises a random forest (RF) classification model, feature importance analysis, and process parameter optimization with Monte Carlo simulation. The collected data, covering four types of defects and the corresponding process parameters, were used to construct the RF model. Classification results show a recall rate above 90% for all categories. The Gini index was used to assess the importance of the process parameters in the formation of the various defects in the RF model. Finally, the classification model was applied to different production conditions for quality prediction. For the optimization of process parameters against gas porosity defects, the model serves as the evaluation step within the Monte Carlo method to estimate a better temperature distribution. When deployed in the factory, the prediction model greatly improved the efficiency of defect detection, and the scrap rate decreased from 10.16% to 6.68%.
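The sketch below illustrates the RF-plus-Gini-importance workflow described above on synthetic data. The feature names, data, and binary defect label are all hypothetical stand-ins; only the overall workflow (fit a random forest, then inspect Gini-based importances) mirrors the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the casting dataset: rows are castings, columns are process
# parameters, the label marks a defect. Names and values are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

# Gini-based importances indicate which process parameters drive the defect.
feature_names = ["T_pour", "moisture", "permeability",
                 "compactability", "binder", "mold_hardness"]   # hypothetical names
for name, score in zip(feature_names, rf.feature_importances_):
    print(f"{name:15s} {score:.3f}")
```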