Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is in-context learning, which involves the ability to receive instructions in natural language or task demonstrations and generate expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, we confirm that the evaluated models perform well in hate-text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, whereas the fine-tuned approach tends to produce many false positives.
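As a rough illustration of the zero-/few-shot setup described above, the sketch below assembles a handful of labelled demonstrations into a prompt and lets any text-generation function produce the label. The template, the example texts, and the `query_llm` callable are illustrative placeholders, not the prompts or models evaluated in the study.

```python
# Minimal sketch of few-shot (in-context) classification for sexism detection.
# The prompt template, labels, and `query_llm` are placeholders, not the
# prompts or models used in the paper.

FEW_SHOT_EXAMPLES = [
    ("Women don't belong in engineering.", "sexist"),
    ("The conference keynote was excellent this year.", "not sexist"),
]

def build_prompt(text: str) -> str:
    """Assemble an instruction plus labelled demonstrations for in-context learning."""
    lines = ["Classify each text as 'sexist' or 'not sexist'.", ""]
    for example, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Text: {example}\nLabel: {label}\n")
    lines.append(f"Text: {text}\nLabel:")
    return "\n".join(lines)

def classify(text: str, query_llm) -> str:
    """`query_llm` is any callable that maps a prompt string to a text completion."""
    return query_llm(build_prompt(text)).strip().lower()
```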
This paper focuses on the task of few-shot 3D point cloud semantic segmentation. Despite some progress, this task still encounters many issues due to the insufficient samples given, e.g., incomplete object segmentation and inaccurate semantic discrimination. To tackle these issues, we first introduce part-whole relationships into the task of 3D point cloud semantic segmentation to capture semantic integrity, empowered by dynamic capsule routing with a 3D Capsule Network (CapsNet) module in the embedding network. Concretely, the dynamic routing amalgamates geometric information of the 3D point cloud data to construct higher-level feature representations, which capture the relationships between object parts and their wholes. Secondly, we design a multi-prototype enhancement module to enhance prototype discriminability. Specifically, the single-prototype enhancement mechanism is expanded to a multi-prototype version for capturing rich semantics. Besides, the shot correlation within a category is calculated via the interaction of different samples to enhance intra-category similarity. Ablation studies prove that the involved part-whole relations and the proposed multi-prototype enhancement module help to achieve complete object segmentation and improve semantic discrimination. Moreover, with these two modules integrated, quantitative and qualitative experiments on two public benchmarks, S3DIS and ScanNet, indicate the superior performance of the proposed framework on few-shot 3D point cloud semantic segmentation compared to state-of-the-art methods.
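For readers unfamiliar with capsule routing, the following is a minimal sketch of routing-by-agreement as introduced for capsule networks; the tensor shapes and iteration count are generic assumptions, not the configuration of the paper's 3D CapsNet module.

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Non-linear squashing so capsule vector lengths lie in [0, 1)."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement over prediction vectors.
    u_hat: (batch, in_caps, out_caps, dim) predictions from lower-level capsules."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(num_iters):
        c = F.softmax(b, dim=2).unsqueeze(-1)                # coupling coefficients
        v = squash((c * u_hat).sum(dim=1), dim=-1)           # (batch, out_caps, dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
    return v

# Toy call: 32 input capsules routed to 8 output capsules of dimension 16.
v = dynamic_routing(torch.randn(2, 32, 8, 16))
print(v.shape)   # torch.Size([2, 8, 16])
```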
Due to the limited computational capability and the diversity of Internet of Things devices working in different environments, we consider few-shot learning-based automatic modulation classification (AMC) to improve its reliability. A data enhancement module (DEM) is designed with a convolutional layer to supplement frequency-domain information and to provide a nonlinear mapping that is beneficial for AMC. A multimodal network is designed with multiple residual blocks, where each residual block has multiple convolutional kernels of different sizes for diverse feature extraction. Moreover, a deeply supervised loss function is designed to supervise all parts of the network, including the hidden layers and the DEM. Since different models may output different results, a cooperative classifier is designed to avoid the randomness of a single model and improve reliability. Simulation results show that this few-shot learning-based AMC method can significantly improve AMC accuracy compared to existing methods.
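The sketch below shows one plausible reading of "multiple convolutional kernels of different sizes" inside a residual block, applied to 1-D I/Q signals; the kernel sizes, channel counts, and fusion layer are assumptions for illustration rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiKernelResBlock(nn.Module):
    """Residual block with parallel 1-D convolutions of different kernel sizes
    (sizes are assumptions, not the paper's configuration)."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(channels, channels, k, padding=k // 2),
                nn.BatchNorm1d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        self.fuse = nn.Conv1d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x):
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(x + self.fuse(y))   # residual connection

# Example: a batch of 8 two-channel (I/Q) signals of length 128.
block = MultiKernelResBlock(channels=2)
print(block(torch.randn(8, 2, 128)).shape)    # torch.Size([8, 2, 128])
```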
In order to improve the model's capability to express features during few-shot learning, a multi-scale features prototypical network (MS-PN) algorithm is proposed. A metric-learning algorithm is employed to extract image features and project them into a feature space, thus evaluating the similarity between samples based on their relative distances within the metric space. To sufficiently extract feature information from limited sample data and mitigate the impact of constrained data volume, a multi-scale feature extraction network is presented to capture data features at various scales during image feature extraction. Additionally, the position of the prototype is fine-tuned by assigning weights to data points to mitigate the influence of outliers on the experiment. The loss function integrates contrastive loss and label smoothing to bring similar data points closer and separate dissimilar data points within the metric space. Experimental evaluations are conducted on the small-sample datasets mini-ImageNet and CUB200-2011. The proposed method achieves higher classification accuracy: in the 5-way 1-shot experiment, classification accuracy reaches 50.13% and 66.79% on these two datasets, respectively, and in the 5-way 5-shot experiment, accuracies of 66.79% and 85.91% are observed, respectively.
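At the core of any prototypical network is the episode-level step of averaging support embeddings into class prototypes and scoring queries by distance. The minimal sketch below shows that step in isolation; the embedding network, multi-scale features, and prototype re-weighting described in the paper are omitted.

```python
import torch
import torch.nn.functional as F

def prototypical_loss(support_emb, support_y, query_emb, query_y, n_way):
    """Classify queries by distance to class prototypes (mean support embeddings).
    support_emb: (n_way*k_shot, d), query_emb: (n_query, d); labels in [0, n_way)."""
    prototypes = torch.stack(
        [support_emb[support_y == c].mean(dim=0) for c in range(n_way)]
    )                                              # (n_way, d)
    dists = torch.cdist(query_emb, prototypes)     # (n_query, n_way) Euclidean distances
    logits = -dists                                # closer prototype -> higher score
    return F.cross_entropy(logits, query_y)

# 5-way 1-shot toy episode with 64-dimensional embeddings.
loss = prototypical_loss(torch.randn(5, 64), torch.arange(5),
                         torch.randn(15, 64), torch.randint(0, 5, (15,)), n_way=5)
```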
Few-shot semantic segmentation aims at training a model that can segment novel classes in a query image with only a few densely annotated support exemplars. It remains a challenge because of large intra-class variations between the support and query images. Existing approaches utilize 4D convolutions to mine semantic correspondence between the support and query images; however, they still suffer from heavy computation, sparse correspondence, and large memory. We propose the axial assembled correspondence network (AACNet) to alleviate these issues. The key point of AACNet is the proposed axial assembled 4D kernel, which constructs the basic block of the semantic correspondence encoder (SCE). Furthermore, we propose deblurring equations to provide more robust correspondence for the aforementioned SCE and design a novel fusion module to mix correspondences in a learnable manner. Experiments on PASCAL-5^i reveal that our AACNet achieves a mean intersection-over-union score of 65.9% for 1-shot segmentation and 70.6% for 5-shot segmentation, surpassing the state-of-the-art method by 5.8% and 5.0%, respectively.
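The 4D correspondence volume such methods operate on is simply the pairwise similarity between every query location and every support location. Below is a hedged sketch of building that volume (cosine similarity via einsum); the feature sizes are arbitrary and the axial 4D kernels and deblurring steps of AACNet are not reproduced.

```python
import torch
import torch.nn.functional as F

def correlation_4d(query_feat, support_feat):
    """Dense cosine correlation between every query and support location.
    query_feat, support_feat: (B, C, H, W) -> correlation volume (B, H, W, H, W)."""
    q = F.normalize(query_feat, dim=1)
    s = F.normalize(support_feat, dim=1)
    corr = torch.einsum("bchw,bcpq->bhwpq", q, s)
    return corr.clamp(min=0)          # keep only positive matches, a common choice

corr = correlation_4d(torch.randn(2, 256, 13, 13), torch.randn(2, 256, 13, 13))
print(corr.shape)                     # torch.Size([2, 13, 13, 13, 13])
```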
The accurate and intelligent identification of the working conditions of a sucker-rod pumping system is necessary. As onshore oil extraction gradually enters its mid-to-late stage, the cost of obtaining enough new working-condition samples to train a deep learning working-condition recognition model for pumping wells is expensive. To address the few-shot problem and the large computation involved in recognizing new working conditions of oil wells, a working-condition recognition method for pumping-unit wells based on a 4-dimensional time-frequency signature (4D-TFS) and a meta-learning convolutional shrinkage neural network (ML-CSNN) is proposed. First, the measured pumping-unit well workup data are converted into 4D-TFS data, and the initial feature extraction task is performed while compressing the data. Subsequently, a convolutional shrinkage neural network (CSNN) with a specific structure that can ablate low-frequency features is designed to extract working-condition features. Finally, a meta-learning fine-tuning framework for learning the network parameters that are susceptible to task changes is merged into the CSNN to solve the few-shot issue. The results of the experiments demonstrate that the trained ML-CSNN has good recognition accuracy and generalization ability for few-shot working-condition recognition. More specifically, at lower computational complexity, only few-shot samples are needed to fine-tune the network parameters, and the model can be quickly adapted to new classes of well conditions.
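The "shrinkage" in a convolutional shrinkage network is typically implemented as learned soft thresholding that zeroes out low-magnitude responses. The block below is a generic sketch of that idea (per-channel thresholds from a small gating network); it is assumed here for illustration and is not taken from the paper.

```python
import torch
import torch.nn as nn

class SoftThreshold(nn.Module):
    """Channel-wise soft thresholding: responses with magnitude below a learned
    threshold are shrunk to zero (a generic sketch of the shrinkage idea)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        mean_abs = x.abs().mean(dim=(2, 3))     # (B, C) per-channel magnitude
        tau = mean_abs * self.gate(mean_abs)    # adaptive threshold per channel
        tau = tau.unsqueeze(-1).unsqueeze(-1)
        return torch.sign(x) * torch.relu(x.abs() - tau)

print(SoftThreshold(16)(torch.randn(4, 16, 32, 32)).shape)   # torch.Size([4, 16, 32, 32])
```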
Traditional object detectors based on deep learning rely on plenty of labeled samples, which are expensive to obtain. Few-shot object detection (FSOD) attempts to solve this problem by learning to detect objects from a few labeled samples, but the performance is often unsatisfactory due to the scarcity of samples. We believe that the main reasons restricting the performance of few-shot detectors are that (1) positive samples are scarce, and (2) the quality of positive samples is low. Therefore, we put forward a novel few-shot object detector based on YOLOv4, starting from improving both the quantity and quality of positive samples. First, we design a hybrid multivariate positive sample augmentation (HMPSA) module to amplify the quantity of positive samples and increase positive sample diversity while suppressing negative samples. Then, we design a selective non-local fusion attention (SNFA) module to help the detector better learn the target features and improve the feature quality of positive samples. Finally, we optimize the loss function to make it more suitable for the task of FSOD. Experimental results on PASCAL VOC and MS COCO demonstrate that our few-shot object detector has competitive performance with other state-of-the-art detectors.
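Non-local fusion attention builds on the generic non-local (self-attention) block, which lets every spatial position attend to every other position. The sketch below implements only that generic block, not the selective fusion variant proposed in the paper.

```python
import torch
import torch.nn as nn

class NonLocalAttention(nn.Module):
    """Generic non-local (self-attention) block over spatial positions;
    a sketch of the attention idea, not the SNFA module itself."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv2d(channels, reduced, 1)
        self.phi = nn.Conv2d(channels, reduced, 1)
        self.g = nn.Conv2d(channels, reduced, 1)
        self.out = nn.Conv2d(reduced, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)      # (B, HW, C')
        k = self.phi(x).flatten(2)                        # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)          # (B, HW, C')
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                            # residual connection

print(NonLocalAttention(64)(torch.randn(2, 64, 16, 16)).shape)   # torch.Size([2, 64, 16, 16])
```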
Objective: To develop a few-shot learning (FSL) approach for classifying optical coherence tomography (OCT) images in patients with inherited retinal disorders (IRDs). Methods: In this study, an FSL model based on a student–teacher learning framework was designed to classify images. 2,317 images from 189 participants were included. Of these, 1,126 images revealed IRDs, 533 were normal samples, and 658 were control samples. Results: The FSL model achieved a total accuracy of 0.974–0.983, total sensitivity of 0.934–0.957, total specificity of 0.984–0.990, and total F1 score of 0.935–0.957, which were superior to the baseline model's total accuracy of 0.943–0.954, total sensitivity of 0.866–0.886, total specificity of 0.962–0.971, and total F1 score of 0.859–0.885. The performance of most subclassifications also exhibited advantages. Moreover, the FSL model had a higher area under the receiver operating characteristic (ROC) curve (AUC) in most subclassifications. Conclusion: This study demonstrates the effective use of the FSL model for classifying OCT images from patients with IRDs, normal participants, and control participants with a smaller volume of data. The general principle and similar network architectures can also be applied to other retinal diseases with a low prevalence.
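A common way to realize a student–teacher framework is knowledge distillation, where the student matches both the ground-truth labels and the teacher's softened predictions. The loss below is that standard formulation, offered as a sketch since the abstract does not spell out the exact objective used.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard student-teacher distillation: hard-label cross-entropy plus KL
    divergence between temperature-softened distributions (T and alpha are
    illustrative hyperparameters)."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft

loss = distillation_loss(torch.randn(8, 4), torch.randn(8, 4), torch.randint(0, 4, (8,)))
```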
Deep Convolutional Neural Networks (DCNNs) can capture discriminative features from large datasets. However, how to incrementally learn new samples without forgetting old ones and recognize novel classes that arise in the dynamically changing world, e.g., classifying newly discovered fish species, remains an open problem. We address an even more challenging and realistic setting of this problem where new class samples are insufficient, i.e., Few-Shot Class-Incremental Learning (FSCIL). Current FSCIL methods augment the training data to alleviate the overfitting of novel classes. By contrast, we propose Filter Bank Networks (FBNs) that augment the learnable filters to capture fine-detailed features for adapting to future new classes. In the forward pass, FBNs augment each convolutional filter to a virtual filter bank containing the canonical one, i.e., itself, and multiple transformed versions. During back-propagation, FBNs explicitly stimulate fine-detailed features to emerge and collectively align all gradients of each filter bank to learn the canonical one. FBNs capture pattern variants that do not yet exist in the pretraining session, thus making it easy to incorporate new classes in the incremental learning phase. Moreover, FBNs introduce model-level prior knowledge to efficiently utilize the limited few-shot data. Extensive experiments on the MNIST, CIFAR100, CUB200, and Mini-ImageNet datasets show that FBNs consistently outperform the baseline by a significant margin, reporting new state-of-the-art FSCIL results. In addition, we contribute a challenging FSCIL benchmark, Fishshot1K, which contains 8,261 underwater images covering 1,000 ocean fish species. The code is included in the supplementary materials.
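One simple way to picture a "virtual filter bank" is to generate transformed copies of each canonical filter on the fly. The sketch below uses 90-degree rotations purely as an example transform; this is an assumption of the illustration, not necessarily the transforms FBNs actually employ.

```python
import torch

def virtual_filter_bank(weight):
    """Expand a canonical conv filter into a bank of transformed versions
    (identity plus 90/180/270-degree rotations, chosen only for illustration).
    weight: (out_c, in_c, k, k) -> (4*out_c, in_c, k, k)."""
    bank = [torch.rot90(weight, r, dims=(2, 3)) for r in range(4)]
    return torch.cat(bank, dim=0)

w = torch.randn(16, 3, 3, 3)
print(virtual_filter_bank(w).shape)     # torch.Size([64, 3, 3, 3])
```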
Few-shot learning is becoming more and more popular in many fields, especially in computer vision. This inspires us to introduce few-shot learning to the genomic field, which faces a typical few-shot problem because some tasks only have a limited number of samples with high dimensions. The goal of this study was to investigate the few-shot disease subtype prediction problem and identify patient subgroups through training on small data. Accurate disease subtype classification allows clinicians to efficiently deliver investigations and interventions in clinical practice. We propose SW-Net, which simulates the clinical process of extracting shared knowledge from a range of interrelated tasks and generalizing it to unseen data. Our model is built upon a simple baseline, which we modified for genomic data. Support-based initialization of the classifier and transductive fine-tuning techniques were applied in our model to improve prediction accuracy, and an entropy regularization term on the query set was appended to reduce over-fitting. Moreover, to address the high-dimension and high-noise issues, we further extended the model with a feature selection module that adaptively selects important features and a sample weighting module that prioritizes high-confidence samples. Experiments on simulated data and The Cancer Genome Atlas meta-dataset show that our new baseline model achieves higher prediction accuracy than other competing algorithms.
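The entropy regularization term mentioned above is easy to state concretely: it is the mean Shannon entropy of the model's softmax predictions on the unlabeled query set, added to the supervised loss with a small weight (the 0.1 weight in the comment below is an arbitrary placeholder, not the paper's value).

```python
import torch
import torch.nn.functional as F

def entropy_regularizer(query_logits):
    """Mean Shannon entropy of query predictions; adding this to the loss pushes
    the classifier toward confident (low-entropy) outputs on unlabeled queries."""
    p = F.softmax(query_logits, dim=1)
    log_p = F.log_softmax(query_logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()

# total_loss = F.cross_entropy(support_logits, support_y) + 0.1 * entropy_regularizer(query_logits)
print(entropy_regularizer(torch.randn(10, 3)))
```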
At present, deep learning has been well applied in many fields. However, due to the high complexity of the hypothesis space, numerous training samples are usually required to ensure the reliability of empirical risk minimization. Therefore, training a classifier with a small number of training examples is a challenging task. From a biological point of view, based on the assumption that rich prior knowledge and analogical association enable human beings to quickly distinguish novel things from a few or even one example, we propose a dynamic analogical association algorithm that lets the model use only a few labeled samples for classification. To be specific, the algorithm searches prior knowledge for knowledge structures similar to existing tasks based on manifold matching, and combines sampling distributions to generate offsets instead of two sample points, thereby ensuring high confidence and a significant contribution to classification. Comparative results on two common benchmark datasets substantiate the superiority of the proposed method over existing data generation approaches for few-shot learning, and the effectiveness of the algorithm has been proved through ablation experiments.
This paper presents a novel approach to tire-pattern classification, aimed at conducting forensic analysis on tire marks discovered at crime scenes. The classification model proposed in this study accounts for the intricate and dynamic nature of tire prints found in real-world scenarios, including accident sites. To address this complexity, the classifier was developed to harness the meta-learning capabilities of few-shot learning algorithms (learning-to-learn). The model is meticulously designed and optimized to effectively classify both tire patterns exhibited on wheels and tire-indentation marks visible on surfaces due to friction. This is achieved by employing a semantic segmentation model to extract the tire-pattern marks within the image. These marks are subsequently used as a mask channel, combined with the original image, and fed into the classifier to perform classification. Overall, the proposed model follows a three-step process: (i) the Bilateral Segmentation Network is employed to derive the semantic segmentation of the tire pattern within a given image; (ii) using the semantic image in conjunction with the original image, the model learns and clusters groups to generate vectors that define the relative position of the image in the test set; (iii) the model performs predictions based on these learned features. Empirical verification demonstrates that using the semantic model to extract the tire patterns before classification increases overall classification accuracy by ~4%.
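The "mask channel" step reduces to concatenating the single-channel segmentation output with the RGB image before the first convolution. The snippet below sketches that wiring with placeholder tensor sizes and a stand-in first layer rather than the paper's classifier.

```python
import torch
import torch.nn as nn

# A sketch of feeding the segmented tire-pattern mask as an extra input channel:
# the RGB image and the single-channel mask are concatenated before classification.
image = torch.randn(1, 3, 224, 224)                      # original photo
mask = torch.randint(0, 2, (1, 1, 224, 224)).float()     # semantic segmentation output

x = torch.cat([image, mask], dim=1)                      # (1, 4, 224, 224)
stem = nn.Conv2d(4, 32, kernel_size=3, padding=1)        # first layer accepts 4 channels
print(stem(x).shape)                                     # torch.Size([1, 32, 224, 224])
```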
Machine learning, especially deep learning, has been highly successful in data-intensive applications; however, the performance of these models drops significantly when the amount of training data does not meet the requirement. This leads to the so-called few-shot learning (FSL) problem, which requires the model to rapidly generalize to new tasks that contain only a few labeled samples. In this paper, we propose a new deep model, called deep convolutional meta-learning networks, to address the low generalization performance under limited data for bearing fault diagnosis. The essence of our approach is to learn a base model from multiple learning tasks using a support dataset and fine-tune the learned parameters using few-shot tasks before it adapts to the new learning task based on limited training data. The proposed method was compared to several FSL methods, including methods with and without pre-training of the embedding mapping, and methods that fine-tune the classifier or the whole model using the few-shot data from the target domain. The comparisons are carried out on 1-shot and 10-shot tasks using the Case Western Reserve University bearing dataset and a cylindrical roller bearing dataset. The experimental results illustrate that our method performs well on bearing fault diagnosis across various few-shot conditions. In addition, we found that the pre-training process does not always improve prediction accuracy.
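The pre-train-then-adapt recipe described above can be sketched as a plain fine-tuning loop over the few labelled target-domain samples. The tiny model, the exposed `.head` attribute, and the optimizer settings below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def finetune_on_support(model, support_x, support_y, num_classes, steps=50, lr=1e-3):
    """Adapt a pretrained backbone to a new fault-diagnosis task from a handful of
    labelled samples (a generic sketch; assumes the model exposes a `.head` layer)."""
    model.head = nn.Linear(model.head.in_features, num_classes)   # fresh task head
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(support_x), support_y)
        loss.backward()
        opt.step()
    return model

class TinyNet(nn.Module):
    """Stand-in backbone for the sketch, not the paper's network."""
    def __init__(self, in_dim=64, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.head = nn.Linear(128, num_classes)
    def forward(self, x):
        return self.head(self.body(x))

# 10-shot adaptation to a 3-class task.
net = finetune_on_support(TinyNet(), torch.randn(30, 64),
                          torch.randint(0, 3, (30,)), num_classes=3)
```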
Almost all existing deep learning methods rely on a large amount of annotated data, so they are inappropriate for forest fire smoke detection with limited data. In this paper, a novel hybrid attention-based few-shot learning method, named Attention-Based Prototypical Network, is proposed for forest fire smoke detection. Specifically, the feature extraction network, which incorporates a convolutional block attention module, can extract high-level and discriminative features and further decrease the false alarm rate caused by suspected smoke areas. Moreover, we design a meta-learning module to alleviate the overfitting issue caused by limited smoke images, and the meta-learning network achieves effective detection by comparing the distance between the class prototype of support images and the features of query images. A series of experiments on forest fire smoke datasets and the miniImageNet dataset testify that the proposed method is superior to state-of-the-art few-shot learning approaches.
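The channel-attention half of a convolutional block attention module is shown below as a generic sketch: average- and max-pooled channel descriptors pass through a shared MLP and gate the feature map. The reduction ratio is an arbitrary choice here, not the paper's setting.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention in the style of a convolutional block attention module:
    a shared MLP over average- and max-pooled descriptors gates the channels."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                                   # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))                  # average-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))                   # max-pooled descriptor
        gate = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * gate

print(ChannelAttention(32)(torch.randn(2, 32, 28, 28)).shape)   # torch.Size([2, 32, 28, 28])
```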
Intrusion Detection Systems (IDSs) have attracted great interest these days for discovering complex attack events and protecting the critical infrastructures of Internet of Things (IoT) networks. Existing IDSs based on shallow and deep network architectures demand high computational resources and high volumes of data to establish an adaptive detection engine that discovers new families of attacks from the edge of IoT networks. However, attackers exploit network gateways at the edge using new attacking scenarios (i.e., zero-day attacks), such as ransomware and Distributed Denial of Service (DDoS) attacks. This paper proposes a new IDS based on few-shot deep learning, named CNN-IDS, which can automatically identify zero-day attacks from the edge of a network and protect its IoT systems. The proposed system comprises two methodological stages: 1) a filtered information gain method selects the most useful features from network data, and 2) a one-dimensional Convolutional Neural Network (CNN) algorithm recognizes new attack types from a network's edge. The proposed model is trained and validated using two datasets, UNSW-NB15 and Bot-IoT. The experimental results showed that it improves the detection rate by about 3% and the false-positive rate by around 3%–4% on the UNSW-NB15 dataset, and the detection rate by about 8% on the BoT-IoT dataset.
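Stage 1 is essentially a filter-based feature ranking. Below is a hedged sketch that uses mutual information (information gain) to keep the top-k flow features, with synthetic data and an arbitrary k standing in for the datasets and thresholds used in the paper.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Sketch of the first stage: rank network-flow features by mutual information
# with the attack label and keep the top k. Feature count and k are placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))            # 40 raw flow features (synthetic)
y = rng.integers(0, 2, size=1000)          # 0 = normal, 1 = attack

selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)   # (1000, 10) most informative features
print(X_reduced.shape, selector.get_support(indices=True))
```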
Numerous meta-learning methods focus on the few-shot learning issue, yet most of them assume that various tasks share an embedding space, so the generalization ability of the trained model is limited. To solve this problem, a task-adaptive meta-learning method based on a graph neural network (TAGN) is proposed in this paper, where the characterization ability of the original feature extraction network is improved and classification accuracy is remarkably increased. Firstly, a task-adaptation module based on the self-attention mechanism is employed, enhancing the generalization ability of the model on new tasks. Secondly, images are classified in a non-Euclidean domain, overcoming the poor adaptability of traditional distance functions. A large number of experiments are conducted, and the results show that the proposed methodology performs better than traditional task-independent classification methods on two real-world datasets.
Existing few-shot learning (FSL) approaches based on metric learning usually lack attention to the distinct contributions of individual features, and the importance of each sample is often ignored when obtaining the class representation, which limits model performance. Additionally, the similarity metric method is also worthy of attention. Therefore, a few-shot learning approach called MWNet, based on multi-attention fusion and weighted class representation (WCR), is proposed in this paper. Firstly, a multi-attention fusion module is introduced into the model to highlight the valuable parts of a feature and reduce the interference of irrelevant content. Then, when obtaining the class representation, a weight is given to each support-set sample, and the weighted class representation is used to better express the class. Moreover, a mutual similarity metric method is used to obtain a more accurate similarity relationship through the mutual similarity of each representation. Experiments prove that the proposed approach performs well on few-shot image classification and shows remarkable excellence and competitiveness compared with related advanced techniques.
Recent advances in OCR show that end-to-end (E2E) training pipelines including detection and recognition can achieve the best results. However, many existing methods usually focus on case-insensitive English characters. In this paper, we apply an E2E approach, the multiplex multilingual mask TextSpotter, which performs script recognition at the word level and uses different recognition heads to process different scripts while maintaining a uniform loss, thus optimizing script recognition and the multiple recognition heads simultaneously. Experiments show that this method is superior to a single-head model with a similar number of parameters on end-to-end recognition tasks.
Target recognition based on deep learning relies on a large quantity of samples, but in some specific remote sensing scenes, samples are very rare. Currently, few-shot learning can obtain high-performance target classification models using only a few samples, but most research is based on natural scenes. Therefore, this paper proposes a metric-based few-shot classification technology for remote sensing. First, we constructed a dataset (RSD-FSC) for few-shot classification in remote sensing, which contains 21 classes of typical target sample slices from remote sensing images. Second, based on metric learning, a k-nearest neighbor classification network is proposed to find multiple training samples similar to the testing target; the similarity between the testing target and the multiple similar samples is then calculated to classify the testing target. Finally, 5-way 1-shot, 5-way 5-shot and 5-way 10-shot experiments are conducted to improve the generalization of the model on few-shot classification tasks. The experimental results show that, for newly emerging classes with few-shot samples, when the number of training samples is 1, 5 and 10, the average accuracy of target recognition reaches 59.134%, 82.553% and 87.796%, respectively. This demonstrates that our proposed method can resolve few-shot classification in remote sensing images and performs better than other few-shot classification methods.
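The k-nearest-neighbor classification step reduces to a distance matrix plus a majority vote over the closest support samples. The generic sketch below uses Euclidean distance on embedding vectors, with dimensions and k chosen arbitrarily rather than taken from the paper.

```python
import torch

def knn_classify(query_emb, support_emb, support_y, k=5):
    """Label each query by majority vote over its k nearest support embeddings
    (Euclidean metric); a generic sketch of the k-NN classification stage."""
    dists = torch.cdist(query_emb, support_emb)           # (n_query, n_support)
    knn_idx = dists.topk(k, largest=False).indices        # indices of the k closest
    knn_labels = support_y[knn_idx]                       # (n_query, k)
    return knn_labels.mode(dim=1).values                  # majority vote

pred = knn_classify(torch.randn(6, 32), torch.randn(50, 32),
                    torch.randint(0, 5, (50,)), k=5)
print(pred.shape)   # torch.Size([6])
```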
Localizing discriminative object parts (e.g., a bird's head) is crucial for fine-grained classification tasks, especially for the more challenging fine-grained few-shot scenario. Previous work always relies on the learned object parts in a unified manner, attending to the same object parts (even with common attention weights) for different few-shot episodic tasks. In this paper, we propose to adaptively capture the task-specific object parts that require attention for each few-shot task, since the parts that can distinguish different tasks are naturally different. Specifically, for a few-shot task, after obtaining part-level deep features, we learn a task-specific part-based dictionary for both aligning and reweighting part features in an episode. Then, part-level categorical prototypes are generated based on the part features of the support data, which are later employed to classify query data by calculating distances. To retain the discriminative ability of the part-level representations (i.e., part features and part prototypes), we design an optimal transport solution that also utilizes query data in a transductive way to optimize the aforementioned distance calculation for the final predictions. Extensive experiments on five fine-grained benchmarks show the superiority of our method, especially for the 1-shot setting, gaining 0.12%, 8.56% and 5.87% improvements over state-of-the-art methods on CUB, Stanford Dogs, and Stanford Cars, respectively.
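Optimal transport problems of this kind are most often solved with entropic regularization and Sinkhorn iterations. The sketch below shows that generic solver on a random cost matrix with uniform marginals; it is not a reconstruction of the paper's specific transport formulation.

```python
import torch

def sinkhorn(cost, a, b, eps=0.1, iters=100):
    """Entropic-regularized optimal transport via Sinkhorn iterations.
    cost: (n, m) pairwise distances; a, b: marginal weights summing to 1."""
    K = torch.exp(-cost / eps)
    u = torch.ones_like(a)
    for _ in range(iters):
        u = a / (K @ (b / (K.t() @ u)))   # alternate scaling updates
    v = b / (K.t() @ u)
    return u.unsqueeze(1) * K * v.unsqueeze(0)   # transport plan (n, m)

n, m = 4, 6
plan = sinkhorn(torch.rand(n, m), torch.full((n,), 1 / n), torch.full((m,), 1 / m))
print(plan.sum())    # ~1, total mass is conserved
```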