Journal Articles
2 articles found
1. Self-Supervised Task Augmentation for Few-Shot Intent Detection (Cited by: 1)
Authors: Peng-Fei Sun, Ya-Wen Ouyang, Ding-Jie Song, Xin-Yu Dai
Journal of Computer Science & Technology (SCIE, EI, CSCD), 2022, No. 3, pp. 527-538 (12 pages)
Abstract: Few-shot intent detection is a practically challenging task, because new intents emerge frequently and collecting large-scale data for them can be costly. Meta-learning, a promising technique for leveraging data from previous tasks to enable efficient learning of new tasks, has been a popular way to tackle this problem. However, existing meta-learning models have been shown to overfit when the meta-training tasks are insufficient. To overcome this challenge, we present STAM, a novel self-supervised task augmentation framework with meta-learning. First, we introduce task augmentation, which explores two different strategies and combines them to extend the meta-training tasks. Second, we devise two auxiliary losses that integrate self-supervised learning into meta-learning, so as to learn more generalizable and transferable features. Experimental results show that STAM achieves consistent and considerable performance improvements over existing state-of-the-art methods on four datasets.
Keywords: self-supervised learning; task augmentation; meta-learning; few-shot intent detection
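The abstract describes extending the pool of meta-training tasks by generating augmented episodes, but does not name the two strategies here. Below is a minimal Python sketch of episode-level task augmentation; the two concrete strategies shown (label permutation and cross-episode class mixing) are illustrative assumptions, not the paper's actual method.

```python
import random

def sample_episode(data, n_way=5, k_shot=5):
    """Sample a basic N-way K-shot episode from {intent: [utterances]}."""
    intents = random.sample(list(data), n_way)
    return {intent: random.sample(data[intent], k_shot) for intent in intents}

def augment_by_label_permutation(episode):
    """Strategy A (hypothetical): re-assign support sets to shuffled labels."""
    labels = list(episode)
    shuffled = random.sample(labels, len(labels))
    return {new: episode[old] for new, old in zip(shuffled, labels)}

def augment_by_task_mixing(ep_a, ep_b):
    """Strategy B (hypothetical): build a new task from classes of two episodes."""
    n = len(ep_a)
    mixed = {intent: ep_a[intent] for intent in list(ep_a)[: n // 2]}
    for intent in ep_b:
        if len(mixed) == n:
            break
        mixed.setdefault(intent, ep_b[intent])  # skip classes already taken
    return mixed

def augmented_task_stream(data, n_tasks):
    """Interleave sampled episodes with their augmented variants."""
    for _ in range(n_tasks):
        ep_a, ep_b = sample_episode(data), sample_episode(data)
        yield ep_a
        yield augment_by_label_permutation(ep_a)
        yield augment_by_task_mixing(ep_a, ep_b)
```

The point of the sketch is only the interface: each meta-training step consumes one episode from the stream, so augmentation multiplies the number of distinct tasks without collecting new labeled data.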
2. Pre-Train and Learn: Preserving Global Information for Graph Neural Networks
Authors: Dan-Hao Zhu, Xin-Yu Dai, Jia-Jun Chen
Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, No. 6, pp. 1420-1430 (11 pages)
Abstract: Graph neural networks (GNNs) have shown great power in learning on graphs. However, it remains a challenge for GNNs to model information far away from the source node. The ability to preserve global information can enhance graph representations and hence improve classification precision. In this paper, we propose a new learning framework named G-GNN (Global information for GNN) to address this challenge. First, the global structure and global attribute features of each node are obtained via unsupervised pre-training; these features preserve the global information associated with the node. Then, using the pre-trained global features and the raw attributes of the graph, a set of parallel kernel GNNs is used to learn different aspects of these heterogeneous features. Any general GNN can serve as a kernel and readily gain the ability to preserve global information, without altering its own algorithm. Extensive experiments show that state-of-the-art models, e.g., GCN, GAT, GraphSAGE and APPNP, achieve improvements with G-GNN on three standard evaluation datasets. Specifically, we establish new benchmark precision records on Cora (84.31%) and Pubmed (80.95%) when learning on attributed graphs.
Keywords: graph neural network; network embedding; representation learning; global information; pre-train
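The abstract outlines feeding each feature view (raw attributes plus pre-trained global structure and global attribute features) to its own kernel GNN in parallel. Below is a minimal PyTorch sketch of that parallel-kernel composition; `GNNKernel` is a stand-in for any off-the-shelf GNN layer (GCN, GAT, ...), and the summation fusion and dense `adj @ x` propagation are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GNNKernel(nn.Module):
    """Placeholder kernel: one neighbor-aggregation + transform step."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Aggregate neighbor features via the (normalized) adjacency, then transform.
        return torch.relu(self.linear(adj @ x))

class GGNN(nn.Module):
    """Parallel kernels over heterogeneous feature views, fused for classification."""
    def __init__(self, view_dims, n_classes, hidden=64):
        super().__init__()
        # One kernel per view: raw attributes, global structure, global attributes.
        self.kernels = nn.ModuleList(GNNKernel(d, hidden) for d in view_dims)
        self.classify = nn.Linear(hidden, n_classes)

    def forward(self, views, adj):
        # Sum the per-view representations (fusion scheme is an assumption).
        h = sum(kernel(x, adj) for kernel, x in zip(self.kernels, views))
        return self.classify(h)
```

Here `views` would be a list of (N, d_i) node-feature tensors produced by the unsupervised pre-training stage, and `adj` a normalized (N, N) adjacency; swapping `GNNKernel` for a GCN or GAT layer leaves the framework unchanged, which is the plug-and-play property the abstract claims.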