Journal Articles — 2 articles found
1. Negation scope detection with a conditional random field model (Cited by: 1)
Authors: Lydia Lazib, Zhao Yanyan, Qin Bing, Liu Ting. High Technology Letters (EI, CAS), 2017, No. 2, pp. 191-197 (7 pages)
Identifying negation cues and their scope in a text is an important subtask of information extraction that can benefit other natural language processing tasks, including but not limited to medical data mining, relation extraction, question answering and sentiment analysis. The tasks of negation cue and negation scope detection can be treated as sequence labelling problems. In this paper, a system is presented with two components: negation cue detection and negation scope detection. In the first phase, a conditional random field (CRF) model is trained to detect the negation cues using a lexicon of negation words and some lexical and contextual features. Then, another CRF model is trained to detect the scope of each negation cue identified in the first phase, using basic lexical and contextual features. These two models are trained and tested using the dataset distributed within the *SEM Shared Task 2012 on resolving the scope and focus of negation. Experimental results show that the system outperformed all the systems submitted to this shared task.
Keywords: negation detection, negation cue detection, negation scope detection, natural language processing
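The two-stage pipeline described in the abstract maps naturally onto a pair of sequence labellers. Below is a minimal sketch using sklearn-crfsuite; the toy negation lexicon, the feature set, and the BIO label scheme are illustrative assumptions for this example, not the authors' exact configuration.

```python
# Minimal sketch of a two-stage CRF pipeline for negation cue and scope
# detection. The lexicon, features, and labels below are illustrative
# assumptions, not the paper's exact setup. Requires: pip install sklearn-crfsuite
import sklearn_crfsuite

NEGATION_LEXICON = {"not", "no", "never", "without", "n't"}  # toy lexicon

def token_features(sent, i, cue_positions=None):
    """Lexical/contextual features for token i; cue_positions is only
    supplied in the second (scope) stage."""
    word = sent[i]
    feats = {
        "word.lower": word.lower(),
        "word.in_lexicon": word.lower() in NEGATION_LEXICON,
        "prev.lower": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }
    if cue_positions is not None:
        # Distance to the nearest cue detected in stage 1 (scope stage only).
        feats["dist_to_cue"] = min((abs(i - c) for c in cue_positions), default=-1)
        feats["is_cue"] = i in cue_positions
    return feats

def sent_features(sent, cue_positions=None):
    return [token_features(sent, i, cue_positions) for i in range(len(sent))]

# Toy training data: tokens, BIO cue labels, and in/out-of-scope labels.
sents = [["He", "did", "not", "attend", "the", "meeting", "."]]
cue_labels = [["O", "O", "B-CUE", "O", "O", "O", "O"]]
scope_labels = [["O", "I-SCOPE", "I-SCOPE", "I-SCOPE", "I-SCOPE", "I-SCOPE", "O"]]

# Stage 1: train a CRF to detect negation cues.
cue_crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
cue_crf.fit([sent_features(s) for s in sents], cue_labels)

# Stage 2: train a CRF to detect the scope of each cue found in stage 1.
predicted_cues = cue_crf.predict([sent_features(s) for s in sents])
cue_pos = [[i for i, t in enumerate(tags) if t != "O"] for tags in predicted_cues]
scope_crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
scope_crf.fit([sent_features(s, c) for s, c in zip(sents, cue_pos)], scope_labels)
```

Feeding the stage-1 predictions into the stage-2 features mirrors the cascaded design the abstract describes: scope labelling is conditioned on the cues already identified.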
2. False Negative Sample Detection for Graph Contrastive Learning
Authors: Binbin Zhang, Li Wang. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 529-542 (14 pages)
Recently, self-supervised learning has shown great potential in Graph Neural Networks (GNNs) through contrastive learning, which aims to learn discriminative features for each node without label information. The key to graph contrastive learning is data augmentation. The anchor node regards its augmented samples as positive samples, and the rest of the samples are regarded as negative samples, some of which may actually be positive samples. We call these mislabeled samples "false negative" samples, which seriously affect the final learning effect. Since such semantically similar samples are ubiquitous in the graph, the problem of false negative samples is very significant. To address this issue, the paper proposes a novel model, False negative sample Detection for Graph Contrastive Learning (FD4GCL), which uses attribute and structure awareness to detect false negative samples. Experimental results on seven datasets show that FD4GCL outperforms the state-of-the-art baselines and even exceeds several supervised methods.
Keywords: graph representation learning, contrastive learning, false negative sample detection
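To make the false-negative issue concrete, the sketch below masks suspected false negatives (off-diagonal pairs whose embeddings are nearly identical) out of the denominator of an InfoNCE-style contrastive loss. This is a generic illustration under an assumed similarity-threshold heuristic; it is not the FD4GCL model, which detects false negatives using attribute and structure awareness.

```python
# Illustrative sketch: excluding suspected false negatives from an
# InfoNCE-style graph contrastive loss. The similarity-threshold rule
# is an assumption made for this example, not the FD4GCL method.
import torch
import torch.nn.functional as F

def contrastive_loss_with_fn_filter(z1, z2, temperature=0.5, fn_threshold=0.9):
    """z1, z2: (N, d) node embeddings from two augmented views.
    Off-diagonal pairs with cosine similarity above fn_threshold are
    treated as suspected false negatives and masked out of the loss."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    cos_sim = z1 @ z2.t()                            # (N, N) cosine similarities
    logits = cos_sim / temperature
    n = z1.size(0)
    pos_mask = torch.eye(n, dtype=torch.bool, device=z1.device)

    # Suspected false negatives: non-anchor pairs that look too similar.
    fn_mask = (cos_sim > fn_threshold) & ~pos_mask

    # Drop suspected false negatives from the denominator.
    logits = logits.masked_fill(fn_mask, float("-inf"))
    loss = -torch.log(
        torch.exp(logits[pos_mask]) / torch.exp(logits).sum(dim=1)
    ).mean()
    return loss

# Toy usage with random embeddings standing in for two augmented graph views.
z_view1, z_view2 = torch.randn(8, 16), torch.randn(8, 16)
print(contrastive_loss_with_fn_filter(z_view1, z_view2).item())
```

In practice the detection signal would come from the graph itself (node attributes and structural context), but the masking mechanism shown here is the common way such detections feed back into the contrastive objective.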