Funding: Supported by the National High Technology Research and Development Programme of China (No. 2015AA015407), the National Natural Science Foundation of China (No. 61273321), and the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20122302110039).
Abstract: Identifying negation cues and their scope in text is an important subtask of information extraction that can benefit other natural language processing tasks, including but not limited to medical data mining, relation extraction, question answering and sentiment analysis. The tasks of negation cue detection and negation scope detection can be treated as sequence labelling problems. In this paper, a system is presented with two components: negation cue detection and negation scope detection. In the first phase, a conditional random field (CRF) model is trained to detect negation cues using a lexicon of negation words together with lexical and contextual features. Then, another CRF model is trained to detect the scope of each negation cue identified in the first phase, using basic lexical and contextual features. Both models are trained and tested on the dataset distributed for the *SEM Shared Task 2012 on resolving the scope and focus of negation. Experimental results show that the system outperformed all the systems submitted to this shared task.
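To make the sequence-labelling framing concrete, the following is a minimal sketch of per-token feature extraction such as a CRF negation-cue tagger might use. The specific features (lexicon membership, affix checks, one token of left/right context) and the small lexicon are illustrative assumptions, not the feature set used by the system described in the abstract.

```python
# Hypothetical feature extraction for a CRF-based negation-cue tagger.
# The lexicon and affix list below are assumptions for illustration only.
NEGATION_LEXICON = {"not", "no", "never", "none", "nobody", "nothing", "without"}
NEGATION_PREFIXES = ("un", "im", "dis", "non")  # assumed morphological cues

def token_features(tokens, i):
    """Lexical and contextual features for token i, as a dict suitable
    for a CRF toolkit that accepts per-token feature mappings."""
    tok = tokens[i].lower()
    feats = {
        "lower": tok,
        "in_lexicon": tok in NEGATION_LEXICON,
        "has_neg_prefix": tok.startswith(NEGATION_PREFIXES),
        "suffix3": tok[-3:],
        # Contextual features: one token of left and right context.
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }
    return feats

def sentence_features(tokens):
    """Feature dicts for every token in a tokenized sentence."""
    return [token_features(tokens, i) for i in range(len(tokens))]
```

In a full pipeline, these feature dicts would be paired with BIO-style cue labels to train the first CRF; the predicted cues would then condition the features of the second, scope-detection CRF.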
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFB3300503), the Regional Innovation and Development Joint Fund of the National Natural Science Foundation of China (No. U22A20167), and the National Natural Science Foundation of China (No. 61872260).
Abstract: Recently, self-supervised learning has shown great potential in Graph Neural Networks (GNNs) through contrastive learning, which aims to learn discriminative features for each node without label information. The key to graph contrastive learning is data augmentation. The anchor node regards its augmented samples as positive samples, and all remaining samples are regarded as negative samples, some of which may actually be positive. We call these mislabeled samples "false negative" samples, and they seriously degrade the final learning effect. Since such semantically similar samples are ubiquitous in a graph, the problem of false negative samples is significant. To address this issue, the paper proposes a novel model, False negative sample Detection for Graph Contrastive Learning (FD4GCL), which uses attribute- and structure-aware detection to identify false negative samples. Experimental results on seven datasets show that FD4GCL outperforms the state-of-the-art baselines and even exceeds several supervised methods.
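The false-negative problem described above can be illustrated with a small sketch. This is not the FD4GCL algorithm itself: it shows only the simplest attribute-aware variant of the idea, flagging "negative" samples whose embeddings are suspiciously similar to the anchor. The cosine measure and the threshold value are assumptions for demonstration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def detect_false_negatives(anchor, negatives, threshold=0.9):
    """Return indices of nominal negatives so similar to the anchor
    that they are likely mislabeled positives ("false negatives").
    `threshold` is an assumed hyperparameter, not a published value."""
    return [i for i, z in enumerate(negatives) if cosine(anchor, z) >= threshold]
```

Samples flagged this way could then be removed from, or down-weighted in, the denominator of a contrastive (e.g. InfoNCE-style) loss so they no longer push semantically similar nodes apart.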