Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 61602515).
Abstract: In the field of information security, research on coreference resolution of entities remains scarce. A hybrid method is proposed to address coreference resolution in this domain. The work consists of two parts: the first extracts all candidates (including noun phrases, pronouns, entities, and nested phrases) from a given document and classifies them; the second resolves coreference among the selected candidates. In the first part, a method combining rules with a deep learning model (Dictionary BiLSTM-Attention-CRF, or DBAC) is proposed to extract and classify all candidates in the text. The DBAC model introduces a domain-dictionary matching mechanism, from which new features of words and their contexts are obtained. In this way, the entities and entity-type information contained in the domain dictionary are fully exploited, which helps to recognize both rare and long entities. In the second part, candidates are divided into pronoun candidates and noun-phrase candidates according to part of speech; coreference resolution of pronoun candidates is handled with hand-crafted rules, and that of noun-phrase candidates with machine learning. Finally, a dataset is created from information security data with which to evaluate our methods. The experimental results show that the proposed model exhibits better performance than the other baseline models.
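The dictionary matching step of DBAC can be pictured with a small sketch. The dictionary entries, the BIO-style feature encoding, and the greedy longest-match strategy below are illustrative assumptions, not the authors' implementation; the resulting per-token tags are the kind of feature a BiLSTM-Attention-CRF tagger could concatenate with word embeddings.

```python
# Minimal sketch of a domain-dictionary matching mechanism (hypothetical
# dictionary and feature scheme): each token receives a BIO-style type tag
# obtained by greedy longest-match lookup against the dictionary.

# Hypothetical domain dictionary: entity string -> entity type.
DOMAIN_DICT = {
    "sql injection": "ATTACK",
    "buffer overflow": "ATTACK",
    "openssl": "SOFTWARE",
    "cve-2014-0160": "VULNERABILITY",
}
MAX_ENTITY_LEN = max(len(k.split()) for k in DOMAIN_DICT)

def dictionary_features(tokens):
    """Return one BIO-style dictionary tag per token via greedy longest match."""
    tags = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        matched = False
        for span in range(min(MAX_ENTITY_LEN, len(tokens) - i), 0, -1):
            phrase = " ".join(t.lower() for t in tokens[i:i + span])
            if phrase in DOMAIN_DICT:
                etype = DOMAIN_DICT[phrase]
                tags[i] = "B-" + etype
                for j in range(i + 1, i + span):
                    tags[j] = "I-" + etype
                i += span
                matched = True
                break
        if not matched:
            i += 1
    return tags

if __name__ == "__main__":
    sent = "The CVE-2014-0160 flaw in OpenSSL enables a buffer overflow attack".split()
    print(list(zip(sent, dictionary_features(sent))))
```

Long entities are handled naturally here because longer dictionary matches are tried before shorter ones, and rare entities are tagged as long as they appear in the dictionary, regardless of corpus frequency.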
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61836007 and 61772354.
Abstract: Due to the small size of the annotated corpora and the sparsity of event trigger words, event coreference resolvers cannot capture enough event semantics, especially trigger semantics, to identify coreferential event mentions. To address these issues, this paper proposes a trigger semantics augmentation mechanism to boost event coreference resolution. First, the mechanism performs a trigger-oriented masking strategy to pre-train a BERT (Bidirectional Encoder Representations from Transformers)-based encoder, Trigger-BERT, which is fine-tuned on the large-scale unlabeled dataset Gigaword. Second, it combines the event semantic relations from the Trigger-BERT encoder with the event interactions from a soft-attention mechanism to resolve event coreference. Experimental results on both the KBP2016 and KBP2017 datasets show that our proposed model outperforms several state-of-the-art baselines.
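A trigger-oriented masking step can be sketched as below. The trigger lexicon, the masking rates, and the use of bert-base-uncased are assumptions for illustration only; the point is that trigger tokens are masked preferentially so the masked-language-modeling loss concentrates on trigger semantics.

```python
# Sketch of trigger-oriented masking for MLM pre-training (toy trigger list and
# illustrative masking rates; not the authors' Trigger-BERT configuration).
import random
import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

TRIGGERS = {"attack", "attacked", "explosion", "arrested", "died"}  # hypothetical trigger lexicon
P_TRIGGER, P_OTHER = 0.8, 0.1                                       # illustrative masking rates

def trigger_oriented_mlm_loss(sentence):
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    labels = enc["input_ids"].clone()
    tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
    for i, tok in enumerate(tokens):
        if tok in tokenizer.all_special_tokens:
            labels[0, i] = -100                          # never predict special tokens
            continue
        p = P_TRIGGER if tok in TRIGGERS else P_OTHER
        if random.random() < p:
            input_ids[0, i] = tokenizer.mask_token_id    # mask token; keep gold id in labels
        else:
            labels[0, i] = -100                          # unmasked tokens are ignored by the loss
    out = model(input_ids=input_ids, attention_mask=enc["attention_mask"], labels=labels)
    return out.loss

print(float(trigger_oriented_mlm_loss("Rebels attacked the convoy and three soldiers died .")))
```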
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 60873150, 90920004, and 61003153.
Abstract: Knowledge of noun phrase anaphoricity might be profitably exploited in coreference resolution to bypass the resolution of non-anaphoric noun phrases. It is therefore surprising that recent attempts to incorporate automatically acquired anaphoricity information into coreference resolution systems have fallen far short of expectations. This paper proposes a global learning method that determines the anaphoricity of noun phrases via a label propagation algorithm to improve learning-based coreference resolution. To eliminate the huge computational burden of the label propagation algorithm, we employ the weighted support vectors as the critical instances to represent all the anaphoricity-labeled NP instances in the training texts. In addition, two kinds of kernels, i.e., the feature-based RBF (Radial Basis Function) kernel and the convolution tree kernel with approximate matching, are explored to compute the anaphoricity similarity between two noun phrases. Experiments on the ACE2003 corpus demonstrate the effectiveness of our method in anaphoricity determination of noun phrases and its application in learning-based coreference resolution.
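The propagation itself can be illustrated with a small numerical sketch. The toy two-dimensional features stand in for the paper's NP feature vectors, and the clamping/iteration scheme is one standard label propagation variant rather than the authors' exact algorithm.

```python
# Sketch of label propagation over an RBF-kernel similarity graph (toy features;
# +1 = anaphoric seed, -1 = non-anaphoric seed, 0 = unlabeled noun phrase).
import numpy as np

def rbf_kernel(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def label_propagation(X, y, n_iter=100):
    """Propagate anaphoricity scores from labeled (critical) instances to the rest."""
    W = rbf_kernel(X)
    np.fill_diagonal(W, 0.0)
    S = W / W.sum(axis=1, keepdims=True)      # row-normalized transition matrix
    f = y.astype(float).copy()
    labeled = y != 0
    for _ in range(n_iter):
        f = S @ f
        f[labeled] = y[labeled]               # clamp the labeled instances each round
    return f

if __name__ == "__main__":
    X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0], [0.5, 0.6]])
    y = np.array([1, 0, -1, 0, 0])
    print(label_propagation(X, y).round(2))
```

Unlabeled noun phrases end up with scores between the two seeds, weighted by how close they sit to each seed in kernel space, which is the sense in which the method is "global" rather than classifying each NP in isolation.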
Abstract: Most previous event coreference resolution models are pairwise similarity models: they encode the representations of two event mentions and compute their similarity to decide whether the mentions corefer. However, when two event mentions appear close to each other in a document, the contextual representation encoded for one mention absorbs information about the other event, which degrades model performance. To address this problem, an end-to-end event coreference resolution model based on core sentences (End-to-end Event Coreference Resolution Based on Core Sentence, ECR-CS) is proposed. The model automatically extracts event information and, following a predefined template, constructs a core sentence for each event mention, using the representation of the core sentence in place of the representation of the event mention. Because a core sentence contains the information of a single event only, the proposed model can eliminate interference from other events when encoding event representations. In addition, limited by the performance of the event information extraction tool, the constructed core sentence may lose some important information about the event, so the contextual representation of the event in the document is used as compensation. The model introduces a gating mechanism that decomposes the context embedding vector into two components, one parallel and one orthogonal to the core-sentence embedding vector: the parallel component can be regarded as information along the same dimensions as the core sentence, while the orthogonal component carries new information not contained in the core sentence. The amount of new information from the orthogonal component used to supplement the important information missing from the core sentence is controlled by the relevance between the context information and the core-sentence information. Experiments on the ACE2005 dataset show that, compared with the state-of-the-art model, ECR-CS improves the CoNLL and AVG scores by 1.76 and 1.04, respectively.
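The parallel/orthogonal decomposition and gating can be sketched as follows. The dimensions, the sigmoid gate parameterization, and the final combination are assumptions for illustration; only the projection of the context vector onto the core-sentence vector and the gating of the orthogonal remainder follow the description above.

```python
# Sketch of the gating mechanism: split context embedding c into components
# parallel and orthogonal to the core-sentence embedding s, then let a learned
# gate decide how much orthogonal (new) information supplements s.
import torch
import torch.nn as nn

class CoreSentenceGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)   # relevance of context to core sentence

    def forward(self, s, c):
        # parallel component: projection of c onto s
        scale = (c * s).sum(-1, keepdim=True) / (s * s).sum(-1, keepdim=True).clamp_min(1e-8)
        c_par = scale * s
        c_orth = c - c_par                    # information not contained in the core sentence
        g = torch.sigmoid(self.gate(torch.cat([s, c], dim=-1)))
        return s + g * c_orth                 # gated supplement of the missing information

if __name__ == "__main__":
    torch.manual_seed(0)
    s, c = torch.randn(2, 128), torch.randn(2, 128)
    print(CoreSentenceGate(128)(s, c).shape)  # torch.Size([2, 128])
```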