Social media platforms such as Twitter serve as a novel news medium and have become increasingly popular since their establishment. Large-scale, first-hand, user-generated tweets motivate automatic event detection on Twitter. Previous unsupervised approaches detected events by clustering words. These methods detect events using burstiness, which measures the surging frequency of words within certain time windows. However, event clusters represented by sets of individual words are difficult to understand. This issue is addressed by building a document-level event detection model that directly calculates the burstiness of tweets, leveraging distributed word representations to model semantic information and thereby avoid sparsity. Results show that the document-level model not only offers event summaries that are directly human-readable, but also achieves significantly improved accuracy over previous word/segment-based methods for unsupervised tweet event detection.
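The burstiness signal described above can be sketched in a few lines. The windowing scheme and the ratio-to-mean score below are illustrative simplifications, not the paper's exact formulation:

```python
from collections import Counter, defaultdict

def window_counts(tweets, window_size):
    """Group tweets into time windows and count word frequencies per window.

    `tweets` is a list of (timestamp, text) pairs with integer timestamps
    (e.g. minutes since some origin). Returns {window_index: Counter}.
    """
    counts = defaultdict(Counter)
    for ts, text in tweets:
        counts[ts // window_size].update(text.lower().split())
    return counts

def burstiness(counts, word, window):
    """Ratio of a word's frequency in one window to its mean frequency
    across all windows; values well above 1.0 signal a burst."""
    freqs = [c[word] for c in counts.values()]
    mean = sum(freqs) / len(freqs)
    return counts[window][word] / mean if mean > 0 else 0.0

# Toy data: "earthquake" surges in the first one-hour window.
tweets = [
    (0, "earthquake hits city"),
    (1, "earthquake reported downtown"),
    (65, "coffee this morning"),
    (66, "earthquake aftershock felt"),
]
counts = window_counts(tweets, window_size=60)
print(burstiness(counts, "earthquake", window=0))  # 2 / mean(1.5) ≈ 1.33
```

The paper's model scores whole tweets rather than words, but the same frequency-versus-baseline intuition applies.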
Relation Extraction (RE) aims to identify a predefined relation type between two entities mentioned in a piece of text, e.g., a sentence-level or document-level text. Most existing studies suffer from noise in the text, so effective pruning is of great importance. Conventional sentence-level RE addresses this issue with a denoising method that uses the shortest dependency path to build a long-range semantic dependency between entity pairs. However, such denoising methods are scarce in document-level RE. In this work, we explicitly model a denoised document-level graph based on linguistic knowledge to capture various long-range semantic dependencies among entities. We first formalize a Syntactic Dependency Tree forest (SDT-forest) by introducing syntax and discourse dependency relations. Then, the Steiner tree algorithm extracts a mention-level denoised graph, the Steiner Graph (SG), removing linguistically irrelevant words from the SDT-forest. We further devise a slide residual attention to highlight word-level evidence in the text and the SG. Finally, classification is performed on the SG to infer the relations of entity pairs. Extensive experiments on three public datasets show that our method helps establish long-range semantic dependencies and improves classification performance on longer texts.
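To illustrate the Steiner-tree-style pruning: with exactly two terminal mentions, the minimal Steiner tree in an unweighted graph reduces to a shortest path, so a BFS sketch over a toy dependency graph conveys the idea. The sentence and graph are invented; the paper's SDT-forest and multi-mention case are considerably richer:

```python
from collections import deque

def shortest_path(graph, src, dst):
    """BFS shortest path in an unweighted graph given as {node: [neighbors]}.
    For exactly two terminals, the minimal Steiner tree is this path."""
    prev = {src: None}
    q = deque([src])
    while q:
        node = q.popleft()
        if node == dst:  # reconstruct the path by walking predecessors
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nb in graph[node]:
            if nb not in prev:
                prev[nb] = node
                q.append(nb)
    return None

# Toy word-level dependency graph for "Smith founded the Acme company in 2003";
# the entity mentions "Smith" and "Acme" are the terminals to connect.
graph = {
    "Smith": ["founded"],
    "founded": ["Smith", "company", "in"],
    "company": ["founded", "the", "Acme"],
    "the": ["company"],
    "in": ["founded", "2003"],
    "2003": ["in"],
    "Acme": ["company"],
}
print(shortest_path(graph, "Smith", "Acme"))
# linguistically irrelevant words ("the", "in", "2003") are pruned away
```

With more than two mentions, an approximation such as the metric-closure Steiner tree algorithm connects all terminals at minimal cost, which is the regime the paper's SG construction operates in.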
Event extraction aims to detect event types and extract event arguments from unstructured text. Existing methods remain limited when processing document-level text, because a document may describe multiple events and the arguments of a single event are often scattered across different sentences. To address these challenges, we propose a reverse inference model for document-level event extraction (RIDEE). Based on a trigger-free design, RIDEE recasts document-level event extraction as two subtasks, candidate event argument extraction and event trigger inference, extracting event arguments and detecting event types in parallel. In addition, an event dependency pool is designed to store historical events, allowing the model to fully exploit dependencies between events when processing multi-event texts. Experimental results on public datasets show that RIDEE outperforms existing event extraction models on document-level event extraction.
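A minimal sketch of an event dependency pool: the class name, fields, and overlap heuristic below are hypothetical stand-ins for the paper's mechanism, showing only the idea of storing extracted events so later extractions in a multi-event document can reuse shared arguments:

```python
class EventDependencyPool:
    """Stores previously extracted events so that later extractions in a
    multi-event document can consult them (illustrative sketch only)."""

    def __init__(self):
        self.events = []  # each event: {"type": str, "arguments": dict}

    def add(self, event_type, arguments):
        self.events.append({"type": event_type, "arguments": arguments})

    def shared_arguments(self, arguments):
        """Return argument values a new candidate shares with any stored
        event: a crude proxy for an inter-event dependency signal."""
        seen = {v for e in self.events for v in e["arguments"].values()}
        return {k: v for k, v in arguments.items() if v in seen}

pool = EventDependencyPool()
pool.add("EquityFreeze", {"company": "Acme", "date": "2020-01-02"})
print(pool.shared_arguments({"company": "Acme", "amount": "1M"}))
# {'company': 'Acme'} — the new candidate reuses a known argument
```

In the actual model the pool would hold learned event representations rather than raw strings, but the lookup pattern is the same.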
Document-level machine translation (MT) remains challenging due to the difficulty of efficiently using document-level global context for translation. In this paper, we propose a hierarchical model to learn the global context for document-level neural machine translation (NMT). This is done through a sentence encoder that captures intra-sentence dependencies and a document encoder that models document-level inter-sentence consistency and coherence. With this hierarchical architecture, we feed the extracted document-level global context back to each word in a top-down fashion, distinguishing different translations of a word according to its specific surrounding context. Notably, we explore the effect of three popular attention functions during the information backward-distribution phase to examine how our model distributes global context information. In addition, since large-scale in-domain document-level parallel corpora are usually unavailable, we use a two-step training strategy that combines a large-scale corpus of out-of-domain parallel sentence pairs with a small-scale corpus of in-domain parallel document pairs to achieve domain adaptability. On Chinese-English and English-German corpora, our model improves the Transformer baseline by 4.5 BLEU points on average, demonstrating the effectiveness of the proposed hierarchical model for document-level NMT.
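The abstract does not name its three attention functions; a common trio in the NMT literature is dot-product, scaled dot-product, and additive (Bahdanau-style) attention. A minimal NumPy sketch under that assumption, with a global context vector attending over per-word states:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def dot_attention(q, keys):
    """score_i = k_i . q"""
    return softmax(keys @ q)

def scaled_dot_attention(q, keys):
    """score_i = (k_i . q) / sqrt(d), as in the Transformer."""
    return softmax(keys @ q / np.sqrt(q.shape[0]))

def additive_attention(q, keys, W_q, W_k, v):
    """Bahdanau-style: score_i = v . tanh(W_q q + W_k k_i)."""
    return softmax(np.tanh(q @ W_q.T + keys @ W_k.T) @ v)

rng = np.random.default_rng(0)
d = 4
q = rng.normal(size=d)           # document-level global context vector
keys = rng.normal(size=(3, d))   # per-word hidden states receiving context
W_q, W_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)

for weights in (dot_attention(q, keys),
                scaled_dot_attention(q, keys),
                additive_attention(q, keys, W_q, W_k, v)):
    print(np.round(weights, 3))  # each weight vector sums to 1
```

The scaling factor in the second variant flattens the score distribution for large `d`, which is why the three functions can distribute global context quite differently.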
Funding: Supported by the National High Technology Research and Development Programme of China (No. 2015AA015405).
Funding: Supported by the National Natural Science Foundation of China (Nos. U19A2059 and 62176046).
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61751206, 61673290, and 61876118; the Postgraduate Research & Practice Innovation Program of Jiangsu Province of China under Grant No. KYCX20_2669; and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).