Funding: Supported by the National Key Basic Research Program of China (No. 2014CB340600); partially supported by the National Natural Science Foundation of China (Grant Nos. 61332019, 61672531); partially supported by the National Social Science Foundation of China (Grant No. 14GJ003-152).
Abstract: Information content security is a branch of cyberspace security, and how to effectively manage and use Weibo comment information has become a research focus in this field. The three main tasks involved are emotion sentence identification and classification, emotion tendency classification, and emotion expression extraction. Building on the latent Dirichlet allocation (LDA) model, a Gibbs sampling implementation for inference is presented, which can categorize emotion tendency automatically. To address the low recall of emotion expression extraction on Weibo, dependency parsing is applied: candidates are divided into subject and object categories, six kinds of dependency patterns between evaluated objects and emotion words are summarized, and a merge algorithm for evaluated objects is proposed. Evaluated in a public bakeoff, the approach ranks among the best methods in the shared task's emotion expression extraction sub-task, indicating that it is both innovative and practical.
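The LDA inference step mentioned above can be sketched as a collapsed Gibbs sampler, which resamples each token's topic from its full conditional given all other assignments. The toy corpus, topic count, and hyperparameters below are illustrative assumptions, not the paper's settings:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA on a list of token lists."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    # z[d][i]: topic assigned to the i-th token of document d
    z = [[rng.randrange(K) for _ in d] for d in docs]
    ndk = [[0] * K for _ in docs]                # doc-topic counts
    nkw = [defaultdict(int) for _ in range(K)]   # topic-word counts
    nk = [0] * K                                 # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # remove the token, then resample from the full conditional
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                probs = [(ndk[d][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                         for j in range(K)]
                r = rng.random() * sum(probs)
                for j, p in enumerate(probs):
                    r -= p
                    if r <= 0:
                        k = j
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return z, ndk

docs = [["good", "happy", "great"], ["bad", "sad", "awful"],
        ["happy", "great", "good"], ["sad", "awful", "bad"]]
z, ndk = lda_gibbs(docs, K=2)
```

With topics mapped to emotion-tendency categories, the dominant topic per document (the argmax of its row in `ndk`) would serve as the automatic label.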
Abstract: In Chinese, dependency analysis has been shown to be a powerful approach to syntactic parsing because phrase order in a sentence is relatively free compared with English. Conventional dependency parsers require a number of sophisticated rules that must be handcrafted by linguists and are too cumbersome to maintain. To solve this problem, a parser using a support vector machine (SVM) is introduced. First, a new strategy for dependency analysis is proposed. Then selected feature types are used for learning and for creating the modification matrix with the SVM. Finally, the dependencies among phrases in the sentence are generated. Experiments conducted to analyze how each feature type affects parsing accuracy show that the model can increase the accuracy of the dependency parser by 9.2%.
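The pairwise idea behind the modification matrix can be sketched as follows: every (head, modifier) candidate pair gets a feature vector, a binary classifier scores it, and the scores fill the matrix. A simple perceptron stands in for the SVM so the sketch stays dependency-free, and the three-word sentence, POS tags, and feature types are illustrative assumptions:

```python
def features(words, tags, h, m):
    """Feature set for a (head, modifier) candidate pair."""
    return {f"hw={words[h]}", f"ht={tags[h]}", f"mw={words[m]}",
            f"mt={tags[m]}", f"dist={h - m}", f"pair={tags[h]}|{tags[m]}"}

def train(examples, epochs=10):
    """Perceptron over set-valued features (SVM stand-in for the sketch)."""
    w = {}
    for _ in range(epochs):
        for feats, y in examples:
            score = sum(w.get(f, 0.0) for f in feats)
            if (1 if score > 0 else 0) != y:
                for f in feats:
                    w[f] = w.get(f, 0.0) + (1 if y == 1 else -1)
    return w

# Toy sentence (glossing a Chinese SVO clause): "I eat apples"
words = ["I", "eat", "apples"]
tags = ["PN", "VV", "NN"]
gold_arcs = {(1, 0), (1, 2)}   # "eat" heads both arguments
examples = [(features(words, tags, h, m), 1 if (h, m) in gold_arcs else 0)
            for h in range(3) for m in range(3) if h != m]
w = train(examples)
# The "modification matrix": a score for every head/modifier pair
matrix = {(h, m): sum(w.get(f, 0.0) for f in features(words, tags, h, m))
          for h in range(3) for m in range(3) if h != m}
```

A decoder would then read the highest-scoring, well-formed set of arcs off this matrix to produce the sentence's dependency structure.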
Funding: Project (61262035) supported by the National Natural Science Foundation of China; Projects (GJJ12271, GJJ12742) supported by the Science and Technology Foundation of the Education Department of Jiangxi Province, China; Project (20122BAB201033) supported by the Natural Science Foundation of Jiangxi Province, China.
Abstract: Head-driven statistical models for natural language parsing are the most representative lexicalized syntactic parsing models, but they utilize only the semantic dependency between words and do not incorporate other semantic information such as semantic collocation and semantic category. Several improvements to this parser are presented. First, valency is an essential semantic feature of a word: once the valency of a word is determined, its collocations are clear and the sentence structure can be derived directly. Thus, a syntactic parsing model combining valency structure with semantic dependency is proposed on the basis of head-driven statistical parsing models. Second, semantic role labeling (SRL) is necessary for deep natural language processing, so an integrated approach is proposed that incorporates semantic parsing into the syntactic parsing process. Experiments on the refined statistical parser show 87.12% precision and 85.04% recall; the F-measure is improved by 5.68% compared with the head-driven parsing model introduced by Collins.
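The role of valency can be illustrated with a toy filter: candidate structures whose argument count conflicts with the head verb's valency are pruned before the head-driven score picks a winner. The valency lexicon and scores below are invented for illustration and are not the paper's model:

```python
# Hypothetical valency lexicon: required argument count per verb.
VALENCY = {"give": 3, "eat": 2, "sleep": 1}

def plausible(head, args):
    """Keep only candidates whose argument count matches the head's valency."""
    return VALENCY.get(head, 0) == len(args)

# (head verb, attached arguments, head-driven model score) -- toy values
candidates = [
    ("eat", ["I", "apple"], 0.6),   # matches eat/2
    ("eat", ["I"], 0.7),            # higher score, but violates valency
    ("sleep", ["I", "bed"], 0.4),   # violates sleep/1
]
best = max((c for c in candidates if plausible(c[0], c[1])),
           key=lambda c: c[2])
```

Note that the valency-violating candidate is discarded even though its raw score is higher, which is the sense in which valency constrains the structure the statistical model derives.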
Funding: This research was funded in part by the National Natural Science Foundation of China (61871140, 61872100, 61572153, U1636215, 61572492, 61672020); the National Key Research and Development Plan (Grant No. 2018YFB0803504); and the Open Fund of the Beijing Key Laboratory of IOT Information Security Technology (J6V0011104).
Abstract: Recently, dependency information has been used in different ways to improve neural machine translation (NMT). For example, dependency labels can be added to the hidden states of source words, or the contiguous information of a source word can be found according to the dependency tree, learned independently, and added into the NMT model as a unit in various ways. However, these works are all limited to using dependency information to enrich the hidden states of source words. Since many works in statistical machine translation (SMT) and NMT have proven the validity and potential of dependency information, we believe there are still many ways to apply it within the NMT architecture. In this paper, we explore a new way to use dependency information to improve NMT. Based on the local attention mechanism, we present the Dependency-based Local Attention Approach (DLAA), a new attention mechanism that allows the NMT model to trace the dependency words related to the word currently being translated. Our work also indicates that dependency information can help supervise the attention mechanism. Experimental results on the WMT 17 Chinese-to-English translation task's shared training datasets show that our model is effective and performs distinctively well on long-sentence translation.
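A rough sketch of the dependency-based local attention idea: instead of attending over a fixed window around the aligned source position, the decoder attends only to the aligned word and its neighbors (head and children) in the dependency tree. The scores, alignment, and tree below are toy values; DLAA's exact formulation is in the paper:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dependency_local_attention(scores, aligned, heads):
    """Mask attention scores to the aligned word's dependency neighborhood."""
    allowed = {aligned}
    if heads[aligned] >= 0:                     # the word's head
        allowed.add(heads[aligned])
    allowed |= {i for i, h in enumerate(heads) if h == aligned}  # its children
    masked = [s if i in allowed else float("-inf")
              for i, s in enumerate(scores)]
    return softmax(masked)

# heads[i] is the head index of source word i (-1 for the root).
heads = [1, -1, 1, 2]          # words 0 and 2 depend on 1; word 3 on 2
raw_scores = [0.5, 2.0, 1.0, 3.0]
weights = dependency_local_attention(raw_scores, aligned=1, heads=heads)
```

Word 3 gets zero attention despite its high raw score because it is outside word 1's dependency neighborhood, which is exactly how the tree supervises the attention distribution.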
Abstract: Metaphor is a common phenomenon in human language, and metaphor recognition is of fundamental importance for many natural language processing tasks. For Chinese metaphor recognition, this paper proposes a model based on a syntax-aware graph convolutional network and ELECTRA (Syntax-aware GCN with ELECTRA, SaGE). Grounded in linguistics, the model uses ELECTRA and a Transformer encoder to extract the semantic features of a sentence, organizes the sentence into a graph according to its dependency relations and uses a graph convolutional network to extract its syntactic features, and then fuses the two kinds of features for metaphor recognition. On the CCL 2018 Chinese Metaphor Recognition shared-task dataset, the model surpasses the previous best result with a macro-averaged F1 score of 85.22%, verifying that fusing semantic and syntactic information plays an important role in metaphor recognition.
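The syntactic half of such a model can be sketched as a single graph-convolution layer over the dependency adjacency matrix: each word's vector is updated from its dependency neighbors (with self-loops and degree normalization). The dimensions, word vectors, and weights below are toy assumptions; the full model pairs this with ELECTRA-derived semantic features:

```python
def gcn_layer(adj, H, W):
    """One GCN layer: H' = ReLU(D^-1 (A + I) H W) with row normalization."""
    n = len(adj)
    A = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]  # add self-loops
         for i in range(n)]
    deg = [sum(row) for row in A]
    # aggregate neighbor vectors, normalized by degree
    AH = [[sum(A[i][k] * H[k][j] for k in range(n)) / deg[i]
           for j in range(len(H[0]))] for i in range(n)]
    # linear transform + ReLU
    return [[max(0.0, sum(AH[i][k] * W[k][j] for k in range(len(W))))
             for j in range(len(W[0]))] for i in range(n)]

# Three-word sentence; dependency arcs 1-0 and 1-2, treated as undirected.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy word vectors
W = [[1.0, 0.0], [0.0, 1.0]]                # identity weights for the sketch
H1 = gcn_layer(adj, H, W)
```

Stacking such layers lets each word's representation absorb information from words several dependency arcs away before the syntactic and semantic features are fused.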
Funding: The National Natural Science Foundation of China (Grant Nos. 61602160 and 61672211).
Abstract: Syntactic and semantic parsing has been investigated for decades and remains a primary topic in the natural language processing community. This article presents a brief survey of the topic. Parsing encompasses many tasks that are difficult to cover fully; here we focus on the two most popular formalizations: constituent parsing and dependency parsing. Constituent parsing mainly targets syntactic analysis, while dependency parsing can handle both syntactic and semantic analysis. The article briefly reviews representative models of constituent parsing and dependency parsing, as well as dependency graph parsing with rich semantics. In addition, we review closely related topics such as cross-domain, cross-lingual, and joint parsing models, parser applications, and corpus development for parsing.
Funding: Supported by the National Natural Science Foundation of China (No. 61503248) and the Major Program of the Science and Technology Commission of Shanghai Municipality (No. 17JC1404102).
Abstract: Discriminative approaches have shown their effectiveness in unsupervised dependency parsing. However, due to their strong representational power, discriminative approaches tend to converge quickly to poor local optima during unsupervised training. In this paper, we tackle this problem by drawing inspiration from robust deep learning techniques. Specifically, we propose robust unsupervised discriminative dependency parsing, a framework that integrates the concepts of denoising autoencoders and conditional random field autoencoders. Within this framework, we propose two types of sentence-corruption mechanisms as well as a posterior regularization method for robust training. We tested our methods on eight languages, and the results show that our methods lead to significant improvements over previous work.
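One plausible sentence-corruption mechanism in the denoising spirit is random word dropout: the model sees a corrupted input but is trained against the clean sentence, discouraging degenerate solutions. The scheme and drop probability below are illustrative assumptions, not the two specific mechanisms the paper proposes:

```python
import random

def corrupt(tokens, p_drop=0.3, seed=0):
    """Randomly drop tokens; never return an empty sentence."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p_drop]
    return kept if kept else tokens[:1]

clean = ["the", "dog", "chased", "the", "cat"]
noisy = corrupt(clean)
# Training pairs (noisy input, clean target) would then drive the
# denoising objective alongside the CRF-autoencoder reconstruction loss.
```

Because the corruption is resampled each epoch, the parser cannot latch onto a single spurious pattern in the input, which is the intuition behind using denoising to escape poor local optima.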