Abstract
Word semantic similarity is one of the most active research topics in natural language processing, and its results have a profound impact on many applications, such as machine translation systems and computational linguistics. Taking evidence theory as its foundation and combining it with a knowledge base, this paper proposes a new word semantic similarity measurement algorithm. Basic probability assignment (basic trust distribution) functions are first generated statistically, then fused into a global basic probability assignment through evidence-conflict handling, importance weighting, and the Dempster-Shafer (D-S) combination rule; word semantic similarity is then computed quantitatively on this basis.
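The paper itself does not give code, but the D-S combination rule mentioned in the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the frame of discernment `{sim, dis}` and the mass values are hypothetical, and the conflict handling shown is plain Dempster normalization rather than the paper's specific conflict-processing and importance-weighting scheme.

```python
# Illustrative sketch of the Dempster-Shafer combination rule.
# A basic probability assignment (BPA) is a dict mapping focal
# elements (frozensets of hypotheses) to mass values summing to 1.
from itertools import product

def ds_combine(m1, m2):
    """Fuse two BPAs with Dempster's rule, normalizing out conflict."""
    combined = {}
    conflict = 0.0  # total mass landing on the empty intersection
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    k = 1.0 - conflict  # Dempster normalization factor
    return {s: v / k for s, v in combined.items()}

# Hypothetical frame: a word pair is "similar" (sim) or "dissimilar" (dis);
# mass on the whole frame U expresses uncertainty of an evidence source.
S, D = frozenset({"sim"}), frozenset({"dis"})
U = S | D
m1 = {S: 0.6, U: 0.4}            # evidence source 1 (e.g. one knowledge-base feature)
m2 = {S: 0.5, D: 0.2, U: 0.3}    # evidence source 2
fused = ds_combine(m1, m2)       # global BPA after fusion
```

After fusion, the mass assigned to `{sim}` in the global BPA can serve as a quantitative similarity score, which matches the pipeline the abstract describes at a high level.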
Authors
Wang Xinxin
Ma Famin
Wang Xinxin; Ma Famin (School of Humanities, Shangluo University, Shangluo, Shaanxi, China, 726000; School of Mathematics and Computer Science, Shangluo University, Shangluo, Shaanxi, China, 726000)
Source
《现代科学仪器》
2020, Issue 2, pp. 144-148 (5 pages)
Modern Scientific Instruments
Funding
Shaanxi Province Education and Teaching Reform Project (No. SGH18H400).
Keywords
evidence perspective
English words
similarity algorithm