Abstract
Word similarity computation is a key technology in natural language processing and a widely studied fundamental topic. Traditional word similarity measures are mostly based either on semantic knowledge or on corpus statistics: the former requires a semantic lexicon organized with hierarchical relations, the latter a large-scale corpus. This paper proposes a new word similarity measure based on BaiduBaike (Baidu Encyclopedia). By analyzing the information in BaiduBaike entries, the method assesses entry similarity from the explanatory content that characterizes each entry, defines a similarity formula between entries, and derives the overall similarity from the similarities between its parts. Experimental results show that, compared with existing similarity measures, the proposed algorithm is more effective and reasonable.
Research on word similarity measurement has been popular not only in natural language processing but also in other basic research. Traditional word similarity measurements use a semantic lexicon or a large-scale corpus. We first discussed the background of the applications of word similarity measurement, such as information retrieval, information extraction, text classification, example-based machine translation, etc. Then two strategies of word similarity measurement were summarized: one is based on an ontology or a semantic taxonomy, the other is based on large collocations of words in a corpus. BaiduBaike, an online open encyclopedia, can be used not only as a corpus but also as a knowledge resource with rich semantic information. Based on BaiduBaike, with its rich semantic information and category graph, we proposed a new method to analyze and compute Chinese word similarity from four dimensions: the baike card, the content of the word, the open classification of the word, and the correlated words. We used a language network to choose the top key terms from the content of each word. Based on vector space model (VSM) theory, we calculated the similarity between parts of words. We presented a new "multi-path searching" algorithm on the BaiduBaike category graph. A comprehensive similarity measure combining the four parts was proposed. Experiment results show that the method has good performance.
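The abstract describes computing per-part similarities in a vector space model (VSM) and then combining the four dimensions into one score. A minimal sketch of that idea is shown below; the function names, the bag-of-terms representation, and the equal-weight linear combination are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def cosine_similarity(terms_a, terms_b):
    """VSM cosine similarity between two lists of key terms.

    Each list (e.g. top key terms extracted from an entry's content)
    is turned into a term-frequency vector; similarity is the cosine
    of the angle between the two vectors.
    """
    va, vb = Counter(terms_a), Counter(terms_b)
    dot = sum(va[t] * vb[t] for t in set(va) | set(vb))
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def combined_similarity(part_sims, weights):
    """Weighted combination of per-part similarities.

    part_sims: similarities for the four dimensions (baike card,
    content, open classification, correlated words); weights should
    sum to 1. The weighting scheme here is a hypothetical placeholder.
    """
    return sum(w * s for w, s in zip(weights, part_sims))
```

For example, two entries sharing all key terms get a cosine similarity of 1.0, while disjoint term sets score 0.0; the four per-part scores are then merged into a single word-similarity value.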
Source
《计算机科学》
CSCD
PKU Core Journal (北大核心)
2013, No. 6, pp. 199-202 (4 pages)
Computer Science
Funding
Supported by the National Natural Science Foundation of China (70871115)