Abstract
To address the poor segmentation quality of the Chinese analyzers bundled with Lucene, this paper analyzes existing segmentation dictionary mechanisms and designs a segmenter based on a full-hash binary-seek-by-word algorithm, which is then integrated into Lucene. By hashing whole words, the algorithm reduces the number of dictionary-entry comparisons and thereby improves segmentation efficiency. The segmenter's dictionary file is easy to maintain and can be customized to the requirements of different applications, improving retrieval efficiency.
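To illustrate the mechanism described above, the following minimal Java sketch pairs a whole-word hash lookup with forward maximum matching. All names (HashWordDictionary, segment, MAX_WORD_LEN) are hypothetical, and a HashSet stands in for the paper's full-hash whole-word binary-search dictionary; this is an illustration of the general idea, not the authors' implementation.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal sketch: a whole-word hash dictionary driving a
// forward-maximum-matching segmenter. Class and method names
// are illustrative, not taken from the paper.
public class HashWordDictionary {
    private final Set<String> words = new HashSet<>();
    private static final int MAX_WORD_LEN = 8; // assumed longest dictionary entry

    public void add(String word) {
        words.add(word);
    }

    public boolean contains(String word) {
        // One hash lookup per candidate replaces a binary search
        // over a sorted word list, reducing string comparisons.
        return words.contains(word);
    }

    // Forward maximum matching: try the longest candidate first,
    // shrinking by one character until a dictionary word is found.
    public List<String> segment(String text) {
        List<String> result = new ArrayList<>();
        int i = 0;
        while (i < text.length()) {
            int end = Math.min(i + MAX_WORD_LEN, text.length());
            String match = null;
            for (int j = end; j > i; j--) {
                String cand = text.substring(i, j);
                if (cand.length() == 1 || contains(cand)) {
                    match = cand; // single chars always match as a fallback
                    break;
                }
            }
            result.add(match);
            i += match.length();
        }
        return result;
    }

    public static void main(String[] args) {
        HashWordDictionary dict = new HashWordDictionary();
        dict.add("中文");
        dict.add("分词");
        System.out.println(dict.segment("中文分词测试")); // [中文, 分词, 测, 试]
    }
}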
Source
Microcomputer & Its Applications (《微型机与应用》)
2011, No. 18, pp. 62-64 (3 pages)
Funding
Scientific Research Youth Fund of Nanjing Institute of Technology (QKJB2009026)