Abstract
Chinese word segmentation is an important component of Chinese information processing. Some applications demand not only high accuracy but also high speed. Based on an analysis of existing segmentation algorithms, especially the fast ones, a new dictionary structure is proposed, together with a new segmentation algorithm built on it. The algorithm supports hash lookup not only on the first character of a word but also on all of its remaining characters. Theoretical analysis and experimental results show that the algorithm outperforms existing segmentation algorithms in speed.
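The abstract describes a dictionary in which every character position of a word, not just the first, is reached by a hash lookup. A minimal illustrative sketch of this idea (not the paper's actual implementation) is a nested-hash trie combined with forward maximum matching; all names below are assumptions for illustration:

```python
# Sketch of a dictionary where each character level is a hash table
# (a Python dict), so matching each successive character of a word is
# a single O(1) hash probe, as the abstract describes.

END = object()  # sentinel key marking "a word ends at this node"

def build_dict(words):
    """Build a nested-hash (trie-like) dictionary from a word list."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})  # hash probe per character
        node[END] = True
    return root

def segment(text, root):
    """Forward maximum matching over the nested-hash dictionary."""
    result, i = [], 0
    while i < len(text):
        node, longest, j = root, 0, i
        while j < len(text) and text[j] in node:  # hash probe per character
            node = node[text[j]]
            j += 1
            if END in node:
                longest = j - i  # remember the longest word seen so far
        if longest == 0:
            longest = 1  # out-of-vocabulary character: emit it alone
        result.append(text[i:i + longest])
        i += longest
    return result
```

Because each level is itself a hash table, extending a candidate match by one character costs a constant-time probe rather than a scan over a sorted word list, which is the source of the speedup the abstract claims.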
Source
《计算机工程与设计》
CSCD
PKU Core Journal (北大核心)
2007, No. 7, pp. 1716-1718 (3 pages)
Computer Engineering and Design
Keywords
Chinese word segmentation
Chinese information processing
hash
data structure
time complexity