Abstract
Chinese word segmentation in Lucene.net relies on the Analyzer class. An analysis of its five built-in analyzers (KeywordAnalyzer, StandardAnalyzer, StopAnalyzer, SimpleAnalyzer and WhitespaceAnalyzer) shows that nearly all of them split Chinese text into single characters. To handle Chinese information properly, an externally developed Chinese segmentation package must be imported. Tests of three typical packages, ChineseAnalyzer, CJKAnalyzer and IKAnalyzer, show that IKAnalyzer, which combines dictionary-based segmentation with forward and reverse (bidirectional) matching, delivers the best segmentation results.
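The dictionary-based, forward-and-reverse matching approach the abstract credits to IKAnalyzer can be sketched as classic bidirectional maximum matching. The following is an illustrative sketch only, not IKAnalyzer's actual code; the toy dictionary, sample sentence, and tie-breaking rules (fewer tokens, then fewer single-character tokens) are assumptions for demonstration.

```python
# Illustrative bidirectional maximum matching with a toy dictionary.
# This is NOT IKAnalyzer's implementation, only the general technique.

DICTIONARY = {"中文", "分词", "中文分词", "器", "分词器", "效果", "很", "好"}
MAX_LEN = max(len(w) for w in DICTIONARY)

def forward_max_match(text):
    """Scan left to right, greedily taking the longest dictionary word."""
    tokens, i = [], 0
    while i < len(text):
        for size in range(min(MAX_LEN, len(text) - i), 0, -1):
            word = text[i:i + size]
            if size == 1 or word in DICTIONARY:  # fall back to single char
                tokens.append(word)
                i += size
                break
    return tokens

def reverse_max_match(text):
    """Scan right to left, greedily taking the longest dictionary word."""
    tokens, j = [], len(text)
    while j > 0:
        for size in range(min(MAX_LEN, j), 0, -1):
            word = text[j - size:j]
            if size == 1 or word in DICTIONARY:
                tokens.append(word)
                j -= size
                break
    return list(reversed(tokens))

def bidirectional_segment(text):
    """Prefer the pass with fewer tokens, then fewer single characters."""
    fwd, rev = forward_max_match(text), reverse_max_match(text)
    if len(fwd) != len(rev):
        return fwd if len(fwd) < len(rev) else rev
    singles = lambda toks: sum(1 for t in toks if len(t) == 1)
    return fwd if singles(fwd) <= singles(rev) else rev

print(bidirectional_segment("中文分词器效果很好"))
# → ['中文', '分词器', '效果', '很', '好']
```

Here the forward pass greedily produces 中文分词 / 器, while the reverse pass recovers 中文 / 分词器; the tie-breaker prefers the reverse result because it leaves fewer stray single characters. A character-by-character splitter, by contrast, would emit nine one-character tokens for this sentence, which is the behavior the abstract reports for Lucene.net's built-in analyzers.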
Source
《郑州大学学报(理学版)》
CAS
PKU Core Journal (北大核心)
2011, No. 3, pp. 73-77 (5 pages)
Journal of Zhengzhou University:Natural Science Edition