Abstract
Word segmentation is a problem specific to non-Latin languages such as Chinese, and because ambiguity and context dependence are pervasive in Chinese word formation, it remains far from fully solved. Through a careful analysis of the regularities of segmentation ambiguity in Chinese, this paper introduces the relaxation algorithm, which seeks a globally optimal result, into disambiguation for automatic Chinese word segmentation. Drawing on contextual constraints such as collocation relations between words, together with statistical data such as word and character frequencies, a new disambiguation method for Chinese word segmentation is constructed: all candidate segmentations of an ambiguous span are first preserved, and the relaxation algorithm then resolves the ambiguities among them. Experimental results show that the method performs well in both segmentation accuracy and segmentation speed.
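The relaxation-style disambiguation summarized above can be sketched roughly as follows. This is a hypothetical illustration only, not the paper's implementation: the candidate segmentations, frequency-based priors, neighbouring-span labels, and compatibility scores are all made-up toy values standing in for the word/character statistics and collocation constraints the abstract mentions.

```python
# Hedged sketch: relaxation-style disambiguation over candidate segmentations.
# All data below (candidates, priors, compatibility scores) are toy values.

# Candidate segmentations for one ambiguous span of text, each with a prior
# derived (hypothetically) from word-frequency statistics.
candidates = {
    "A": {"words": ["发展中", "国家"], "prior": 0.4},
    "B": {"words": ["发展", "中国", "家"], "prior": 0.3},
    "C": {"words": ["发展", "中", "国家"], "prior": 0.3},
}

# Current label probabilities of a neighbouring ambiguous span (labels X, Y),
# and a made-up compatibility score between each local label and each
# neighbouring label, standing in for collocation-style contextual constraints.
neighbour_probs = {"X": 0.6, "Y": 0.4}
compatibility = {
    ("A", "X"): 0.9, ("A", "Y"): 0.5,
    ("B", "X"): 0.2, ("B", "Y"): 0.4,
    ("C", "X"): 0.5, ("C", "Y"): 0.6,
}


def relax(probs, neighbour_probs, compatibility, iterations=10):
    """Iteratively re-weight candidate probabilities by contextual support."""
    for _ in range(iterations):
        # Support for each label = expected compatibility with the neighbour.
        support = {
            label: sum(compatibility[(label, n)] * p
                       for n, p in neighbour_probs.items())
            for label in probs
        }
        # Classic relaxation update: scale by (1 + support), then renormalise.
        unnormalised = {l: probs[l] * (1.0 + support[l]) for l in probs}
        total = sum(unnormalised.values())
        probs = {l: v / total for l, v in unnormalised.items()}
    return probs


if __name__ == "__main__":
    initial = {label: c["prior"] for label, c in candidates.items()}
    final = relax(initial, neighbour_probs, compatibility)
    best = max(final, key=final.get)
    print({label: round(p, 3) for label, p in final.items()})
    print("chosen segmentation:", candidates[best]["words"])
```

In this sketch each ambiguous span keeps all of its candidate segmentations, and repeated updates let contextual support gradually concentrate the probability mass on the candidate most compatible with its neighbours, which is the general spirit of relaxation-based disambiguation.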
Source
《厦门大学学报(自然科学版)》
CAS
CSCD
Peking University Core Journals
2002, No. 6, pp. 711-714 (4 pages)
Journal of Xiamen University:Natural Science
Funding
Supported by the National Natural Science Foundation of China (Grant No. 69983006)