Abstract
Text segmentation has important applications in many fields, including information retrieval, automatic summarization, language modeling, and anaphora resolution. Text segmentation based on the PLSA model links the latent topics hidden within segments to the observable word–sentence pairs on the text surface. In the experiments, whole Chinese sentences are taken as the elementary blocks; a variety of similarity metrics and several boundary-estimation strategies are tried, and the influence of unknown (out-of-vocabulary) words repeated in adjacent sentences on the similarity values is also taken into account. The best results show a boundary-detection error rate of 6.06%, far lower than that of comparable text segmentation algorithms.
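The approach described above can be sketched in code: fit PLSA by EM on a word–sentence count matrix, represent each sentence by its topic distribution P(z|s), and place a segment boundary where the similarity between adjacent sentences' topic vectors drops. This is a minimal illustration only, not the authors' implementation — the toy data, the EM settings, and the cosine-similarity/threshold boundary rule are all assumptions made here for demonstration.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=200, seed=0):
    """Fit PLSA by EM on a (n_words, n_sents) count matrix.

    Returns P(w|z) with shape (n_words, n_topics) and
    P(z|s) with shape (n_topics, n_sents).
    """
    rng = np.random.default_rng(seed)
    n_words, n_sents = counts.shape
    p_w_z = rng.random((n_words, n_topics))
    p_w_z /= p_w_z.sum(axis=0, keepdims=True)
    p_z_s = rng.random((n_topics, n_sents))
    p_z_s /= p_z_s.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        # E-step: P(z|w,s) ∝ P(w|z) P(z|s); shape (n_words, n_sents, n_topics)
        post = p_w_z[:, None, :] * p_z_s.T[None, :, :]
        post /= post.sum(axis=2, keepdims=True) + 1e-12
        # M-step: re-estimate both distributions from n(w,s) P(z|w,s)
        weighted = counts[:, :, None] * post
        p_w_z = weighted.sum(axis=1)
        p_w_z /= p_w_z.sum(axis=0, keepdims=True) + 1e-12
        p_z_s = weighted.sum(axis=0).T
        p_z_s /= p_z_s.sum(axis=0, keepdims=True) + 1e-12
    return p_w_z, p_z_s

def boundaries(p_z_s, threshold=0.5):
    """Propose a boundary after sentence i when the cosine similarity of
    the adjacent topic vectors P(z|s_i) and P(z|s_{i+1}) falls below threshold."""
    sims = []
    for i in range(p_z_s.shape[1] - 1):
        a, b = p_z_s[:, i], p_z_s[:, i + 1]
        sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)))
    cuts = [i for i, s in enumerate(sims) if s < threshold]
    return cuts, sims

if __name__ == "__main__":
    # Toy corpus: sentences 0-2 share one vocabulary, sentences 3-5 another,
    # so the topic shift should be detected between sentences 2 and 3.
    counts = np.array([
        [2, 1, 2, 0, 0, 0],
        [1, 2, 1, 0, 0, 0],
        [0, 0, 0, 2, 1, 2],
        [0, 0, 0, 1, 2, 1],
    ], dtype=float)
    p_w_z, p_z_s = plsa(counts, n_topics=2)
    cuts, sims = boundaries(p_z_s)
    print(cuts, [round(s, 3) for s in sims])
```

Cosine similarity over P(z|s) is only one of several metrics the paper experiments with; the fixed threshold stands in for the paper's boundary-estimation strategies.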
Source
Journal of Computer Research and Development (《计算机研究与发展》)
Indexed in: EI, CSCD, Peking University Core Journals
2007, No. 2, pp. 242–248 (7 pages)
Funding
National Natural Science Foundation of China (60373056)
National Key Basic Research Program of China (973 Program) (2002CB312103)
Major Project of the Innovation Program, Institute of Software, Chinese Academy of Sciences
Keywords
text segmentation
probabilistic latent semantic analysis (PLSA)
similarity metric
boundary detection