Abstract
Web-page content extraction algorithms lack generality, and extraction from news pages in particular often misses the title, release time, and source. To address these problems, a news key-information extraction algorithm, newsExtractor, was proposed. The algorithm first preprocesses the HTML of a Web page into a set of (line number, text) pairs. Because the longest sentence belongs to the news body with extremely high probability, the extractor starts from the line containing that sentence and searches outward in both directions for the start and end of the body. The title is extracted with the longest common substring algorithm; the release time is extracted with a constructed regular expression, using line numbers as auxiliary evidence; the source is extracted from its typical presentation patterns, again aided by line numbers. Finally, a data set was built to compare extraction accuracy against the open-source package newsPaper. Experimental results show that newsExtractor outperforms newsPaper in average extraction accuracy for content, title, release time, and source, and that it is general and robust.
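The paper's implementation is not published, so the heuristics described in the abstract can only be sketched. The following minimal sketch shows three of them: locating a content seed via the longest line, matching a candidate title against page text with the classic longest-common-substring dynamic program, and extracting a release time with a date regular expression. All function names and the exact date pattern are assumptions, not the authors' code.

```python
import re

def longest_common_substring(a: str, b: str) -> str:
    """Classic O(len(a)*len(b)) DP for the longest common substring,
    as used to match a candidate title line against the page <title>."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

# Assumed date pattern covering common Chinese news formats
# (e.g. "2016-08-10", "2016年8月10日 12:00"); the paper's actual
# regular expression is not given in the abstract.
DATE_RE = re.compile(
    r"\d{4}[-/年]\d{1,2}[-/月]\d{1,2}日?(?:\s+\d{1,2}:\d{2})?"
)

def extract_release_time(lines):
    """Return the first date-like match, scanning lines in order;
    line position can serve as the auxiliary evidence the paper mentions."""
    for line in lines:
        m = DATE_RE.search(line)
        if m:
            return m.group(0)
    return None

def content_seed(lines):
    """Index of the longest line -- very likely inside the news body,
    which is the starting point for the outward boundary search."""
    return max(range(len(lines)), key=lambda i: len(lines[i]))
```

A full extractor would then expand from `content_seed` toward both ends of the line list, stopping where text density drops, but that boundary criterion is not specified in the abstract.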
Source
《计算机应用》
CSCD
Peking University Core Journal
2016, Issue 8, pp. 2082-2086, 2120 (6 pages)
Journal of Computer Applications
Funding
General Program of the National Natural Science Foundation of China (61375039)
"135" Key Project of the Computer Network Information Center, Chinese Academy of Sciences (CNIC_PY_1402)
Keywords
Web information extraction
news information extraction
Web denoising