Abstract
Chinese chunking is an important subtask in Chinese information processing. This paper presents a large margin method for Chinese chunking based on a new structural SVMs (support vector machines) model. First, a sequence labeling model is designed for the Chinese chunking problem, and the optimization objective of the discriminative sequence labeling function is formulated according to the large margin principle; the cutting plane algorithm is then applied to efficiently approximate the optimal feature parameters during training. Furthermore, an improved F1 loss function is proposed for chunk recognition. This loss function scales the F1 loss value to the actual length of each sentence, adjusting the margin accordingly and thereby introducing more effective constraint inequalities. Experiments are conducted on the Penn Chinese Treebank 4 (CTB4) dataset, where the improved F1 loss function is compared with the Hamming loss function. The results show that training with the improved F1 loss function yields better recognition results than the Hamming loss function: the overall F1 score over all chunk types is 91.61%, higher than that achieved by state-of-the-art models such as CRFs (conditional random fields) and SVMs.
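The cutting plane training mentioned above follows the standard working-set scheme for structural SVMs: repeatedly find the most violated constraint for each sentence via loss-augmented inference, add it to a working set, and re-optimize the weights. The Python sketch below illustrates that control flow on a toy tag set; the exhaustive loss-augmented search, the word-tag feature map, and the subgradient re-solve (standing in for a proper QP solver) are simplifications for illustration, not the paper's implementation.

```python
import itertools
import numpy as np

TAGS = ["B-NP", "I-NP", "O"]  # toy tag set for illustration only

def joint_feature(x, y, dim, tag_index):
    """Toy joint feature map Psi(x, y): word-tag co-occurrence counts."""
    phi = np.zeros(dim)
    for word_id, tag in zip(x, y):
        phi[word_id * len(TAGS) + tag_index[tag]] += 1.0
    return phi

def hamming_loss(y_gold, y_pred):
    """Token-level Hamming loss; the paper's improved F1 loss could be
    substituted here, since the exhaustive search below accepts any loss."""
    return float(sum(a != b for a, b in zip(y_gold, y_pred)))

def most_violated(w, x, y_gold, dim, tag_index):
    """Loss-augmented inference by exhaustive search (feasible only for toy
    inputs; a real system would use a Viterbi-style dynamic program)."""
    best, best_score = None, -np.inf
    for y in itertools.product(TAGS, repeat=len(x)):
        score = hamming_loss(y_gold, y) + w @ joint_feature(x, y, dim, tag_index)
        if score > best_score:
            best, best_score = list(y), score
    return best

def cutting_plane_train(data, vocab_size, C=1.0, eps=1e-3, max_iter=20):
    """n-slack cutting plane loop: grow a working set of most-violated
    constraints until every sentence is violated by at most eps."""
    tag_index = {t: i for i, t in enumerate(TAGS)}
    dim = vocab_size * len(TAGS)
    w = np.zeros(dim)
    working_set = []  # constraints stored as (delta_psi, loss) pairs
    for _ in range(max_iter):
        added = 0
        for x, y_gold in data:
            y_hat = most_violated(w, x, y_gold, dim, tag_index)
            delta = (joint_feature(x, y_gold, dim, tag_index)
                     - joint_feature(x, y_hat, dim, tag_index))
            loss = hamming_loss(y_gold, y_hat)
            if loss - w @ delta > eps:  # constraint violated beyond tolerance
                working_set.append((delta, loss))
                added += 1
        if added == 0:
            break  # all constraints satisfied within eps
        # Re-optimize over the working set. A faithful implementation solves
        # a QP here; plain subgradient steps keep this sketch self-contained.
        for step in range(200):
            grad = w.copy()
            for delta, loss in working_set:
                if loss - w @ delta > 0:
                    grad -= C * delta
            w -= (0.1 / (1.0 + step)) * grad
    return w
```

For instance, `cutting_plane_train([([0, 1, 2], ["B-NP", "I-NP", "O"])], vocab_size=3)` trains on a single three-word toy sentence, where words are represented by integer ids.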
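The abstract does not spell out the exact form of the improved F1 loss, so the following is a minimal sketch of one plausible reading: the per-sentence loss is 1 − F1 over chunk spans, rescaled by the sentence length so that longer sentences induce proportionally larger margins. The BIO chunk-extraction helper and the exact scaling factor are assumptions, not details taken from the paper.

```python
def extract_chunks(tags):
    """Collect (start, end, type) chunk spans from a BIO tag list."""
    chunks, start, ctype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last chunk
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and ctype != tag[2:]):
            if start is not None:
                chunks.append((start, i, ctype))
                start, ctype = None, None
        if tag.startswith("B-"):
            start, ctype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            start, ctype = i, tag[2:]  # tolerate chunks that begin with I-
    return set(chunks)

def length_scaled_f1_loss(gold_tags, pred_tags):
    """1 - F1 over chunk spans, rescaled by sentence length (assumed form)."""
    gold, pred = extract_chunks(gold_tags), extract_chunks(pred_tags)
    correct = len(gold & pred)
    if not gold and not pred:
        f1 = 1.0
    elif correct == 0:
        f1 = 0.0
    else:
        p, r = correct / len(pred), correct / len(gold)
        f1 = 2 * p * r / (p + r)
    return len(gold_tags) * (1.0 - f1)
```

Under this reading, a half-correct five-word sentence and a half-correct twenty-word sentence incur losses of 2.5 and 10.0 respectively, so the longer sentence contributes the stronger constraint during training.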
Published in
Journal of Software (《软件学报》)
Indexed in EI, CSCD, and the Peking University Core Journals (北大核心) list
2009, No. 4, pp. 870-877 (8 pages)
Funding
National Natural Science Foundation of China (Nos. 60673043, 60773173)
National High-Tech Research and Development Program of China (863 Program) (No. 2006AA01Z143)
Natural Science Foundation of Jiangsu Province (No. BK2006117)
Natural Science Foundation of the Jiangsu Higher Education Institutions (No. 07KJB520057)
Keywords
Chinese chunking
large margin
discriminative learning
loss function