Abstract
Pre-trained language models can express rich syntactic and grammatical information in sentences and can model the polysemy of words, and they are widely used in natural language processing; BERT (bidirectional encoder representations from transformers) is one such pre-trained language model. Named entity recognition methods based on fine-tuning BERT suffer from having too many trainable parameters and excessively long training times. To address this problem, a Chinese named entity recognition method based on BERT-IDCNN-CRF (BERT-iterated dilated convolutional neural network-conditional random field) is proposed. The method obtains contextual representations of characters from the BERT pre-trained language model, and then feeds the character vector sequence into the IDCNN-CRF model for training; during training the BERT parameters are kept fixed and only the IDCNN-CRF part is trained, which reduces the number of trainable parameters while preserving the modeling of polysemy. Experiments show that the model achieves an F1 score of 94.41% on the MSRA corpus, outperforming the current best Lattice-LSTM model on Chinese named entity recognition by 1.23%; compared with methods based on fine-tuning BERT, its F1 score is slightly lower but the training time is greatly reduced. Applying the model to the recognition of sensitive entities in domains such as information security and public opinion on the power-grid electromagnetic environment yields faster and more timely responses.
The pre-trained language model BERT (bidirectional encoder representations from transformers) has shown promising results in NER (named entity recognition) due to its ability to represent rich syntactic and grammatical information in sentences and the polysemy of characters. However, most existing BERT fine-tuning based models need to update a large number of model parameters, incurring expensive time costs at both the training and testing phases. To handle this problem, this work presents a novel BERT based language model for Chinese NER, named BERT-IDCNN-CRF (BERT-iterated dilated convolutional neural network-conditional random field). The proposed model uses the standard BERT model to obtain contextual representations of characters, which serve as the input to IDCNN-CRF. At the training phase, the BERT parameters in the proposed model remain unchanged, so the model reduces the number of trainable parameters while preserving the modeling of polysemy. Experimental results show that the proposed model significantly reduces training time with an acceptable loss in accuracy.
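The frozen-BERT pipeline described in the abstract (a fixed BERT encoder producing character representations, an iterated dilated CNN producing per-character tag emissions, and a CRF layer for decoding) can be illustrated with a minimal PyTorch sketch. The sketch below assumes the Hugging Face transformers package and the pytorch-crf (torchcrf) package; the model name bert-base-chinese, the filter width, the dilation rates, and the number of iterated blocks are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a BERT-IDCNN-CRF tagger with a frozen BERT encoder.
# Assumptions: Hugging Face `transformers` and `pytorch-crf` (torchcrf) are installed;
# hidden sizes, dilation rates, and block count are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF


class BertIDCNNCRF(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-chinese",
                 filters=128, dilations=(1, 1, 2), blocks=4):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        for p in self.bert.parameters():           # keep BERT fixed; train only IDCNN-CRF
            p.requires_grad = False

        hidden = self.bert.config.hidden_size
        self.project = nn.Conv1d(hidden, filters, kernel_size=1)
        # One dilated block: stacked 1-D convolutions with increasing dilation,
        # reused (iterated) `blocks` times as in an iterated dilated CNN.
        self.block = nn.ModuleList([
            nn.Conv1d(filters, filters, kernel_size=3, dilation=d, padding=d)
            for d in dilations
        ])
        self.blocks = blocks
        self.emission = nn.Linear(filters, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, input_ids, attention_mask):
        with torch.no_grad():                      # no gradients flow into BERT
            out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        x = out.last_hidden_state.transpose(1, 2)  # (batch, hidden, seq_len) for Conv1d
        x = torch.relu(self.project(x))
        for _ in range(self.blocks):               # iterate the same dilated block
            for conv in self.block:
                x = torch.relu(conv(x))
        return self.emission(x.transpose(1, 2))    # (batch, seq_len, num_tags)

    def loss(self, input_ids, attention_mask, tags):
        emissions = self._emissions(input_ids, attention_mask)
        return -self.crf(emissions, tags, mask=attention_mask.bool(), reduction="mean")

    def decode(self, input_ids, attention_mask):
        emissions = self._emissions(input_ids, attention_mask)
        return self.crf.decode(emissions, mask=attention_mask.bool())
</div>
```

Because only the projection, convolution, linear, and CRF parameters require gradients, an optimizer built over filter(lambda p: p.requires_grad, model.parameters()) updates far fewer parameters than full BERT fine-tuning, which is the source of the training-time savings reported in the abstract.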
Authors
李妮
关焕梅
杨飘
董文永
LI Ni; GUAN Huan-mei; YANG Piao; DONG Wen-yong (State Key Laboratory of Power Grid Environmental Protection, China Electric Power Research Institute, Wuhan 430074, Hubei, China; School of Computer Science, Wuhan University, Wuhan 430072, Hubei, China)
Source
《山东大学学报(理学版)》
CAS
CSCD
Peking University Core Journals (北大核心)
2020, No. 1, pp. 102-109 (8 pages)
Journal of Shandong University(Natural Science)
Funding
Science and Technology Project of the Headquarters of State Grid Corporation of China (GY71-18-009).