Abstract
In most Chinese named entity recognition models, language preprocessing focuses only on the vector representations of individual words and characters and ignores the semantic relationships between them, so polysemy cannot be resolved. The Transformer feature extractor, with its parallel computation and long-range modeling, has improved many natural language understanding tasks, but its fully connected structure makes the computational complexity quadratic in the input length, which limits its effectiveness for Chinese named entity recognition. To address these problems, a Chinese named entity recognition method based on the BSTTC (BERT-Star-Transformer-TextCNN-CRF) model is proposed. First, a BERT model pre-trained on a large-scale corpus dynamically generates the character vector sequence according to the input context. Then, a joint Star-Transformer and TextCNN model further extracts sentence features. Finally, the feature vector sequence is fed into a CRF model to obtain the final predictions. Experimental results on the MSRA Chinese corpus show that the precision, recall, and F1 score of this model are all higher than those of previous models, and its training time is about 65% shorter than that of the BERT-Transformer-CRF model.
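The abstract describes a three-stage pipeline: pre-trained BERT character embeddings, Star-Transformer plus TextCNN feature extraction, and CRF decoding. The sketch below is a minimal, hedged illustration of that architecture, not the authors' implementation. It assumes the HuggingFace transformers BERT, the third-party pytorch-crf package, illustrative kernel sizes, and a deliberately simplified StarTransformerLayer that only captures the relay/satellite attention idea behind the linear-complexity claim.

```python
# Minimal sketch of the BSTTC pipeline from the abstract (illustrative only).
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf (assumed; not the authors' code)


class StarTransformerLayer(nn.Module):
    """Simplified stand-in for one Star-Transformer layer: satellite nodes
    attend to a shared relay (star) node, which in turn attends over all
    satellites, keeping complexity linear in the sequence length. The local
    ring attention of the full Star-Transformer is omitted for brevity."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.satellite_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.relay_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, h: torch.Tensor, relay: torch.Tensor):
        # Satellites read global context broadcast by the relay node.
        relay_expanded = relay.expand(-1, h.size(1), -1)
        h, _ = self.satellite_attn(h, relay_expanded, relay_expanded)
        # Relay node re-attends over all satellites to gather global context.
        relay, _ = self.relay_attn(relay, h, h)
        return h, relay


class BSTTC(nn.Module):
    def __init__(self, num_tags: int, d_model: int = 768, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.star = StarTransformerLayer(d_model)
        # TextCNN branch: 1-D convolutions over the character sequence
        # capture local n-gram features at several window sizes.
        self.convs = nn.ModuleList(
            [nn.Conv1d(d_model, d_model, k, padding=k // 2) for k in kernel_sizes]
        )
        self.proj = nn.Linear(d_model * len(kernel_sizes), num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        # 1) Context-dependent character vectors from pre-trained BERT.
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        # 2) Star-Transformer for lightweight global interaction,
        #    followed by TextCNN for local features.
        relay = h.mean(dim=1, keepdim=True)
        h, _ = self.star(h, relay)
        conv_out = torch.cat([c(h.transpose(1, 2)) for c in self.convs], dim=1)
        emissions = self.proj(conv_out.transpose(1, 2))
        # 3) CRF scores tag transitions and decodes the optimal tag sequence.
        mask = attention_mask.bool()
        if tags is not None:
            return -self.crf(emissions, tags, mask=mask)  # training loss
        return self.crf.decode(emissions, mask=mask)      # predicted tag ids
```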
Authors
申晖
张英俊
谢斌红
赵红燕
SHEN Hui; ZHANG Ying-Jun; XIE Bin-Hong; ZHAO Hong-Yan (School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan 030024, China)
Source
《计算机系统应用》
2021, No. 6, pp. 262-270 (9 pages)
Computer Systems & Applications
Funding
Key Project of the Shanxi Provincial Key Research and Development Program (201703D111027)
Shanxi Provincial Key Research and Development Program (201803D121048, 201803D121055).