Abstract
For the Vietnamese chunk identification task, and building on an earlier statistical survey of the part-of-speech composition patterns inside Vietnamese chunks, this paper proposes two ways to integrate an attention mechanism into the Bi-LSTM+CRF model. The first is to add attention at the input layer, which allows the model to flexibly adjust the respective weights of the word embeddings and the POS feature embeddings. The second is to add a multi-head attention mechanism on top of the Bi-LSTM, which enables the model to learn a weight matrix over the Bi-LSTM outputs and to selectively focus on important information. Experimental results show that integrating attention at the input layer raises the F-value of Vietnamese chunk identification by 3.08%, and that adding multi-head attention on top of the Bi-LSTM raises it by 4.56%, confirming the effectiveness of both methods.
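The architecture described above can be illustrated with a minimal, hypothetical PyTorch sketch: per-token attention weights fuse the word and POS embeddings at the input layer, a Bi-LSTM encodes the fused sequence, and multi-head self-attention re-weights the Bi-LSTM outputs before emission scores are produced. All layer names and dimensions here are illustrative assumptions, not the authors' implementation, and the CRF decoding layer is only indicated.

```python
import torch
import torch.nn as nn

class InputAttentionFusion(nn.Module):
    """Learns per-token scalar weights for the word embedding and the POS
    feature embedding, then fuses the two by weighted concatenation."""
    def __init__(self, word_dim, pos_dim):
        super().__init__()
        self.score_word = nn.Linear(word_dim, 1)
        self.score_pos = nn.Linear(pos_dim, 1)

    def forward(self, word_emb, pos_emb):
        # word_emb: (batch, seq, word_dim); pos_emb: (batch, seq, pos_dim)
        scores = torch.cat([self.score_word(word_emb),
                            self.score_pos(pos_emb)], dim=-1)    # (B, T, 2)
        alpha = torch.softmax(scores, dim=-1)                    # per-token weights
        fused = torch.cat([alpha[..., :1] * word_emb,
                           alpha[..., 1:] * pos_emb], dim=-1)    # (B, T, word+pos)
        return fused

class ChunkTagger(nn.Module):
    """Bi-LSTM encoder with multi-head self-attention over its outputs; the
    emission scores would normally feed a CRF layer (omitted in this sketch)."""
    def __init__(self, word_dim=100, pos_dim=25, hidden=128, heads=4, n_tags=9):
        super().__init__()
        self.fusion = InputAttentionFusion(word_dim, pos_dim)
        self.bilstm = nn.LSTM(word_dim + pos_dim, hidden,
                              bidirectional=True, batch_first=True)
        self.mha = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.emit = nn.Linear(2 * hidden, n_tags)

    def forward(self, word_emb, pos_emb):
        x = self.fusion(word_emb, pos_emb)
        h, _ = self.bilstm(x)        # (B, T, 2*hidden)
        a, _ = self.mha(h, h, h)     # self-attention over the Bi-LSTM states
        return self.emit(a)          # emission scores for a CRF decoder

# Usage with random tensors standing in for a batch of 2 sentences of length 7.
model = ChunkTagger()
emissions = model(torch.randn(2, 7, 100), torch.randn(2, 7, 25))
print(emissions.shape)  # torch.Size([2, 7, 9])
```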
Authors
王闻慧
毕玉德
雷树杰
WANG Wenhui; BI Yude; LEI Shujie (Luoyang Division, Information Engineering University, Luoyang, Henan 471003, China; College of Foreign Language and Literature, Fudan University, Shanghai 200433, China)
Source
《中文信息学报》
CSCD
Peking University Core Journals (北大核心)
2019, No. 12, pp. 91-100 (10 pages)
Journal of Chinese Information Processing