Abstract
Spoken language understanding (SLU) consists of two sub-tasks: intent detection and semantic slot filling. Although existing joint modeling methods share model parameters and apply the result of intent detection to slot filling, they do not consider the dependencies between adjacent labels in the slot filling task. This paper adopts a joint recognition model based on bidirectional long short-term memory (BLSTM). After the hidden states are obtained by the BLSTM, an attention mechanism is added to each of the two tasks, and the result of intent detection is applied to slot filling through a slot-gated mechanism. A conditional random field (CRF) layer is further added to the slot filling task; by modeling the dependencies between adjacent labels, it makes the annotation results more accurate. Experiments on query utterances from the flight information domain show that the model achieves 93.20% accuracy on intent detection and a 99.28% F1 score on slot filling, and its performance is also verified on the SMP Chinese human-machine dialogue technology evaluation dataset. The results show that the proposed model outperforms other joint recognition models.
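The role of the CRF layer described in the abstract can be illustrated with Viterbi decoding over BIO slot tags. The sketch below is not the paper's implementation; the tag set, emission scores, and transition scores are all hypothetical stand-ins (the emissions would come from the BLSTM encoder in the actual model). It shows how learned transition scores between adjacent labels can keep a slot span consistent even when a single token's emission score would, on its own, favor a different tag.

```python
# Minimal Viterbi decoding over BIO slot tags, illustrating how a CRF layer
# uses tag-transition scores so that the predicted label sequence respects
# dependencies between adjacent tags (e.g. I-city should follow B-city).
# Emission scores stand in for per-token outputs of a BLSTM encoder.

TAGS = ["O", "B-city", "I-city"]

# Hypothetical transition scores: transitions[i][j] scores tag j following tag i.
# The large negative entry strongly penalizes I-city directly after O.
transitions = [
    [0.0,  0.5, -10.0],   # from O
    [0.0, -1.0,   2.0],   # from B-city
    [0.0, -1.0,   1.0],   # from I-city
]

def viterbi(emissions, transitions):
    """Return the highest-scoring tag sequence for a list of emission rows."""
    n_tags = len(transitions)
    # score[t] = best path score ending in tag t at the current position;
    # back[i][t] = best previous tag for tag t at position i+1.
    score = list(emissions[0])
    back = []
    for row in emissions[1:]:
        new_score, pointers = [], []
        for t in range(n_tags):
            best_prev = max(range(n_tags),
                            key=lambda p: score[p] + transitions[p][t])
            new_score.append(score[best_prev] + transitions[best_prev][t] + row[t])
            pointers.append(best_prev)
        score = new_score
        back.append(pointers)
    # Trace the best path backwards from the best final tag.
    best = max(range(n_tags), key=lambda t: score[t])
    path = [best]
    for pointers in reversed(back):
        best = pointers[best]
        path.append(best)
    return [TAGS[t] for t in reversed(path)]

# Emissions for a toy utterance "fly to new york": the last token slightly
# prefers O on its own, but the transition scores keep the city span intact.
emissions = [
    [2.0, 0.1, 0.1],  # fly
    [2.0, 0.1, 0.1],  # to
    [0.2, 1.5, 0.3],  # new
    [0.9, 0.2, 0.8],  # york
]
print(viterbi(emissions, transitions))
# → ['O', 'O', 'B-city', 'I-city']
```

Greedy per-token decoding would tag "york" as O (0.9 > 0.8), breaking the city span; the transition score from B-city to I-city is what lets the CRF recover the consistent sequence, which is the label-dependency effect the abstract attributes to the CRF layer.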
Authors
侯丽仙
李艳玲
林民
李成城
HOU Lixian; LI Yanling; LIN Min; LI Chengcheng (College of Computer Science and Technology, Inner Mongolia Normal University, Hohhot 010022, China)
Source
《计算机科学与探索》
CSCD
Peking University Core Journal
2020, No. 9, pp. 1545-1553 (9 pages)
Journal of Frontiers of Computer Science and Technology
Funding
National Natural Science Foundation of China (Nos. 61562068, 61806103)
Mongolian Language Informatization Special Support Sub-project of the Inner Mongolia Ethnic Affairs Commission (No. MW-2014-MGYWXXH-01)
Young Innovative and Entrepreneurial Talents Project of the Inner Mongolia Autonomous Region "Grassland Talents" Program
Inner Mongolia Science and Technology Planning Project (No. 21K01724)
Graduate Research Innovation Fund of Inner Mongolia Normal University (No. CXJJS18112)