Journal Articles
3 articles found
1. Number Entities Recognition in Multiple Rounds of Dialogue Systems (Cited by: 1)
Authors: Shan Zhang, Bin Cao, Yueshen Xu, Jing Fan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2021, Issue 4, pp. 309-323 (15 pages).
As a representative technique in natural language processing (NLP), named entity recognition is used in many tasks, such as dialogue systems, machine translation and information extraction. In dialogue systems, there is a common case for named entity recognition where many entities are composed of numbers and are segmented across different places. For example, in multiple rounds of dialogue systems, a phone number is likely to be divided into several parts, because the phone number is usually long and is emphasized. In this paper, an entity consisting of numbers is named a number entity. The discontinuous positions of number entities result from many reasons; we find two in real-world dialogue systems. The first is the repetitive confirmation of different components of a number entity, and the second is the interception of mood words. The extraction of number entities is quite useful in many tasks, such as user information completion and service request correction. However, existing entity extraction methods cannot extract entities consisting of discontinuous entity blocks. To address these problems, we propose a comprehensive method for number entity recognition, which is capable of extracting number entities in multiple rounds of dialogue systems. We conduct extensive experiments on a real-world dataset, and the experimental results demonstrate the high performance of our method.
Keywords: natural language processing, dialogue systems, named entity recognition, number entity, discontinuous entity blocks
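The abstract describes number entities whose digit blocks are scattered across dialogue turns (e.g., a phone number repeated and confirmed piece by piece). The following is a minimal illustrative sketch of that merging idea, not the paper's actual method: it collects digit blocks from consecutive turns and joins runs of blocks until a candidate reaches a plausible length. The function name and the `min_len` threshold are hypothetical.

```python
import re

def merge_number_blocks(turns, min_len=7):
    """Naive sketch: gather digit blocks scattered across dialogue
    turns and join consecutive blocks into candidate number entities.
    Illustrative only; the threshold and joining rule are assumptions."""
    blocks = []
    for turn in turns:
        blocks.extend(re.findall(r"\d+", turn))
    candidates = []
    # Join runs of consecutive blocks until the concatenation is long
    # enough to be a plausible full number (e.g., a phone number).
    for i in range(len(blocks)):
        joined = ""
        for j in range(i, len(blocks)):
            joined += blocks[j]
            if len(joined) >= min_len:
                candidates.append(joined)
                break
    return candidates

turns = ["My number is 138", "yes, 138", "then 0013", "and 8888"]
candidates = merge_number_blocks(turns)
```

A real system would also need to handle the two failure sources named in the abstract: repeated confirmations (duplicate blocks) and mood words interrupting a number, which this naive concatenation does not disambiguate.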
2. Evaluating Neural Dialogue Systems Using Deep Learning and Conversation History
Authors: Inshirah Ali AlMutairi, Ali Mustafa Qamar. Journal on Artificial Intelligence, 2022, Issue 3, pp. 155-165 (11 pages).
Neural dialogue models play a leading role in the increasingly popular construction of conversational agents. A common criticism of these systems is that they seldom understand or use conversation data efficiently. Advances in deep learning have increased the use of neural models for dialogue modeling; in recent years, deep learning (DL) models have achieved significant success in various tasks, and many dialogue systems also employ DL techniques. The primary issues in building a dialogue system are gaining insight into natural language use, supporting comprehension, and assessing conversations. In this paper, we mainly focus on DL-based dialogue systems. The problem addressed in this work is dialogue supervision, which determines how the framework responds to the recognized needs of the user. The dataset utilized in this research is extracted from movies. The models implemented are the seq2seq model, transformers, and GPT, together with word embeddings and NLP techniques. The results obtained after implementation showed that all three models produced accurate results. In the modern world, the demand for dialogue systems is greater than ever; it is therefore essential to take the necessary steps to build effective dialogue systems.
Keywords: seq2seq, CNN, dialogue systems, NLP, RNN, transformer, GPT
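The abstract mentions training seq2seq-style models on a dataset extracted from movies. A common preprocessing step for such corpora is pairing consecutive lines into (context, response) examples; the sketch below shows that step under the assumption that consecutive lines form an exchange (real corpora need conversation-boundary handling). The function name is illustrative, not from the paper.

```python
def build_dialogue_pairs(lines):
    """Sketch: turn an ordered list of movie lines into (context,
    response) training pairs for a seq2seq dialogue model.
    Assumes consecutive lines belong to the same conversation."""
    return [(lines[i], lines[i + 1]) for i in range(len(lines) - 1)]

lines = ["Hello there.", "Hi! How are you?", "Fine, thanks."]
pairs = build_dialogue_pairs(lines)
```

Each pair then serves as an (encoder input, decoder target) example for a seq2seq or transformer model, or as a prompt/continuation pair for a GPT-style model.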
3. EVA2.0: Investigating Open-domain Chinese Dialogue Systems with Large-scale Pre-training (Cited by: 2)
Authors: Yuxian Gu, Jiaxin Wen, Hao Sun, Yi Song, Pei Ke, Chujie Zheng, Zheng Zhang, Jianzhu Yao, Lei Liu, Xiaoyan Zhu, Minlie Huang. Machine Intelligence Research (EI, CSCD), 2023, Issue 2, pp. 207-219 (13 pages).
Large-scale pre-training has shown remarkable performance in building open-domain dialogue systems. However, previous works mainly focus on showing and evaluating the conversational performance of the released dialogue model, ignoring the discussion of some key factors towards a powerful human-like chatbot, especially in Chinese scenarios. In this paper, we conduct extensive experiments to investigate these under-explored factors, including data quality control, model architecture designs, training approaches, and decoding strategies. We propose EVA2.0, a large-scale pre-trained open-domain Chinese dialogue model with 2.8 billion parameters, and will make our models and code publicly available. Automatic and human evaluations show that EVA2.0 significantly outperforms other open-source counterparts. We also discuss the limitations of this work by presenting some failure cases and pose some future research directions for large-scale Chinese open-domain dialogue systems.
Keywords: natural language processing, deep learning (DL), large-scale pre-training, dialogue systems, Chinese open-domain conversational model
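One of the factors the EVA2.0 abstract investigates is decoding strategies. A widely used example of such a strategy is nucleus (top-p) sampling; the sketch below implements its filtering step over a toy next-token distribution. This is a generic illustration of the technique, not EVA2.0's specific decoding configuration.

```python
def top_p_filter(probs, p=0.9):
    """Nucleus (top-p) filtering: keep the smallest set of tokens whose
    cumulative probability reaches p, then renormalize. The surviving
    distribution is what the decoder samples from at each step."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for tok, pr in ranked:
        kept.append((tok, pr))
        total += pr
        if total >= p:
            break
    z = sum(pr for _, pr in kept)  # renormalize over kept tokens
    return {tok: pr / z for tok, pr in kept}

# Toy next-token distribution for a Chinese dialogue model.
probs = {"好": 0.5, "的": 0.3, "呀": 0.15, "?": 0.05}
filtered = top_p_filter(probs, p=0.8)
```

Because the kept set adapts to the shape of the distribution, top-p sampling truncates unlikely tokens on peaked distributions while preserving diversity on flat ones, which is why it is a standard knob to study in open-domain dialogue generation.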