Abstract: Algorithms for detecting dialogue deviations from a dialogue topic in an agent- and ontology-based dialogue management system (AODMS) are proposed. In AODMS, agents and ontologies are introduced to represent domain knowledge, and general algorithms that model dialogue phenomena in different domains can be realized, since complex relationships between knowledge in different domains can be described by ontologies. An evaluation of the dialogue management system with the deviation-judging algorithms on 736 utterances shows that AODMS is able to talk about the given topic consistently and answer 86.6% of the utterances, whereas only 72.1% of the utterances can be responded to correctly without the deviation-judging module.
Funding: Supported by the Ministerial Level Advanced Research Foundation (404050301.4) and the National Natural Science Foundation of China (60605015).
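The abstract above describes deviation judging only at a high level. As a purely illustrative sketch (not the paper's actual algorithm), topic deviation can be judged by measuring how far an utterance's concepts lie from the current topic in an ontology graph; the concepts, edges, and threshold below are invented assumptions.

```python
# Hypothetical sketch: judging topic deviation with a small ontology graph.
# The concepts, relations, and distance threshold are illustrative only.
from collections import deque

# Toy ontology: each concept maps to the set of directly related concepts.
ONTOLOGY = {
    "travel":  {"hotel", "flight", "sightseeing"},
    "hotel":   {"travel", "room", "price"},
    "flight":  {"travel", "price", "schedule"},
    "weather": {"temperature", "forecast"},
}

def concept_distance(src, dst):
    """Breadth-first-search distance between two concepts (inf if unconnected)."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in ONTOLOGY.get(node, ()):
            if nxt == dst:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def deviates_from_topic(utterance_concepts, topic, max_distance=2):
    """The utterance is judged off-topic if none of its concepts is close to the topic."""
    return all(concept_distance(c, topic) > max_distance for c in utterance_concepts)

print(deviates_from_topic({"hotel", "price"}, "travel"))  # False: stays on topic
print(deviates_from_topic({"temperature"}, "travel"))     # True: deviation detected
```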
Abstract: EasyGuide, a 3D virtual-human spoken dialogue system oriented to the scenic-spot introduction task, is introduced. The system comprises five modules: natural language processing, a task-domain knowledge database, dialogue management, voice processing, and 3D virtual-human text-to-visual speech synthesis. For the first module, dictionary construction, sentence analysis, and semantic representation are illustrated in detail. A tree-structured knowledge database is designed for the task domain, and a novel framework based on keyword analysis and context constraints is proposed for dialogue management. For the voice processing module, a software development kit that performs speech recognition and synthesis is introduced briefly. In the last module, 3D viseme synthesis is explained with examples and a text-driven facial animation system is presented. Evaluation results show that the system achieves satisfactory performance.
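To make the tree-structured knowledge database and keyword-based dialogue management concrete, here is a minimal sketch under invented assumptions: the node names, answers, and keywords are placeholders, not EasyGuide's actual data or code.

```python
# Hypothetical sketch of a tree-structured scenic-spot knowledge base queried by
# keyword matching under a context constraint (the node currently under discussion).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    name: str
    answer: str
    keywords: set
    children: list = field(default_factory=list)

# A tiny domain tree: a scenic spot with two sub-attractions (all names invented).
tree = Node("summer_palace", "The Summer Palace is a royal garden in Beijing.",
            {"summer palace", "introduction"},
            [Node("kunming_lake", "Kunming Lake covers about three quarters of the park.",
                  {"lake", "kunming"}),
             Node("long_corridor", "The Long Corridor is famous for its painted beams.",
                  {"corridor", "painting"})])

def find_answer(node: Node, words: set) -> Optional[str]:
    """Depth-first search for a node whose keywords overlap the user's words."""
    if node.keywords & words:
        return node.answer
    for child in node.children:
        hit = find_answer(child, words)
        if hit:
            return hit
    return None

# Context constraint: the search starts from the node the dialogue is focused on.
context = tree
print(find_answer(context, {"lake", "big"}))
```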
Abstract: Human-computer dialogue systems provide a natural-language interface between humans and computers. They are in wide demand in network information services, intelligent companion robots, and so on. A human-computer dialogue system typically consists of three parts: Natural Language Understanding (NLU), Dialogue Management (DM), and Natural Language Generation (NLG). Each part has several subtasks, each of which has received considerable attention, and many improvements have been achieved on each subtask individually. However, systems built in the traditional pipeline fashion, where the subtasks are assembled sequentially, suffer from problems such as error accumulation and propagation and difficulty in domain transfer. Research on jointly modeling several subtasks within one part, or across different parts, has therefore advanced greatly in recent years, especially with the rapid development of joint models based on deep neural networks. There is even some work aiming to integrate all subtasks of a dialogue system into a single model, namely end-to-end models. This paper first introduces the two basic frameworks of current dialogue systems and gives a brief survey of recent advances on the various subtasks, and then focuses on joint models for multiple dialogue subtasks. We review several different joint models, including the integration of several subtasks inside NLU or NLG, joint modeling across NLG and DM, and joint modeling through NLU, DM, and NLG. Both the advantages and the problems of these joint models are discussed. We consider that joint models, or end-to-end models, will be one important trend in the development of human-computer dialogue systems.
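The pipeline architecture that this survey contrasts with joint and end-to-end models can be sketched as three chained functions; the semantic frame, system acts, and templates below are invented placeholders, and a joint model would collapse one or more of these stages into a single network.

```python
# Minimal, hypothetical sketch of the traditional NLU -> DM -> NLG pipeline.
def nlu(utterance: str) -> dict:
    """Map text to a semantic frame (intent + slots)."""
    frame = {"intent": "inform", "slots": {}}
    if "cheap" in utterance:
        frame["slots"]["price"] = "cheap"
    if "restaurant" in utterance:
        frame["intent"] = "find_restaurant"
    return frame

def dm(frame: dict, state: dict) -> str:
    """Update the dialogue state from the frame and choose a system act."""
    state.update(frame["slots"])
    if frame["intent"] == "find_restaurant" and "area" not in state:
        return "request_area"
    return "offer"

def nlg(act: str) -> str:
    """Render the chosen system act as text."""
    templates = {"request_area": "Which part of town do you prefer?",
                 "offer": "Here is a place that matches your request."}
    return templates[act]

state = {}
# Any error made by nlu() propagates unchecked into dm() and nlg() -- the
# accumulation problem that motivates jointly modeling adjacent stages.
print(nlg(dm(nlu("a cheap restaurant please"), state)))
```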
Abstract: Statistical dialogue management is the core of cognitive spoken dialogue systems (SDS) and has attracted great research interest. In recent years, SDS with the ability to evolve have been of particular interest and have become the cutting edge of SDS research. Dialogue state tracking (DST) is the process of estimating the distribution over dialogue states at each dialogue turn, given the previous interaction history, and it plays an important role in statistical dialogue management. To provide a common testbed for advancing DST research, the international Dialogue State Tracking Challenges (DSTC) have been organised and well attended by the major SDS groups in the world. This paper reviews recent progress on rule-based and statistical approaches during the challenges. In particular, it focuses on evolvable DST approaches for dialogue domain extension. The two primary aspects of evolution, semantic parsing and the tracker, are discussed, and semantic enhancement and a DST framework that bridges rule-based and statistical models are introduced in detail. By effectively incorporating prior knowledge of dialogue state transitions together with the ability to be data-driven, the new framework supports reliable domain extension with little data and can continuously improve as more data become available, which makes it an excellent candidate for DST evolution. Experiments show that the evolvable DST approaches achieve state-of-the-art performance and outperform all previously submitted trackers in the third DSTC.
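For readers unfamiliar with DST, the turn-level belief update can be illustrated with a simple rule-based tracker of the kind used as a DSTC baseline; the slot names, values, and confidence scores below are invented for illustration and are not the framework or results reported in the paper.

```python
# Hedged sketch of a rule-based belief update over per-slot value distributions,
# driven by SLU hypotheses with confidence scores (all numbers are illustrative).
from collections import defaultdict

def update_belief(belief: dict, slu_hyps: list) -> dict:
    """Shift probability mass toward each hypothesised (slot, value) pair."""
    new_belief = defaultdict(float, belief)
    for slot, value, confidence in slu_hyps:
        # Discount existing mass for this slot, then add the new evidence.
        for (s, v), p in list(new_belief.items()):
            if s == slot:
                new_belief[(s, v)] = p * (1.0 - confidence)
        new_belief[(slot, value)] += confidence
    return dict(new_belief)

belief = {}                                                    # empty state at turn 0
belief = update_belief(belief, [("food", "chinese", 0.6)])
belief = update_belief(belief, [("food", "thai", 0.7), ("area", "north", 0.9)])
print(belief)   # most of the 'food' mass has moved from 'chinese' to 'thai'
```

A statistical tracker would learn the update from data instead of hard-coding it; the framework surveyed above aims to combine such prior, rule-like transition knowledge with data-driven improvement.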