Abstract: The weapon and equipment operational requirement analysis (WEORA) is a necessary condition to win a future war, and the acquisition of knowledge about weapons and equipment is a great challenge within it. The main challenge is that existing weapons and equipment data lack structured knowledge representation, and knowledge navigation based on natural language cannot efficiently support the WEORA. To solve the above problem, this research proposes a method based on question answering (QA) over a weapons and equipment knowledge graph (WEKG) to construct and navigate the knowledge related to weapons and equipment in the WEORA. The method first constructs the WEKG and then builds a neural network-based QA system over the WEKG by means of semantic parsing for knowledge navigation. Finally, the method is evaluated and a chatbot built on the QA system is developed for the WEORA. The proposed method performs well in the accuracy and efficiency of searching for target knowledge and can effectively assist the WEORA.
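A minimal sketch of the semantic-parsing QA idea described above: a question is parsed into an (entity, relation) pair, which is then looked up in a triple store. The patterns, entities, and triples below are purely hypothetical toy data, not drawn from the WEKG.

```python
# Toy semantic parsing over a knowledge graph: map a question to an
# (entity, relation) pair, then answer by triple lookup.
# All entity names, relations, and facts here are fictional examples.
import re
from typing import Optional

TRIPLES = {
    ("Falcon-X", "manufacturer"): "Acme Aerospace",
    ("Falcon-X", "max_speed"): "Mach 2.0",
}

PATTERNS = [
    (re.compile(r"who makes the (?P<e>[\w-]+)", re.I), "manufacturer"),
    (re.compile(r"how fast is the (?P<e>[\w-]+)", re.I), "max_speed"),
]

def answer(question: str) -> Optional[str]:
    """Parse the question into (entity, relation) and query the graph."""
    for pattern, relation in PATTERNS:
        match = pattern.search(question)
        if match:
            return TRIPLES.get((match.group("e"), relation))
    return None

print(answer("Who makes the Falcon-X?"))  # -> Acme Aerospace
```

A neural QA system would replace the regular-expression patterns with learned entity and relation classifiers, but the lookup step over the graph stays the same.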
Funding: Supported by the National Natural Science Foundation of China (61976160, 61906137, 61976158, 62076184, 62076182) and the Shanghai Science and Technology Plan Project (21DZ1204800).
Abstract: Background: External knowledge representations play an essential role in knowledge-based visual question answering for better understanding complex scenarios in the open world. Recent entity-relationship embedding approaches are deficient in representing some complex relations, resulting in a lack of topic-related knowledge and redundancy in topic-irrelevant information. Methods: To this end, we propose MKEAH: Multimodal Knowledge Extraction and Accumulation on Hyperplanes. To ensure that the lengths of the feature vectors projected onto the hyperplane are comparable and to filter out sufficient topic-irrelevant information, two losses are proposed to learn the triplet representations from complementary views: a range loss and an orthogonal loss. To interpret the capability of extracting topic-related knowledge, we present the Topic Similarity (TS) between topics and entity relations. Results: Experimental results demonstrate the effectiveness of hyperplane embedding for knowledge representation in knowledge-based visual question answering. Our model outperformed state-of-the-art methods by 2.12% and 3.24% on two challenging knowledge-request datasets, OK-VQA and KRVQA, respectively. Conclusions: The clear advantages of our model in TS show that using hyperplane embedding to represent multimodal knowledge improves its ability to extract topic-related knowledge.
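To make the hyperplane-embedding idea concrete, the sketch below projects head and tail embeddings onto a relation-specific hyperplane (TransH-style) and adds an illustrative orthogonality penalty. The exact range and orthogonal losses of MKEAH are not reproduced; the margin-free translation score and the penalty form are generic stand-ins.

```python
# Illustrative hyperplane (TransH-style) triplet embedding with an
# orthogonality penalty; not MKEAH's exact objective.
import torch
import torch.nn.functional as F

def project(v: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Project embedding v onto the hyperplane with unit normal w."""
    w = F.normalize(w, dim=-1)
    return v - (v * w).sum(-1, keepdim=True) * w

def triplet_score(h, r, t, w_r):
    """Translation-based score computed on the relation-specific hyperplane."""
    return ((project(h, w_r) + r - project(t, w_r)) ** 2).sum(-1)

def orthogonal_penalty(r, w_r, eps: float = 1e-3):
    """Encourage the relation vector to lie (nearly) inside the hyperplane."""
    w_r = F.normalize(w_r, dim=-1)
    return F.relu((r * w_r).sum(-1) ** 2 / (r ** 2).sum(-1) - eps ** 2).sum()

# Toy batch of 4 triplets with 64-dimensional embeddings.
h, r, t, w_r = (torch.randn(4, 64, requires_grad=True) for _ in range(4))
loss = triplet_score(h, r, t, w_r).mean() + orthogonal_penalty(r, w_r)
loss.backward()
```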
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003, 2023YFSY0026, 2023YFH0004).
Abstract: In the field of natural language processing (NLP), various pre-trained language models have appeared in recent years, and question answering systems have gained significant attention. However, as algorithms, data, and computing power advance, the issue of increasingly larger models and a growing number of parameters has surfaced. Consequently, model training has become more costly and less efficient. To enhance the efficiency and accuracy of the training process while reducing the model volume, this paper proposes PAL-BERT, a first-order pruning model based on the ALBERT model, designed around the characteristics of question-answering (QA) systems and language models. First, a first-order network pruning method based on the ALBERT model is designed, and the PAL-BERT model is formed. Then, the parameter optimization strategy of the PAL-BERT model is formulated, and the Mish function is used as the activation function instead of ReLU to improve performance. Finally, comparison experiments with the traditional deep learning models TextCNN and BiLSTM confirm that PAL-BERT is a pruning-based model compression method that can significantly reduce training time and optimize training efficiency. Compared with traditional models, PAL-BERT significantly improves performance on NLP tasks.
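The two named ingredients are easy to illustrate: the Mish activation, x * tanh(softplus(x)), and a first-order pruning score. PAL-BERT's exact pruning criterion is not reproduced here; the |weight * gradient| saliency below is a common first-order Taylor estimate used as a stand-in.

```python
# Mish activation and a first-order (Taylor) pruning score, shown on a toy layer.
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    """Mish activation: x * tanh(softplus(x))."""
    return x * torch.tanh(F.softplus(x))

def first_order_importance(weight: torch.Tensor) -> torch.Tensor:
    """First-order saliency of each weight, valid after loss.backward() has run."""
    return (weight * weight.grad).abs()

# Toy usage: prune the 50% least important weights of a linear layer.
layer = torch.nn.Linear(16, 16)
loss = mish(layer(torch.randn(8, 16))).sum()
loss.backward()
scores = first_order_importance(layer.weight)
mask = (scores >= scores.flatten().median()).float()
layer.weight.data *= mask  # zero out low-saliency weights
```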
Funding: Supported by the Sichuan Science and Technology Program (2023YFSY0026, 2023YFH0004).
Abstract: Recent advancements in natural language processing have given rise to numerous pre-trained language models in question-answering systems. However, with the constant evolution of algorithms, data, and computing power, the increasing size and complexity of these models have led to higher training costs and reduced efficiency. This study aims to minimize the inference time of such models while maintaining computational performance. It proposes DPAL-BERT, a distillation model for PAL-BERT that employs knowledge distillation, using the PAL-BERT model as the teacher to train two student models: DPAL-BERT-Bi and DPAL-BERT-C. The dataset is enhanced through techniques such as masking, replacement, and n-gram sampling to optimize knowledge transfer. The experimental results show that the distilled models greatly outperform models trained from scratch. In addition, although the distilled models exhibit a slight decrease in performance compared with PAL-BERT, they reduce inference time to just 0.25% of the original. This demonstrates the effectiveness of the proposed approach in balancing model performance and efficiency.
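A generic response-based knowledge-distillation objective captures the teacher/student setup described above. The temperature, mixing weight, and loss form below are standard choices, not necessarily those used for DPAL-BERT.

```python
# Response-based distillation: soft-target KL against the teacher plus
# hard-label cross-entropy on the gold answers.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend softened teacher targets with the usual supervised loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits over 5 answer classes.
student = torch.randn(8, 5, requires_grad=True)
teacher = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
distillation_loss(student, teacher, labels).backward()
```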
Abstract: Research on knowledge base question answering (KBQA) for complex semantics and complex syntax has proliferated, but most of it presupposes that the topic entity of the question is already known and pays insufficient attention to the multiple intents and multiple entities within a question, even though identifying the core entity of a question is key to understanding natural language. To address this problem, a KBQA model that introduces a core-entity attention score is proposed. Based on an attention mechanism and attention-enhancement techniques, the model evaluates the importance of each recognized entity mention to obtain a mention attention score, removes potential distractors, and captures the core entity of the user's question, thereby resolving the semantic understanding of multi-entity, multi-intent questions. In addition, the evaluation results are introduced as importance weights into subsequent question-answering reasoning. Comparative experiments against models such as KVMem, GraftNet, and PullNet were conducted on the English MetaQA dataset, a multi-entity-question MetaQA dataset, and a multi-entity-question HotpotQA dataset. The results show that, for multi-entity questions, the proposed model achieves better results on evaluation metrics such as Hits@n, precision, and recall.
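A minimal sketch of the mention-scoring step described above: attention weights are computed over recognized entity mentions and reused as importance weights downstream. The dot-product scoring and tensor shapes are illustrative assumptions, not the paper's exact architecture.

```python
# Score entity mentions against the question and reuse the weights later.
import torch
import torch.nn.functional as F

def mention_attention(question_vec, mention_vecs):
    """Score each mention by dot-product attention against the question."""
    scores = mention_vecs @ question_vec        # (num_mentions,)
    return F.softmax(scores, dim=0)             # mention attention weights

question_vec = torch.randn(128)
mention_vecs = torch.randn(3, 128)              # three recognized mentions
weights = mention_attention(question_vec, mention_vecs)
core_mention = int(weights.argmax())            # most likely core entity
weighted_context = weights @ mention_vecs       # reused in later QA reasoning
```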
Abstract: Over the last couple of decades, community question-answering sites (CQAs) have been a topic of much academic interest. Scholars have often leveraged traditional machine learning (ML) and deep learning (DL) to explore the ever-growing volume of content that CQAs engender. To clarify the current state of the CQA literature that has used ML and DL, this paper reports a systematic literature review. The goal is to summarise and synthesise the major themes of CQA research related to (i) questions, (ii) answers and (iii) users. The final review included 133 articles. Dominant research themes include question quality, answer quality, and expert identification. In terms of datasets, some of the most widely studied platforms include Yahoo! Answers, Stack Exchange and Stack Overflow. The scope of most articles was confined to just one platform, with few cross-platform investigations. Articles with ML outnumber those with DL. Nonetheless, the use of DL in CQA research is on an upward trajectory. A number of research directions are proposed.
Abstract: Expert Recommendation (ER) aims to identify domain experts with high expertise and willingness to provide answers to questions in Community Question Answering (CQA) web services. How to model questions and users in the heterogeneous content network is critical to this task. Most traditional methods focus on modeling questions and users based on the textual content left in the community while ignoring the structural properties of heterogeneous CQA networks, and they often suffer from textual data sparsity. Recent approaches take advantage of structural proximities between nodes and attempt to fuse the textual content of nodes for modeling. However, they often fail to distinguish nodes' personalized preferences and consider the textual content of only some of the nodes in network embedding learning, while ignoring the semantic relevance of nodes. In this paper, we propose a novel framework that jointly considers structural proximity relations and textual semantic relevance to model users and questions more comprehensively. Specifically, we learn topology-based embeddings through a hierarchical attentive network learning strategy, in which the proximity information and the personalized preferences of nodes are encoded and preserved. Meanwhile, we utilize each node's textual content and the text correlation between adjacent nodes to build content-based embeddings through a meta-context-aware skip-gram model. In addition, the user's relative answer quality is incorporated to promote ranking performance. Experimental results show that the proposed framework consistently and significantly outperforms state-of-the-art baselines on three real-world datasets by combining deep semantic understanding with structural feature learning. The performance of the proposed work is analyzed in terms of MRR, P@K, and MAP and is shown to be more advanced than existing methodologies.
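The content-based embedding rests on a skip-gram objective over node contexts. The sketch below is a plain skip-gram-with-negative-sampling loss where contexts would be drawn from graph neighbors; the shapes, sampling, and the lack of meta-context conditioning are simplifying assumptions relative to the paper's model.

```python
# Skip-gram with negative sampling over node ids, as a simplified stand-in
# for the meta-context-aware skip-gram component.
import torch
import torch.nn.functional as F

num_nodes, dim = 1000, 64
center_emb = torch.nn.Embedding(num_nodes, dim)
context_emb = torch.nn.Embedding(num_nodes, dim)

def sgns_loss(center, context, num_negatives: int = 5):
    """Pull the positive (center, context) pair together, push negatives apart."""
    c = center_emb(center)                                             # (B, d)
    pos = context_emb(context)                                         # (B, d)
    neg = context_emb(torch.randint(0, num_nodes,
                                    (center.size(0), num_negatives)))  # (B, k, d)
    pos_score = F.logsigmoid((c * pos).sum(-1))
    neg_score = F.logsigmoid(-(neg @ c.unsqueeze(-1)).squeeze(-1)).sum(-1)
    return -(pos_score + neg_score).mean()

centers = torch.randint(0, num_nodes, (32,))
contexts = torch.randint(0, num_nodes, (32,))   # e.g., sampled graph neighbors
sgns_loss(centers, contexts).backward()
```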
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2020R1G1A1100493).
Abstract: Recently, pre-trained language representation models such as bidirectional encoder representations from transformers (BERT) have been performing well in commonsense question answering (CSQA). However, these models do not directly use the explicit information of external knowledge sources. To augment this, additional methods such as the knowledge-aware graph network (KagNet) and the multi-hop graph relation network (MHGRN) have been proposed. In this study, we propose to use the latest pre-trained language model, a lite bidirectional encoder representations from transformers (ALBERT), with a knowledge graph information extraction technique. We also propose applying a novel method, schema graph expansion, to recent language models. We then analyze the effect of applying knowledge graph-based knowledge extraction techniques to recent pre-trained language models and confirm that schema graph expansion is effective to some extent. Furthermore, we show that our proposed model achieves better performance than the existing KagNet and MHGRN models on the CommonsenseQA dataset.
Abstract: Research on domain-specific question-answering technology has become important with the increasing demand for intelligent question-answering systems. This paper proposes a domain question-answering algorithm based on the CLIP mechanism to improve the accuracy and efficiency of interaction. First, relevant technologies in the question-answering field are reviewed. Then, the question-answering model based on the CLIP mechanism is presented, including its design, implementation, and optimization. The construction process of the domain-specific knowledge graph is also described, including graph design, data collection and processing, and graph construction methods. The paper compares the performance of the proposed algorithm with the classic question-answering models BiDAF, R-Net, and XLNet on a military-domain dataset. The experimental results show that the proposed algorithm achieves advanced performance, with an F1 score of 84.6% on the constructed military knowledge graph test set, which is at least 1.5% higher than the other models. We conduct a detailed analysis of the experimental results, which illustrates the algorithm's advantages in accuracy and efficiency, as well as its potential for further improvement. These findings demonstrate the practical application potential of the proposed algorithm in the military domain.
Abstract: In contrast with research on new models, little attention has been paid to the impact of low- or high-quality data feeding a dialogue system. The present paper makes the first attempt to fill this gap by extending our previous work on question-answering (QA) systems, investigating the effect of misspellings on QA agents and how context changes can enhance the responses. Instead of using large language models trained on huge datasets, we propose a method that enhances the model's score by modifying only the quality and structure of the data fed to the model. It is important to identify the features that affect agent performance, because a high rate of wrong answers can make students lose interest in using the QA agent as an additional tool for distance learning. The results demonstrate that the accuracy of the proposed context simplification exceeds 85%. These findings shed light on the importance of question data quality and context complexity as key dimensions of a QA system. In conclusion, the experimental results on questions and contexts show that controlling and improving the various aspects of data quality around a QA system can significantly enhance its robustness and performance.
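One simple way to study the effect of misspellings is to inject them synthetically into questions before feeding them to the agent. The perturbation rate and the character operations in the sketch below are arbitrary choices for illustration, not the protocol used in the study.

```python
# Illustrative misspelling injector for probing QA robustness.
import random

def inject_misspellings(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Randomly swap adjacent characters or drop a character in some words."""
    rng = random.Random(seed)
    words = []
    for word in text.split():
        if len(word) > 3 and rng.random() < rate:
            i = rng.randrange(1, len(word) - 1)
            if rng.random() < 0.5:                       # swap two characters
                word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
            else:                                        # drop one character
                word = word[:i] + word[i + 1:]
        words.append(word)
    return " ".join(words)

print(inject_misspellings("what is the capital of france", rate=0.5))
```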
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1G1A1003312) and the Ministry of Education (NRF-2021R1I1A3052815).
Abstract: Analyzing Research and Development (R&D) trends is important because it can influence future decisions regarding R&D direction. In typical trend analysis, topic or technology taxonomies are employed to compute the popularity of topics or codes over time. Although this is simple and effective, the taxonomies are difficult to maintain because new technologies are introduced rapidly. Therefore, recent studies exploit deep learning to extract pre-defined targets such as problems and solutions. Based on recent advances in question answering (QA) using deep learning, we adopt a multi-turn QA model to extract problems and solutions from Korean R&D reports. Building on previous research, we use the reports directly and analyze the difficulties of handling them with a QA-style approach to information extraction (IE) designed for sentence-level benchmark datasets. After investigating the characteristics of Korean R&D reports, we propose a model to deal with multiple and repeated appearances of targets in the reports. Accordingly, the proposed model includes an algorithm with two novel modules and a prompt. The newly proposed methodology focuses on reformulating a question without a static template or pre-defined knowledge. We show the effectiveness of the proposed model using a Korean R&D report dataset that we constructed, and present an in-depth analysis of the benefits of the multi-turn QA model.
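A minimal sketch of the multi-turn extraction loop: the first turn extracts the "problem" span, and its text is folded into the second question to locate the "solution". The English report text, the question wording, and the off-the-shelf extractive QA model are illustrative assumptions, not the paper's Korean-language setup.

```python
# Two-turn extractive QA: extract the problem, then reformulate the
# second question around it to extract the solution.
from transformers import pipeline

qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

report = ("Battery life degraded quickly at low temperatures. "
          "The project addressed this by developing a heated enclosure "
          "with adaptive power control.")

turn1 = qa(question="What problem does the report address?", context=report)
problem = turn1["answer"]

# Second turn: the question is reformulated using the first turn's answer.
turn2 = qa(question=f"How was the problem '{problem}' solved?", context=report)
print(problem, "->", turn2["answer"])
```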
Abstract: In this work, a best-answer recommendation model is proposed for a Question Answering (QA) system, and a Community Question Answering system was subsequently developed based on the model. The system applies the Brouwer Fixed Point Theorem to prove the existence of the desired voter scoring function, and the Normalized Google Distance (NGD) to measure closeness between words before an answer is suggested to users. Answers are ranked according to their Fixed-Point Score (FPS) for each question, and the highest-scored answer is chosen as the FPS Best Answer (BA). For each question asked by a user, the system applies NGD to check whether similar or related questions with a best answer have already been asked and stored in the database. When no similar or related question with a best answer is found in the database, the Brouwer fixed point is used to calculate the best answer from the pool of answers to the question, and that best answer is stored in the NGD data table for recommendation purposes. The system was implemented using the PHP scripting language, MySQL for database management, jQuery, and Apache, and was evaluated using standard metrics: Reciprocal Rank, Mean Reciprocal Rank (MRR) and Discounted Cumulative Gain (DCG). The system eliminates the long waiting time faced by askers in a community question answering system and can be used for research and learning purposes.
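The NGD measure used above has a standard closed form based on page-hit counts. A small helper illustrating it, with made-up counts, is shown below (the system itself is implemented in PHP; Python is used here only for illustration).

```python
# Normalized Google Distance from hit counts: f_x and f_y are the numbers of
# pages containing each term, f_xy the pages containing both, n the total
# number of indexed pages. The counts in the example call are made up.
import math

def ngd(f_x: float, f_y: float, f_xy: float, n: float) -> float:
    """NGD(x, y) = (max(log f_x, log f_y) - log f_xy) / (log n - min(log f_x, log f_y))."""
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Smaller values indicate that the two terms co-occur more often (are "closer").
print(ngd(f_x=120_000, f_y=80_000, f_xy=25_000, n=5_000_000_000))
```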
Funding: This work was supported by the Sichuan Science and Technology Program (2021YFQ0003).
Abstract: Visual question answering (VQA) has attracted more and more attention in computer vision and natural language processing. Scholars are committed to studying how to better integrate image features and text features to achieve better results in VQA tasks. Analyzing all features may cause information redundancy and a heavy computational burden, and an attention mechanism is an effective way to address this problem. However, using a single attention mechanism may lead to incomplete coverage of the features. This paper improves on existing attention methods and proposes a hybrid attention mechanism that combines a spatial attention mechanism with a channel attention mechanism. Because the attention mechanism can cause loss of the original features, a small portion of the image features is added back as compensation. For the attention over text features, a self-attention mechanism is introduced, and the internal structural features of sentences are strengthened to improve the overall model. The results show that the attention mechanism and feature compensation add 6.1% accuracy to the multimodal low-rank bilinear pooling network.
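A generic spatial-plus-channel attention block with a small residual "compensation" term conveys the hybrid design described above. The layer sizes, pooling choices, and compensation weight below are illustrative, not the paper's exact configuration.

```python
# Hybrid channel + spatial attention over image features, with a residual
# compensation term that re-adds a fraction of the original features.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8, comp: float = 0.1):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())
        self.comp = comp                      # weight of the feature compensation

    def forward(self, x):                     # x: (B, C, H, W) image features
        ca = self.channel_fc(x.mean(dim=(2, 3)))[:, :, None, None]  # channel weights
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        sa = self.spatial_conv(pooled)                               # spatial weights
        return x * ca * sa + self.comp * x    # attended features + compensation

feats = torch.randn(2, 64, 14, 14)
print(HybridAttention(64)(feats).shape)       # torch.Size([2, 64, 14, 14])
```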
Abstract: The original intention of visual question answering (VQA) models is to infer the answer from the information in the visual image that is relevant to the question text, but many VQA models yield answers that are biased by prior knowledge, especially language priors. This paper proposes a mitigation model called language priors mitigation-VQA (LPM-VQA) for the language prior problem in VQA models, which divides language priors into positive and negative language priors. Different network branches are used to capture and process the different priors in order to mitigate them. A dynamically changing language prior feedback objective function is designed using the intermediate results of some modules in the VQA model. The weight of the loss value for each answer is dynamically set according to the strength of its language priors to balance its proportion in the total VQA loss and further mitigate the language priors. The model does not depend on a particular baseline VQA architecture and can be configured like a plug-in to improve the performance of most existing VQA models. The experimental results show that the proposed model is general and effective, achieving state-of-the-art accuracy on the VQA-CP v2 dataset.
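One common way to realize per-answer loss re-weighting is to drive it from a question-only ("prior") branch: answers the prior branch is already confident about contribute less to the loss. The weighting function below is an illustrative assumption, not LPM-VQA's exact objective.

```python
# Per-example loss re-weighting from a question-only prior branch.
import torch
import torch.nn.functional as F

def prior_weighted_loss(vqa_logits, prior_logits, labels, gamma: float = 2.0):
    """Down-weight answers whose language prior is already strong."""
    prior_conf = F.softmax(prior_logits, dim=-1).gather(1, labels[:, None]).squeeze(1)
    weights = (1.0 - prior_conf) ** gamma            # strong prior -> small weight
    per_example = F.cross_entropy(vqa_logits, labels, reduction="none")
    return (weights * per_example).mean()

vqa_logits = torch.randn(16, 3000, requires_grad=True)  # full VQA branch
prior_logits = torch.randn(16, 3000)                    # question-only branch
labels = torch.randint(0, 3000, (16,))
prior_weighted_loss(vqa_logits, prior_logits, labels).backward()
```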
Abstract: Visual Question Answering (VQA) has sparked widespread interest as a crucial task in integrating vision and language. VQA primarily uses attention mechanisms to associate relevant visual regions with input questions so that questions can be answered effectively. The detection-based features extracted by an object detection network capture the visual attention distribution over predetermined detection boxes and provide object-level insights, answering questions about foreground objects more effectively. However, they cannot answer questions about background regions that lack detection boxes, because they miss the fine-grained details that are the strength of grid-based features. In this paper, we propose a Dual-Level Feature Embedding (DLFE) network, which integrates grid-based and detection-based image features in a unified architecture to realize the complementary advantages of both. Specifically, in DLFE, a novel Dual-Level Self-Attention (DLSA) module is first proposed to mine the intrinsic properties of the two feature types, where Positional Relation Attention (PRA) is designed to model position information. Then, we propose Feature Fusion Attention (FFA) to address the semantic noise caused by fusing the two features, and construct an alignment graph to enhance and align the grid and detection features. Finally, we use co-attention to learn the interactive features of the image and the question and answer questions more accurately. Our method improves significantly over the baseline, increasing accuracy from 66.01% to 70.63% on the test-std split of VQA 1.0 and from 66.24% to 70.91% on the test-std split of VQA 2.0.