Funding: Supported by the National Key R&D Program of China (2022QY2000-02).
Abstract: Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually struggle to learn the complex semantic information in news texts, resulting in unsatisfactory recommendations. Besides, these traditional methods favor active users with rich historical behaviors and cannot effectively address the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that integrates Large Language Models (LLMs) and Knowledge Graphs (KGs) into traditional methods. To capture the contextual information of news text, we use the powerful text-understanding ability of LLMs to generate news representations rich in semantic information; the generated representations are then used to enhance the news encoding of traditional methods. In addition, multi-hop relationships among news entities are mined and the structural information of news is encoded using the KG, thus alleviating the challenge of the long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on metrics such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLMs and KGs in our framework establishes a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
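As a loose illustration of enhancing a traditional news encoding with an LLM-generated representation, the sketch below blends the two vectors. The function name and the fixed random projection are hypothetical stand-ins for LKPNR's learned components, not the paper's implementation:

```python
import numpy as np

def fuse_news_representation(trad_vec: np.ndarray,
                             llm_vec: np.ndarray,
                             alpha: float = 0.5) -> np.ndarray:
    """Blend a traditional news encoding with an LLM semantic
    embedding via a convex combination (hypothetical sketch)."""
    # A fixed random projection stands in for a learned layer that
    # maps the high-dimensional LLM embedding into the encoder space.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((llm_vec.shape[0], trad_vec.shape[0]))
    llm_mapped = llm_vec @ proj
    # L2-normalize both views so neither dominates the blend.
    trad_n = trad_vec / np.linalg.norm(trad_vec)
    llm_n = llm_mapped / np.linalg.norm(llm_mapped)
    return alpha * trad_n + (1.0 - alpha) * llm_n

# A 64-dim traditional encoding fused with a 1024-dim LLM embedding.
fused = fuse_news_representation(np.ones(64), np.ones(1024))
# fused.shape == (64,)
```

In a real system the projection and blending weight would be trained jointly with the recommender rather than fixed.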
Funding: Supported by the National Natural Science Foundation of China (No. 62122077) and the CAS Project for Young Scientists in Basic Research, China (No. YSBR-040).
Abstract: Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has drawn significant attention to how knowledge can be acquired, maintained, updated, and used by language models. Despite the enormous amount of related studies, there is still a lack of a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes, which may prevent us from further understanding the connections between current progress or recognizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods and investigating how knowledge circulates as it is built, maintained, and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions.
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2020R1G1A1100493).
Abstract: Recently, pre-trained language representation models such as bidirectional encoder representations from transformers (BERT) have been performing well in commonsense question answering (CSQA). However, these models do not directly use explicit information from knowledge sources existing outside. To augment this, additional methods such as the knowledge-aware graph network (KagNet) and the multi-hop graph relation network (MHGRN) have been proposed. In this study, we propose to use the latest pre-trained language model, a lite bidirectional encoder representations from transformers (ALBERT), with a knowledge graph information extraction technique. We also propose applying a novel method, schema graph expansion, to recent language models. We then analyze the effect of applying knowledge graph-based knowledge extraction techniques to recent pre-trained language models and confirm that schema graph expansion is effective to some extent. Furthermore, we show that our proposed model achieves better performance than the existing KagNet and MHGRN models on the CommonsenseQA dataset.
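To give a flavor of what schema graph expansion involves, here is a minimal sketch that grows a concept graph outward from seed nodes by a fixed number of hops. The function name and the adjacency-dict representation are illustrative assumptions, not the paper's implementation:

```python
def expand_schema_graph(graph: dict, seeds, hops: int = 1) -> set:
    """Breadth-first expansion of a schema graph around seed
    concepts (a simplified stand-in for schema graph expansion)."""
    frontier = set(seeds)
    expanded = set(seeds)
    for _ in range(hops):
        nxt = set()
        for node in frontier:
            nxt.update(graph.get(node, ()))  # follow outgoing edges
        frontier = nxt - expanded            # keep only newly reached nodes
        expanded |= nxt
    return expanded

toy_graph = {"question_concept": ["related"], "related": ["answer_concept"]}
one_hop = expand_schema_graph(toy_graph, ["question_concept"], hops=1)
# one_hop == {"question_concept", "related"}
```

Increasing `hops` pulls in more distant concepts, which is the trade-off such expansion methods tune: broader context versus noisier graphs.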
Abstract: This paper presents a reference methodology for process orchestration that accelerates the development of Large Language Model (LLM) applications by integrating knowledge bases, API access, and deep web retrieval. By incorporating structured knowledge, the methodology enhances LLMs’ reasoning abilities, enabling more accurate and efficient handling of complex tasks. Integration with open APIs allows LLMs to access external services and real-time data, expanding their functionality and application range. Through real-world case studies, we demonstrate that this approach significantly improves the efficiency and adaptability of LLM-based applications, especially for time-sensitive tasks. Our methodology provides practical guidelines for developers to rapidly create robust and adaptable LLM applications capable of navigating dynamic information environments and performing effectively across diverse tasks.
Funding: Supported by the National Natural Science Foundation of China (No. 62272100), the Consulting Project of the Chinese Academy of Engineering (No. 2023-XY-09), and the Fundamental Research Funds for the Central Universities.
Abstract: Previous works employ Large Language Models (LLMs) like GPT-3 for knowledge-based Visual Question Answering (VQA). We argue that the inferential capacity of an LLM can be enhanced through knowledge injection. Although methods that utilize knowledge graphs to enhance LLMs have been explored in various tasks, they have limitations, such as the possibility of failing to retrieve the required knowledge. In this paper, we introduce a novel framework for knowledge-based VQA titled “Prompting Large Language Models with Knowledge-Injection” (PLLMKI). We use a vanilla VQA model to inspire the LLM and further enhance the LLM with knowledge injection. Unlike earlier approaches, we adopt the LLM itself for knowledge enhancement instead of relying on knowledge graphs. Furthermore, we leverage open LLMs, incurring no additional costs. In comparison to existing baselines, our approach achieves accuracy improvements of over 1.3 and 1.7 points on two knowledge-based VQA datasets, namely OK-VQA and A-OKVQA, respectively.
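The knowledge-injection step can be pictured as prompt construction: the vanilla VQA model's candidate answer is folded into the LLM prompt as auxiliary knowledge. The template below is a hypothetical illustration; PLLMKI's actual prompt format is not reproduced here:

```python
def build_knowledge_injected_prompt(question: str,
                                    caption: str,
                                    vqa_hint: str) -> str:
    """Compose an LLM prompt that injects a baseline VQA model's
    candidate answer as auxiliary knowledge (illustrative template)."""
    return (
        f"Image description: {caption}\n"
        f"A baseline VQA model suggests the answer: {vqa_hint}\n"
        f"Question: {question}\n"
        "Using the description and the suggestion as hints, answer concisely:"
    )

prompt = build_knowledge_injected_prompt(
    "What sport is being played?",
    "A player swings a racket on a clay court.",
    "tennis",
)
```

The LLM is free to override the injected hint, which is what distinguishes this style of soft knowledge injection from simply echoing the baseline model's output.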
Abstract: Humankind's understanding of the world is fundamentally linked to our perception and cognition, with human languages serving as one of the major carriers of world knowledge. In this vein, Large Language Models (LLMs) like ChatGPT epitomize the pre-training of extensive, sequence-based world knowledge into neural networks, facilitating the processing and manipulation of this knowledge in a parametric space. This article explores large models through the lens of "knowledge". We initially investigate the role of symbolic knowledge such as Knowledge Graphs (KGs) in enhancing LLMs, covering aspects like knowledge-augmented language models, structure-inducing pre-training, knowledgeable prompts, structured CoT, knowledge editing, semantic tools for LLMs, and knowledgeable AI agents. Subsequently, we examine how LLMs can boost traditional symbolic knowledge bases, encompassing aspects like using the LLM as a KG builder and controller, structured knowledge pre-training, and LLM-enhanced symbolic reasoning. Considering the intricate nature of human knowledge, we advocate for the creation of Large Knowledge Models (LKMs), specifically engineered to manage a diversified spectrum of knowledge structures. This promising undertaking would entail several key challenges, such as disentangling the knowledge base from language models, cognitive alignment with human knowledge, integration of perception and cognition, and building large commonsense models for interacting with the physical world, among others. We finally propose a five-"A" principle to distinguish the concept of the LKM.
Funding: Supported by the Beijing Natural Science Foundation, China (Grant No. L241083), the National Natural Science Foundation of China (Grant No. 72293605), the Science Fund Program for Excellent Young Scientists, China (Overseas), and the Anhui Provincial Science and Technology Major Project, China (Grant No. 2023z020006).
Abstract: Addressing carbon neutrality presents a multifaceted challenge, necessitating collaboration across various disciplines, fields, and societal stakeholders. With the increasing urgency to mitigate climate change, there is a crucial need for innovative approaches in communication and education to enhance societal understanding and engagement. Large-scale language models like ChatGPT emerge as transformative tools in the AI era, offering the potential to revolutionize how we approach the economic, technological, social, and environmental issues of achieving carbon neutrality. However, the full potential of these models for carbon neutrality is yet to be realized, hindered by limitations in providing detailed, localized, and expert-level insights across an expansive spectrum of subjects. To bridge these gaps, this paper introduces an innovative framework that integrates local knowledge with LLMs, aiming to markedly enhance the depth, accuracy, and regional relevance of the information provided. The effectiveness of this framework is examined from government, corporate, and community perspectives. The integration of local knowledge with LLMs not only enriches the AI's comprehension of local specificities but also guarantees up-to-date information that is crucial for addressing the specific concerns and questions about carbon neutrality raised by a broad array of stakeholders. Overall, the proposed framework shows significant potential for enhancing societal comprehension of and participation in carbon neutrality.
Abstract: In the context of power generation companies, vast amounts of specialized data and expert knowledge have been accumulated. However, challenges such as data silos and fragmented knowledge hinder the effective utilization of this information. This study proposes a novel framework for intelligent Question-and-Answer (Q&A) systems based on Retrieval-Augmented Generation (RAG) to address these issues. The system efficiently acquires domain-specific knowledge by leveraging external databases, including Relational Databases (RDBs) and graph databases, without additional fine-tuning of Large Language Models (LLMs). Crucially, the framework integrates a Dynamic Knowledge Base Updating Mechanism (DKBUM) and a Weighted Context-Aware Similarity (WCAS) method to enhance retrieval accuracy and mitigate inherent limitations of LLMs, such as hallucinations and lack of specialization. The proposed DKBUM dynamically adjusts knowledge weights within the database, ensuring that the most recent and relevant information is utilized, while WCAS refines the alignment between queries and knowledge items through enhanced context understanding. Experimental validation demonstrates that the system can generate timely, accurate, and context-sensitive responses, making it a robust solution for managing complex business logic in specialized industries.
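As a rough intuition for what a weighted, context-aware similarity might compute (the abstract does not specify WCAS's internals, so this is an assumed simplification), one can weight each embedding dimension by its contextual importance before taking cosine similarity:

```python
import numpy as np

def weighted_similarity(query: np.ndarray,
                        item: np.ndarray,
                        weights: np.ndarray) -> float:
    """Cosine similarity over context-weighted dimensions --
    an assumed simplification, not the paper's exact WCAS."""
    q = query * weights
    k = item * weights
    denom = np.linalg.norm(q) * np.linalg.norm(k) + 1e-12
    return float(q @ k / denom)

# Identical vectors remain maximally similar under any weighting.
sim = weighted_similarity(np.array([1.0, 1.0]),
                          np.array([1.0, 1.0]),
                          np.array([1.0, 2.0]))
# sim ≈ 1.0
```

In this simplified view, DKBUM's role would correspond to updating the `weights` vector as the knowledge base evolves.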
Funding: Supported by the Defense Threat Reduction Agency (DTRA) under Grant No. HDTRA12110012, with Dr. Richard Fry as the Program Officer, and partially supported by the Air Force Office of Scientific Research (AFOSR) under Grant No. FA9550-24-1-0017, with Dr. Chiping Li as the Program Officer.
Abstract: This research explores the integration of large language models (LLMs) into scientific data assimilation, focusing on combustion science as a case study. Leveraging foundational models integrated with a Retrieval-Augmented Generation (RAG) framework, the study introduces an approach to process diverse combustion research data spanning experimental studies, simulations, and literature. The multifaceted nature of combustion research underscores the critical role of knowledge processing in navigating and extracting valuable information from a vast and diverse pool of sources. The developed approach minimizes computational and economic expenses while optimizing data privacy and accuracy. It incorporates prompt engineering and offline open-source LLMs, offering users autonomy in selecting base models. The study provides a thorough examination of text segmentation strategies, conducts comparative studies between LLMs, and explores various optimized prompts to demonstrate the effectiveness of the framework. By incorporating an external vector database, the framework outperforms a conventional LLM in generating accurate responses and constructing robust arguments. Additionally, the study investigates optimized prompt templates for the efficient extraction of information from scientific literature. Furthermore, we present a targeted scaling study to quantify the algorithmic performance of the framework as the number of prompt tokens increases. The research addresses concerns related to hallucinations and false research articles by introducing a custom workflow with a detection algorithm to filter out inaccuracies. Despite identified areas for improvement, the framework consistently delivers accurate domain-specific responses with minimal human oversight. The prompt-agnostic approach introduced holds promise for future improvements. The study underscores the significance of integrating LLMs and knowledge processing techniques in scientific research, providing a foundation for advancements in data assimilation and utilization.
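The retrieval core of such a RAG pipeline reduces to nearest-neighbor search over document embeddings. The toy function below (a sketch only; real systems delegate this to an external vector database) returns the indices of the top-k documents by cosine similarity:

```python
import numpy as np

def retrieve_top_k(query_vec, doc_vecs, k: int = 2) -> list:
    """Rank documents by cosine similarity to the query and return
    the indices of the k best matches (toy RAG retrieval step)."""
    docs = np.asarray(doc_vecs, dtype=float)
    q = np.asarray(query_vec, dtype=float)
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k].tolist()

# Query most similar to document 0, then document 2.
hits = retrieve_top_k([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
# hits == [0, 2]
```

The retrieved passages are then prepended to the LLM prompt, which is how an external knowledge store grounds the model's answers without fine-tuning.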
Funding: Supported by the High-Level University Construction Special Project of Guangdong Province, China, 2019 (No. 5041700175) and the New Engineering Research and Practice Project of the Ministry of Education, China (No. E-RGZN20201036).
Abstract: Open Relation Extraction (ORE) is the task of extracting semantic relations from a text document. Current ORE systems have significantly improved their efficiency in obtaining Chinese relations compared with conventional systems, which depend heavily on feature engineering or syntactic parsing. However, these ORE systems do not use robust neural networks such as pre-trained language models to take advantage of large-scale unstructured data effectively. In response to this issue, a new system entitled Chinese Open Relation Extraction with Knowledge Enhancement (CORE-KE) is presented in this paper. The CORE-KE system employs a pre-trained language model (with the support of a Bidirectional Long Short-Term Memory (BiLSTM) layer and a Masked Conditional Random Field (Masked CRF) layer) on unstructured data in order to improve Chinese open relation extraction. Entity descriptions in Wikidata and additional knowledge (in terms of triple facts) extracted from Chinese ORE datasets are used to fine-tune the pre-trained language model. In addition, syntactic features are adopted in the training stage of the CORE-KE system for knowledge enhancement. Experimental results of the CORE-KE system on two large-scale datasets of open Chinese entities and relations demonstrate that the CORE-KE system is superior to other ORE systems. The F1-scores of the CORE-KE system on the two datasets show relative improvements of 20.1% and 1.3%, respectively, compared with benchmark ORE systems. The source code is available at https://github.com/cjwen15/CORE-KE.