Journal Articles
621 articles found
1. LKPNR: Large Language Models and Knowledge Graph for Personalized News Recommendation Framework
Authors: Hao Chen, Runfeng Xie, Xiangyang Cui, Zhou Yan, Xin Wang, Zhanwei Xuan, Kai Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4283-4296 (14 pages)
Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually struggle to learn and capture the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Moreover, these methods favor active users with rich historical behavior and cannot effectively address the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that integrates Large Language Models (LLM) and Knowledge Graphs (KG) into traditional methods. To learn the contextual information of news text, the LLM's powerful text understanding ability is used to generate news representations rich in semantic information, which then enhance the news encoding of traditional methods. In addition, multi-hop relationships among news entities are mined and the structural information of news is encoded with the KG, alleviating the challenge of the long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation metrics such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework establishes a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
Keywords: large language models; news recommendation; knowledge graphs (KG)
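The abstract above pairs an LLM-derived semantic news embedding with a KG-derived structural one. As a minimal illustration of that idea (not the paper's actual LKPNR architecture; the dimensions and the gated fusion are assumptions), a news encoder might combine the two channels like this:

```python
import torch
import torch.nn as nn

class FusedNewsEncoder(nn.Module):
    """Minimal sketch: fuse an LLM text embedding with a KG entity embedding.

    Dimensions and the gating scheme are illustrative assumptions,
    not the LKPNR paper's architecture.
    """
    def __init__(self, llm_dim=768, kg_dim=128, out_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(llm_dim, out_dim)
        self.kg_proj = nn.Linear(kg_dim, out_dim)
        self.gate = nn.Linear(2 * out_dim, out_dim)

    def forward(self, llm_emb, kg_emb):
        t = self.text_proj(llm_emb)        # semantic channel from the LLM
        g = self.kg_proj(kg_emb)           # structural channel from the KG
        gate = torch.sigmoid(self.gate(torch.cat([t, g], dim=-1)))
        return gate * t + (1 - gate) * g   # gated fusion of the two views

encoder = FusedNewsEncoder()
news_vec = encoder(torch.randn(4, 768), torch.randn(4, 128))
print(news_vec.shape)  # torch.Size([4, 256])
```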
2. The Life Cycle of Knowledge in Big Language Models: A Survey (cited by 1)
Authors: Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun. Machine Intelligence Research (EI, CSCD), 2024, Issue 2, pp. 217-238 (22 pages)
Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has raised significant attention about how knowledge can be acquired, maintained, updated, and used by language models. Despite the enormous amount of related studies, there is still a lack of a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes, which may prevent us from further understanding the connections between current progress or realizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods and investigating how knowledge circulates when it is built, maintained, and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions.
Keywords: pre-trained language model; knowledge acquisition; knowledge representation; knowledge probing; knowledge editing; knowledge application
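Of the periods this survey covers, knowledge probing is the easiest to demonstrate concretely. A minimal cloze-style probe, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, looks like this:

```python
from transformers import pipeline

# Cloze-style probe: does the PLM "know" a fact without any fine-tuning?
probe = pipeline("fill-mask", model="bert-base-uncased")

for result in probe("The capital of France is [MASK].", top_k=3):
    print(f"{result['token_str']:>10}  p={result['score']:.3f}")
```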
3. ALBERT with Knowledge Graph Encoder Utilizing Semantic Similarity for Commonsense Question Answering (cited by 1)
Authors: Byeongmin Choi, YongHyun Lee, Yeunwoong Kyung, Eunchan Kim. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 4, pp. 71-82 (12 pages)
Recently, pre-trained language representation models such as bidirectional encoder representations from transformers (BERT) have been performing well in commonsense question answering (CSQA). However, these models do not directly use the explicit information of external knowledge sources. To augment this, additional methods such as the knowledge-aware graph network (KagNet) and the multi-hop graph relation network (MHGRN) have been proposed. In this study, we propose to use the latest pre-trained language model, a lite bidirectional encoder representations from transformers (ALBERT), with a knowledge graph information extraction technique. We also propose applying a novel method, schema graph expansion, to recent language models. We then analyze the effect of applying knowledge graph-based knowledge extraction techniques to recent pre-trained language models and confirm that schema graph expansion is effective to some extent. Furthermore, we show that our proposed model achieves better performance than the existing KagNet and MHGRN models on the CommonsenseQA dataset.
Keywords: commonsense reasoning; question answering; knowledge graph; language representation model
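To make the schema-graph-expansion step concrete, here is a toy sketch of one plausible reading: grow the question's concept set by a few hops over a commonsense KG before encoding it. The adjacency list and hop logic are illustrative assumptions, not the paper's implementation:

```python
# Toy commonsense KG as an adjacency list (invented for illustration).
TOY_KG = {
    "bird":   ["animal", "fly", "wing"],
    "fly":    ["air", "travel"],
    "animal": ["living_thing"],
}

def expand_schema_graph(concepts, kg, hops=1):
    """Return the concept set expanded by `hops` hops of KG neighbours."""
    frontier, expanded = set(concepts), set(concepts)
    for _ in range(hops):
        frontier = {n for c in frontier for n in kg.get(c, [])} - expanded
        expanded |= frontier
    return expanded

print(expand_schema_graph({"bird"}, TOY_KG, hops=2))
```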
4. Efficient Large Language Model Application Development: A Case Study of Knowledge Base, API, and Deep Web Search Integration
Authors: Xiangyu Wang, Yan Tan, Tao Yang, Meng Yuan, Shaohan Wang, Min Chen, Feiyang Ren, Zijian Zhang, Yuqi Shao. Journal of Computer and Communications, 2024, Issue 12, pp. 171-200 (30 pages)
This paper presents a reference methodology for process orchestration that accelerates the development of Large Language Model (LLM) applications by integrating knowledge bases, API access, and deep web retrieval. By incorporating structured knowledge, the methodology enhances LLMs' reasoning abilities, enabling more accurate and efficient handling of complex tasks. Integration with open APIs allows LLMs to access external services and real-time data, expanding their functionality and application range. Through real-world case studies, we demonstrate that this approach significantly improves the efficiency and adaptability of LLM-based applications, especially for time-sensitive tasks. Our methodology provides practical guidelines for developers to rapidly create robust and adaptable LLM applications capable of navigating dynamic information environments and performing effectively across diverse tasks.
Keywords: large language model; knowledge base; API integration; web retrieval; application development
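As a rough sketch of the orchestration pattern this paper describes, routing a query to a knowledge base, an open API, or deep web search before prompting the LLM might look as follows; every helper below is an illustrative stub, not the paper's code:

```python
# Illustrative stubs standing in for a real KB, API client, and web search.
def search_knowledge_base(q: str) -> str:
    return f"[KB passage matching '{q}']"

def call_open_api(q: str) -> str:
    return f"[live API payload for '{q}']"

def deep_web_search(q: str) -> str:
    return f"[web snippets for '{q}']"

def retrieve_context(query: str) -> str:
    if any(w in query for w in ("price", "weather", "today")):
        return call_open_api(query)          # time-sensitive -> open API
    if "policy" in query:
        return search_knowledge_base(query)  # stable internal facts -> KB
    return deep_web_search(query)            # everything else -> deep web

query = "What is the weather today in Boston?"
prompt = f"Context: {retrieve_context(query)}\nQuestion: {query}\nAnswer:"
print(prompt)  # feed this to any chat-completion endpoint
```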
5. Integration of a Resource-Oriented Vocabulary with Knowledge-Oriented Vocabulary Systems
Authors: 秦健 (Jian Qin), 陈江萍 (Jiangping Chen). 大学图书馆学报 (Journal of Academic Libraries) (CSSCI, Peking University Core), 2002, Issue 2, pp. 2-8 (7 pages)
Web information gateways face three major challenges: intuitive vocabularies for describing content, the logical structure of vocabulary systems, and rich interrelationships between different vocabulary structures. This paper attempts to address these challenges as encountered by the Gateway to Educational Materials (GEM), initiated by the US National Library of Education. GEM's resource-oriented vocabulary defines the scope and subclasses of educational resources; however, it lacks interrelationships between subject categories and between subject categories and keywords. A survey of GEM users showed that this lack of semantic association negatively affects the system's retrieval effectiveness. By contrast, many knowledge-oriented vocabulary systems contain semantic associations and knowledge-representing structures. This paper reports the results of the first phase of the GEM semantics project, in which the authors added semantic mappings to GEM's controlled vocabulary by analyzing its structure and characteristics. Based on the semantic mapping experiments, two models are proposed to integrate resource-oriented and knowledge-oriented vocabulary systems. The Element-Attribute-Value (EAV) model focuses on resource types and can be conveniently expressed with document type definitions; the semantic hierarchy model treats subject terms according to their semantic meanings and relationships. These two integration models can serve as theoretical frameworks for vocabulary construction and maintenance.
Keywords: vocabularies; integration models; GEM; Gateway to Educational Materials; US National Library of Education; World Wide Web; information gateways; semantic mapping
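A minimal reading of the Element-Attribute-Value (EAV) model, under the assumption that each resource type is described by attribute-value rows that map onto a DTD-style schema, could be modeled like this (the field names and sample rows are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class EAVRow:
    element: str     # resource type, e.g. "LessonPlan"
    attribute: str   # property of that type
    value: str       # controlled-vocabulary term

catalog = [
    EAVRow("LessonPlan", "subject", "Mathematics"),
    EAVRow("LessonPlan", "gradeLevel", "K-5"),
    EAVRow("Assessment", "subject", "Mathematics"),
]

# Group values by (element, attribute), as a vocabulary maintainer might.
index: dict[tuple[str, str], list[str]] = {}
for row in catalog:
    index.setdefault((row.element, row.attribute), []).append(row.value)
print(index)
```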
6. Prompting Large Language Models with Knowledge-Injection for Knowledge-Based Visual Question Answering
Authors: Zhongjian Hu, Peng Yang, Fengyuan Liu, Yuan Meng, Xingyu Liu. Big Data Mining and Analytics (EI, CSCD), 2024, Issue 3, pp. 843-857 (15 pages)
Previous works employ Large Language Models (LLM) like GPT-3 for knowledge-based Visual Question Answering (VQA). We argue that the inferential capacity of the LLM can be enhanced through knowledge injection. Although methods that utilize knowledge graphs to enhance LLMs have been explored in various tasks, they may have some limitations, such as the possibility of failing to retrieve the required knowledge. In this paper, we introduce a novel framework for knowledge-based VQA titled "Prompting Large Language Models with Knowledge-Injection" (PLLMKI). We use a vanilla VQA model to inspire the LLM and further enhance the LLM with knowledge injection. Unlike earlier approaches, we adopt the LLM for knowledge enhancement instead of relying on knowledge graphs. Furthermore, we leverage open LLMs, incurring no additional costs. In comparison to existing baselines, our approach exhibits accuracy improvements of over 1.3 and 1.7 points on two knowledge-based VQA datasets, namely OK-VQA and A-OKVQA, respectively.
Keywords: visual question answering; knowledge-based visual question answering; large language model; knowledge injection
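The core of the method is prompt construction: candidates from a vanilla VQA model plus injected knowledge are placed in the LLM's context. A hedged sketch of such a prompt builder, with wording that is an assumption rather than the actual PLLMKI template:

```python
def build_prompt(question: str, caption: str,
                 vqa_candidates: list[str], knowledge: str) -> str:
    """Assemble a knowledge-injection prompt (illustrative wording only)."""
    candidates = ", ".join(vqa_candidates)
    return (
        f"Image caption: {caption}\n"
        f"Relevant knowledge: {knowledge}\n"
        f"Candidate answers from a VQA model: {candidates}\n"
        f"Question: {question}\n"
        f"Answer with a single word or short phrase:"
    )

prompt = build_prompt(
    question="What sport is being played?",
    caption="A player swings a bat on a grass field.",
    vqa_candidates=["baseball", "cricket", "softball"],
    knowledge="Cricket is played with a flat bat on an oval grass field.",
)
print(prompt)  # send to any open LLM
```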
原文传递
7. Large Knowledge Model: Perspectives and Challenges
Author: Huajun Chen. Data Intelligence (EI), 2024, Issue 3, pp. 587-620 (34 pages)
Humankind's understanding of the world is fundamentally linked to our perception and cognition, with human languages serving as one of the major carriers of world knowledge. In this vein, Large Language Models (LLMs) like ChatGPT epitomize the pre-training of extensive, sequence-based world knowledge into neural networks, facilitating the processing and manipulation of this knowledge in a parametric space. This article explores large models through the lens of "knowledge". We initially investigate the role of symbolic knowledge such as Knowledge Graphs (KGs) in enhancing LLMs, covering aspects like knowledge-augmented language models, structure-inducing pre-training, knowledgeable prompts, structured CoT, knowledge editing, semantic tools for LLMs, and knowledgeable AI agents. Subsequently, we examine how LLMs can boost traditional symbolic knowledge bases, encompassing aspects like using the LLM as a KG builder and controller, structured knowledge pre-training, and LLM-enhanced symbolic reasoning. Considering the intricate nature of human knowledge, we advocate for the creation of Large Knowledge Models (LKM), specifically engineered to manage a diversified spectrum of knowledge structures. This promising undertaking entails several key challenges, such as disentangling the knowledge base from the language model, cognitive alignment with human knowledge, the integration of perception and cognition, and building large commonsense models for interacting with the physical world, among others. We finally propose a five-"A" principle to distinguish the concept of the LKM.
Keywords: large language model; knowledge graph; large knowledge model; knowledge representation; knowledge augmentation
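One of the directions surveyed, using the LLM as a KG builder, can be sketched as prompting for triples and parsing the output into a symbolic store; the prompt format and parser below are assumptions, not the article's concrete method:

```python
PROMPT = """Extract (subject, relation, object) triples from the text.
Output one triple per line as: subject | relation | object
Text: {text}"""

def parse_triples(llm_output: str) -> list[tuple[str, str, str]]:
    """Parse pipe-delimited lines emitted by the LLM into triples."""
    triples = []
    for line in llm_output.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append((parts[0], parts[1], parts[2]))
    return triples

# Stand-in for a real LLM call:
fake_output = "ChatGPT | developed_by | OpenAI\nChatGPT | is_a | LLM"
print(parse_triples(fake_output))
```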
8. Integrating local knowledge with ChatGPT-like large-scale language models for enhanced societal comprehension of carbon neutrality
Authors: Te Han, Rong-Gang Cong, Biying Yu, Baojun Tang, Yi-Ming Wei. Energy and AI, 2024, Issue 4, pp. 327-340 (14 pages)
Addressing carbon neutrality presents a multifaceted challenge, necessitating collaboration across various disciplines, fields, and societal stakeholders. With the increasing urgency to mitigate climate change, there is a crucial need for innovative approaches in communication and education to enhance societal understanding and engagement. Large-scale language models like ChatGPT emerge as transformative tools in the AI era, offering the potential to revolutionize how we approach the economic, technological, social, and environmental issues of achieving carbon neutrality. However, the full potential of these models in carbon neutrality is yet to be realized, hindered by limitations in providing detailed, localized, and expert-level insights across an expansive spectrum of subjects. To bridge these gaps, this paper introduces an innovative framework that integrates local knowledge with LLMs, aiming to markedly enhance the depth, accuracy, and regional relevance of the information provided. The effectiveness of this framework is examined from government, corporate, and community perspectives. The integration of local knowledge with LLMs not only enriches the AI's comprehension of local specificities but also guarantees up-to-date information that is crucial for addressing the specific concerns and questions about carbon neutrality raised by a broad array of stakeholders. Overall, the proposed framework showcases significant potential for enhancing societal comprehension of and participation in carbon neutrality.
Keywords: carbon neutrality; large-scale language models; local knowledge; ChatGPT; AIGC
9. A Dynamic Knowledge Base Updating Mechanism-Based Retrieval-Augmented Generation Framework for Intelligent Question-and-Answer Systems
Author: Yu Li. Journal of Computer and Communications, 2025, Issue 1, pp. 41-58 (18 pages)
In the context of power generation companies, vast amounts of specialized data and expert knowledge have been accumulated. However, challenges such as data silos and fragmented knowledge hinder the effective utilization of this information. This study proposes a novel framework for intelligent Question-and-Answer (Q&A) systems based on Retrieval-Augmented Generation (RAG) to address these issues. The system efficiently acquires domain-specific knowledge by leveraging external databases, including Relational Databases (RDBs) and graph databases, without additional fine-tuning of Large Language Models (LLMs). Crucially, the framework integrates a Dynamic Knowledge Base Updating Mechanism (DKBUM) and a Weighted Context-Aware Similarity (WCAS) method to enhance retrieval accuracy and mitigate inherent limitations of LLMs, such as hallucinations and lack of specialization. The DKBUM dynamically adjusts knowledge weights within the database, ensuring that the most recent and relevant information is utilized, while WCAS refines the alignment between queries and knowledge items through enhanced context understanding. Experimental validation demonstrates that the system can generate timely, accurate, and context-sensitive responses, making it a robust solution for managing complex business logic in specialized industries.
Keywords: retrieval-augmented generation; question-and-answer; large language models; dynamic knowledge base updating mechanism; weighted context-aware similarity
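The abstract gives no formula for WCAS, so the following is only one plausible reading: cosine similarity between query and knowledge-item embeddings, scaled by per-item weights that a DKBUM-style mechanism keeps updated (a recency decay plus a usage boost). All constants and the weighting form are assumptions:

```python
import numpy as np

def wcas_scores(query_vec, item_vecs, ages_days, hit_counts, half_life=90.0):
    """Weighted similarity: cosine * recency decay * usage boost (assumed form)."""
    q = query_vec / np.linalg.norm(query_vec)
    items = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    cosine = items @ q
    freshness = 0.5 ** (np.asarray(ages_days) / half_life)  # recency decay
    usage = np.log1p(hit_counts)                            # usage boost
    return cosine * freshness * (1.0 + 0.1 * usage)

rng = np.random.default_rng(0)
scores = wcas_scores(rng.normal(size=64), rng.normal(size=(5, 64)),
                     ages_days=[1, 30, 300, 10, 90],
                     hit_counts=[5, 0, 40, 2, 9])
print(scores.argsort()[::-1])  # retrieval order, best first
```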
10. A reliable knowledge processing framework for combustion science using foundation models
Authors: Vansh Sharma, Venkat Raman. Energy and AI (EI), 2024, Issue 2, pp. 396-416 (21 pages)
This research explores the integration of large language models (LLMs) into scientific data assimilation, focusing on combustion science as a case study. Leveraging foundation models integrated with a Retrieval-Augmented Generation (RAG) framework, the study introduces an approach to process diverse combustion research data spanning experimental studies, simulations, and literature. The multifaceted nature of combustion research emphasizes the critical role of knowledge processing in navigating and extracting valuable information from a vast and diverse pool of sources. The developed approach minimizes computational and economic expense while optimizing data privacy and accuracy. It incorporates prompt engineering and offline open-source LLMs, giving users autonomy in selecting base models. The study provides a thorough examination of text segmentation strategies, conducts comparative studies between LLMs, and explores various optimized prompts to demonstrate the effectiveness of the framework. By incorporating an external vector database, the framework outperforms a conventional LLM in generating accurate responses and constructing robust arguments. The study also investigates optimized prompt templates for the efficient extraction of scientific literature, and presents a targeted scaling study to quantify the algorithmic performance of the framework as the number of prompt tokens increases. Concerns related to hallucinations and fabricated research articles are addressed with a custom workflow built around a detection algorithm that filters out inaccuracies. Despite identified areas for improvement, the framework consistently delivers accurate domain-specific responses with minimal human oversight. The prompt-agnostic approach introduced holds promise for future improvements. The study underscores the significance of integrating LLMs and knowledge processing techniques in scientific research, providing a foundation for advancements in data assimilation and utilization.
Keywords: large language models (LLM); foundation models; combustion; knowledge processing; retrieval-augmented generation (RAG)
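Among the design choices the paper examines, text segmentation is simple to illustrate. A minimal fixed-size chunker with overlap, so that facts straddling a boundary still co-occur in at least one chunk (the sizes are assumptions, not the paper's settings):

```python
def chunk_text(text: str, chunk_size: int = 400, overlap: int = 80):
    """Split text into fixed-size chunks that overlap by `overlap` characters."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Laminar flame speed depends on equivalence ratio... " * 50
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))  # number of chunks, size of the first
```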
11. Improving Extraction of Chinese Open Relations Using Pre-trained Language Model and Knowledge Enhancement
Authors: Chaojie Wen, Xudong Jia, Tao Chen. Data Intelligence (EI), 2023, Issue 4, pp. 962-989 (28 pages)
Open Relation Extraction (ORE) is the task of extracting semantic relations from a text document. Current ORE systems have significantly improved their efficiency in obtaining Chinese relations compared with conventional systems, which depend heavily on feature engineering or syntactic parsing. However, these ORE systems do not use robust neural networks such as pre-trained language models to take advantage of large-scale unstructured data effectively. In response to this issue, a new system entitled Chinese Open Relation Extraction with Knowledge Enhancement (CORE-KE) is presented in this paper. The CORE-KE system employs a pre-trained language model (with the support of a Bidirectional Long Short-Term Memory (BiLSTM) layer and a Masked Conditional Random Field (Masked CRF) layer) on unstructured data in order to improve Chinese open relation extraction. Entity descriptions in Wikidata and additional knowledge (in terms of triple facts) extracted from Chinese ORE datasets are used to fine-tune the pre-trained language model. In addition, syntactic features are adopted in the training stage of the CORE-KE system for knowledge enhancement. Experimental results of the CORE-KE system on two large-scale datasets of open Chinese entities and relations demonstrate that the CORE-KE system is superior to other ORE systems. The F1-scores of the CORE-KE system on the two datasets show relative improvements of 20.1% and 1.3%, respectively, compared with benchmark ORE systems. The source code is available at https://github.com/cjwen15/CORE-KE.
Keywords: Chinese open relation extraction; pre-trained language model; knowledge enhancement
12. Research on Question-Answering Applications for Oral History Resources Based on Knowledge Graphs and Large Language Models
Authors: 孙翌, 刘音. 图书馆杂志 (Library Journal) (Peking University Core), 2025, Issue 1, pp. 98-107, 119 (11 pages)
Humanities institutions such as archives and libraries have gradually accumulated rich and well-organized collections of oral history archives. Introducing a question-answering system makes it possible to demonstrate, through interaction, knowledge reasoning over the content of archival units. This study combines knowledge graphs and large language models, exploiting the accuracy and content transparency of knowledge graphs while mitigating problems brought by large language models, such as answer hallucination and high construction cost, to build a question-answering system for oral history archive resources. The paper describes the system design and construction process in detail, along with the key technical points of its core components, and carries out a question-answering application practice on the CUSPEA-themed oral history collection held by the Tsung-Dao Lee Library (李政道图书馆). The practice verifies the feasibility of the question-answering system: it achieves knowledge fusion and knowledge mining over oral history archive resources and effectively helps humanities scholars and history enthusiasts understand and gain insight into the essence of oral history.
Keywords: oral history resources; question-answering system; knowledge graph; large language model
13. Key Technologies and Applications for Intelligent Interpretation of Building Engineering Standards and Codes
Authors: 林佳瑞, 陈柯吟, 郑哲, 周育丞, 陆新征. 工程力学 (Engineering Mechanics) (Peking University Core), 2025, Issue 2, pp. 1-14
Building engineering standard and code texts feature diverse concepts, implicit engineering common sense, and complex rule combinations, which pose great challenges to the automatic decomposition of and reasoning over codes. The authors' team therefore established a technical framework for intelligent code interpretation that integrates a domain large language model with a commonsense knowledge graph. By constructing a domain code corpus and a pre-trained large model, knowledge such as the style and grammar of code texts is learned and represented, and the construction of a large-scale domain commonsense graph lays the foundation for the intelligent interpretation of code clauses. On top of the domain large model and the large-scale commonsense graph, core algorithms and techniques were developed for decomposing the chapter structure of standards, identifying interpretable clauses, semantically annotating clauses, syntactic parsing, and processing complex rules, achieving end-to-end automatic generation from raw text to computer-executable code. The team also discusses the application potential of the proposed framework in typical scenarios such as clause-related retrieval, standards knowledge question answering, intelligent BIM model checking, and BIM optimization suggestions. Validation shows that the proposed method effectively breaks through the bottleneck of interpreting complex clauses: clause interpretation accuracy exceeds 95%, interpretation is 5 times more efficient than manual work, and BIM model review efficiency improves by roughly 40 times, providing a reusable and generalizable technical path for the digitalization and intelligent application of standards and codes in the building engineering field.
Keywords: intelligent standards; standards digitalization; rule interpretation; large language model; knowledge graph; intelligent drawing review
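The end point of the pipeline, computer-executable code generated from a clause, can be illustrated with a hypothetical rule; the clause, field names, and threshold below are invented for illustration and do not come from the paper:

```python
# Hypothetical generated rule for a clause like
# "exterior wall thickness shall be no less than 200 mm".
def check_wall_thickness(element: dict, min_mm: float = 200.0) -> dict:
    ok = (element.get("type") != "ExteriorWall"
          or element["thickness_mm"] >= min_mm)
    return {"element_id": element["id"],
            "rule": "wall_thickness_ge_200mm",
            "passed": ok}

bim_elements = [
    {"id": "W-001", "type": "ExteriorWall", "thickness_mm": 240},
    {"id": "W-002", "type": "ExteriorWall", "thickness_mm": 180},
]
for e in bim_elements:
    print(check_wall_thickness(e))  # W-002 fails the generated rule
```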
14. Design of a University Research Management Question-Answering System Integrating Knowledge Graphs and Large Models
Authors: 王永, 秦嘉俊, 黄有锐, 邓江洲. 计算机科学与探索 (Journal of Frontiers of Computer Science and Technology) (Peking University Core), 2025, Issue 1, pp. 107-117
Research management is an important part of university administration, but existing research management systems can hardly meet users' personalized needs. Driven by the demand for the intelligent transformation of university research management, this work combines knowledge graphs, traditional models, and large language models to build a new-generation question-answering system for university research management. Research knowledge is collected to construct a research knowledge graph. A multi-task model that performs intent classification and entity extraction simultaneously is used for semantic parsing. The parsing results are used to generate query statements, and information is retrieved from the knowledge graph to answer routine questions. A large language model is combined with the knowledge graph to help handle open-ended questions. Experiments on a dataset in which intents and entities are correlated show that the adopted multi-task model achieves F1 scores of 0.958 for intent classification and 0.937 for entity recognition, outperforming the compared models and single-task models. Cypher generation tests demonstrate the effectiveness of custom prompts in eliciting the emergent abilities of large language models: the accuracy of text-to-Cypher generation with the large language model reaches 85.8%, effectively handling open-ended questions over the knowledge graph. The question-answering system built with the knowledge graph, traditional models, and large language models achieves an accuracy of 0.935, well satisfying the needs of intelligent question answering.
Keywords: knowledge graph; multi-task model; intent classification; named entity recognition; large language model
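The text-to-Cypher step can be sketched as a custom prompt that pins down the graph schema before asking the LLM to translate the question. The schema and the example query below are hypothetical, not the paper's actual graph:

```python
SCHEMA = """Nodes: (:Teacher {name}), (:Project {title, year})
Relations: (:Teacher)-[:LEADS]->(:Project)"""

def cypher_prompt(question: str) -> str:
    """Build a schema-grounded prompt for text-to-Cypher translation."""
    return (
        f"Graph schema:\n{SCHEMA}\n"
        f"Translate the question into a single Cypher query. "
        f"Return only the query.\nQuestion: {question}\nCypher:"
    )

print(cypher_prompt("Which projects did Prof. Wang lead in 2024?"))
# An LLM would ideally return something like:
# MATCH (t:Teacher {name: 'Wang'})-[:LEADS]->(p:Project {year: 2024})
# RETURN p.title
```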
15. MDKG: A RAG Framework Based on Multimodal Knowledge Graphs
Authors: 张强, 刘丰. 计算机应用文摘, 2025, Issue 2, pp. 182-184, 188 (4 pages)
A knowledge graph is a network-structured knowledge base. Although it differs from vector-based knowledge bases in data storage structure, it can also serve as an external knowledge base for large models, enhancing their capabilities through Retrieval-Augmented Generation (RAG). This paper proposes a knowledge-graph-based RAG framework, MDKG-RAG. The framework uses multimodal algorithms to parse documents into a multimodal document knowledge graph and merges knowledge from business systems into it, achieving unified management of document knowledge and system knowledge within the knowledge graph. In addition, a multimodal retrieval-and-generation method is proposed. Finally, experiments are designed to verify the effectiveness of the framework. The results show that, compared with RAG based on vector stores and Elasticsearch (ES), MDKG-RAG is stronger in both vertical domains and the general domain, and offers better interpretability of its retrieval paths.
Keywords: retrieval-augmented generation; knowledge graph; large language model
16. A Survey of Knowledge Graph Research Based on Pre-trained Language Models
Authors: 曾泽凡, 胡星辰, 成清, 司悦航, 刘忠. 计算机科学 (Computer Science) (Peking University Core), 2025, Issue 1, pp. 1-33
In the era of large language models, knowledge graphs, as a structured form of knowledge representation, play an irreplaceable role in improving the reliability, safety, and interpretability of artificial intelligence, and have important research value and practical application prospects. In recent years, owing to their superior performance in semantic understanding and in-context learning, pre-trained language models have become the main instrument of knowledge graph research. This paper systematically reviews work on knowledge graph research based on pre-trained language models, covering knowledge graph construction, representation learning, reasoning, and question answering; introduces the core ideas of the relevant models and methods; establishes a taxonomy according to technical approach; and compares the strengths and weaknesses of different types of methods. In addition, it surveys the application of pre-trained language models to two new types of knowledge graphs: event knowledge graphs and multimodal knowledge graphs. Finally, it summarizes the current challenges facing knowledge graph research based on pre-trained language models and looks ahead to future research directions.
Keywords: knowledge graph; pre-trained language model; large language model; multimodal; event knowledge graph
17. A Large Language Model-Driven Method for N-ary Relational Knowledge Graph Completion
Authors: 刘畅成, 桑磊, 李炜, 张以文. 计算机科学 (Computer Science) (Peking University Core), 2025, Issue 1, pp. 94-101
Knowledge graphs greatly improve the accessibility of information by transforming complex internet information into an easily understandable structured form. Knowledge graph completion techniques further enhance the completeness of knowledge graphs, significantly improving the performance and user experience of general-domain applications such as intelligent question answering and recommender systems. However, most existing knowledge graph completion methods focus on triple instances with few relation types and simple semantics, failing to fully exploit the potential of knowledge graphs for handling n-ary relations and complex semantics. To address this problem, an n-ary relational knowledge graph completion method driven by a large language model (LLM) is proposed. It combines the deep language understanding capability of LLMs with the structural properties of knowledge graphs to effectively capture n-ary relations and understand complex semantic scenarios. In addition, a chain-of-thought-based prompt engineering strategy is introduced to improve the accuracy of the completion task. The method achieves significant improvements in experiments on two public knowledge graph datasets.
Keywords: knowledge graph; large language model; knowledge graph completion; n-ary relations; candidate set construction; chain-of-thought prompting
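A chain-of-thought prompt for n-ary completion might present the known role-value pairs, the missing role, and a candidate set, and ask the model to reason step by step. The fact, roles, and candidates below are invented for illustration and are not from the paper:

```python
def cot_completion_prompt(fact: dict, missing_role: str,
                          candidates: list[str]) -> str:
    """Build a chain-of-thought prompt for filling one role of an n-ary fact."""
    known = "; ".join(f"{r}: {v}" for r, v in fact.items())
    return (
        f"An n-ary fact is missing one role value.\n"
        f"Known roles -> {known}\n"
        f"Missing role -> {missing_role}\n"
        f"Candidates -> {', '.join(candidates)}\n"
        f"Think step by step about which candidate fits every known role, "
        f"then answer with exactly one candidate."
    )

fact = {"event": "award_ceremony", "prize": "Turing Award", "year": "2018"}
print(cot_completion_prompt(fact, "laureate",
                            ["Yoshua Bengio", "Alan Turing", "Marie Curie"]))
```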
18. A Large-Model-Enhanced Knowledge Graph Question-Answering Model for a Blockchain Vulnerability Knowledge Base
Authors: 解飞, 宋建华, 姜丽, 张龑, 何帅. 现代电子技术 (Modern Electronics Technique) (Peking University Core), 2025, Issue 2, pp. 137-142
Large language models (LLMs) show limitations when applied to specialized domains, particularly blockchain vulnerabilities, such as noise from technical terminology and insufficient understanding caused by an overload of fine-grained information. To address this, an enhanced knowledge graph question-answering model for a blockchain vulnerability knowledge base (LMBK_KG) is constructed. Knowledge representation and understanding are strengthened by integrating a large model with a knowledge graph, while multi-granularity semantic information is used to filter and precisely match specialized questions. The methodology includes using integrated multi-granularity semantic information and the knowledge graph to filter terminology noise, and structurally matching and verifying the answers generated by the large model against the specialized knowledge graph to improve robustness and safety. Experimental results show that the proposed model improves question-answering accuracy in the blockchain vulnerability domain by 26% compared with using a large model alone.
Keywords: large language model; knowledge graph; question-answering model; multi-granularity semantic information; blockchain; vulnerability information; text representation
19. CIL-LLM: A Class-Incremental Learning Framework Based on Large Language Models
Authors: 王晓宇, 李欣, 胡勉宁, 薛迪. 计算机科学与探索 (Journal of Frontiers of Computer Science and Technology) (Peking University Core), 2025, Issue 2, pp. 374-384
In text classification, to improve the classification accuracy of class-incremental learning models and avoid catastrophic forgetting, a class-incremental learning framework based on large language models (CIL-LLM) is proposed. The CIL-LLM framework selects representative samples through sampling and compression, and uses an LLM with strong language understanding to distill key skills via in-context learning; these skills serve as the basis for classification, reducing storage costs. A keyword-matching step selects the optimal skills, which are used to construct prompts that guide a weaker downstream LLM in classification, improving accuracy. A knowledge-distillation-based skill fusion step not only effectively expands and updates the skill library but also balances learning the characteristics of old and new classes. Comparative experiments on the THUCNews dataset show that, compared with the existing L-SCL method, the CIL-LLM framework improves average accuracy across all tasks by 6.3 percentage points and reduces the performance degradation rate by 3.1 percentage points. In ablation experiments, the SLEICL model enhanced by the CIL-LLM framework improves average accuracy across all tasks by 10.4 percentage points and reduces the performance degradation rate by 3.3 percentage points compared with the original model. The ablations further verify that the proposed sample compression, keyword matching, and skill fusion steps each contribute to improving accuracy and reducing performance degradation.
Keywords: class-incremental learning; large language models (LLM); topic classification; knowledge distillation
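The keyword-matching step can be sketched as picking the stored skill whose keyword set best overlaps the incoming text, then building the downstream prompt from it; the skill contents and the overlap score are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical skill library: keywords plus a prompt hint per topic.
SKILLS = {
    "sports":  {"keywords": {"match", "league", "coach"},
                "hint": "Focus on teams, scores, and competitions."},
    "finance": {"keywords": {"stock", "market", "bank"},
                "hint": "Focus on prices, institutions, and indicators."},
}

def best_skill(text: str) -> str:
    """Select the skill with the largest keyword overlap with the text."""
    tokens = set(text.lower().split())
    return max(SKILLS, key=lambda s: len(SKILLS[s]["keywords"] & tokens))

text = "The coach praised the league's youngest match winner."
skill = best_skill(text)
print(skill, "->", SKILLS[skill]["hint"])  # prompt hint for the weak LLM
```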
20. Research on Graph- and Model-Driven Intelligent Question-Answering Services for Online Medical and Health Care
Authors: 张君冬, 刘江峰, 邓景鹏, 刘艳华, 黄奇. 现代情报 (Journal of Modern Information) (Peking University Core), 2025, Issue 1, pp. 164-176
[Purpose/Significance] Researchers have focused on pursuing the state of the art in medical intelligent question-answering technologies themselves, with relatively little discussion of the underlying theory, and the two strands have not developed in an integrated way. [Method/Process] After clarifying the relevant concepts, this paper first describes the connotation and characteristics of intelligent question-answering services in the online medical and health domain, then analyzes the relationship between knowledge graphs and large language models and how the two can complement and integrate with each other, and finally proposes graph- and model-driven intelligent question-answering services for online medical and health care. [Results/Conclusion] The paper carries the theoretical characteristics of medical intelligent question-answering services through the entire service process and innovatively proposes that such services should comprise three parts: the construction of medical knowledge graphs driven by large language models, the training of medical large models enhanced by knowledge graphs, and a graph- and model-driven intelligent question-answering service workflow. The study organically combines theory and technology, and its results can support subsequent practical work on medical intelligent question answering.
Keywords: knowledge graph; large language model; online medical and health care; intelligent question-answering services