The growing prevalence of knowledge reasoning using knowledge graphs (KGs) has substantially improved the accuracy and efficiency of intelligent medical diagnosis. However, current models primarily integrate electronic medical records (EMRs) and KGs into the knowledge reasoning process, ignoring the differing significance of various types of knowledge in EMRs and the diverse data types present in the text. To better integrate EMR text information, we propose a novel intelligent diagnostic model named the Graph ATtention network incorporating Text representation in knowledge reasoning (GATiT), which comprises text representation, subgraph construction, knowledge reasoning, and diagnostic classification. In the text representation process, GATiT uses a pre-trained model to obtain text representations of the EMRs and additionally enhances the embeddings by including chief complaint information and numerical information in the input. In the subgraph construction process, GATiT constructs text subgraphs and disease subgraphs from the KG, utilizing the EMR text and the disease to be diagnosed. To differentiate the varying importance of nodes within the subgraphs, features such as node categories, relevance scores, and other relevant factors are introduced into the text subgraph. The message-passing strategy and attention weight calculation of the graph attention network are adjusted to learn these features in the knowledge reasoning process. Finally, in the diagnostic classification process, the interactive attention-based fusion method integrates the results of knowledge reasoning with text representations to produce the final diagnosis results. Experimental results on multi-label and single-label EMR datasets demonstrate the model's superiority over several state-of-the-art methods.
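As an illustrative aside, the minimal PyTorch sketch below shows one way an attention-weight calculation can be made to see auxiliary node features (categories, relevance scores) alongside node embeddings; the layer structure, scoring form, and tensor shapes are assumptions for illustration, not GATiT's published implementation.

```python
# Hedged sketch: a graph-attention layer whose attention logits also see per-node
# auxiliary features, loosely in the spirit of the adjusted attention calculation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAwareGATLayer(nn.Module):
    """One attention layer whose logits also see auxiliary node features."""
    def __init__(self, in_dim: int, aux_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)   # node-embedding projection
        self.attn = nn.Linear(2 * out_dim + 2 * aux_dim, 1)  # scores (src, dst) pairs

    def forward(self, h, aux, edge_index):
        # h: [N, in_dim] embeddings; aux: [N, aux_dim] categories / relevance scores
        # edge_index: [2, E] source and target node indices
        z = self.proj(h)
        src, dst = edge_index
        pair = torch.cat([z[src], z[dst], aux[src], aux[dst]], dim=-1)
        logits = F.leaky_relu(self.attn(pair)).squeeze(-1)
        # softmax over the incoming edges of each target node
        exp = torch.exp(logits - logits.max())
        denom = torch.zeros(h.size(0)).index_add_(0, dst, exp)
        alpha = exp / denom[dst]
        # attention-weighted message passing
        return torch.zeros_like(z).index_add_(0, dst, alpha.unsqueeze(-1) * z[src])

layer = FeatureAwareGATLayer(in_dim=16, aux_dim=4, out_dim=16)
h, aux = torch.randn(5, 16), torch.randn(5, 4)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
print(layer(h, aux, edge_index).shape)   # torch.Size([5, 16])
```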
As a core part of battlefield situational awareness, air target intention recognition plays an important role in modern air operations. Aiming at the problems of insufficient feature extraction and misclassification in intention recognition, this paper designs an air target intention recognition method (KGTLIR) based on Knowledge Graph and Deep Learning. Firstly, the intention recognition model based on Deep Learning is constructed to mine the temporal relationships of intention features using dilated causal convolution and their spatial relationships using a graph attention mechanism. Meanwhile, the accuracy, recall, and F1-score after each iteration are introduced to dynamically adjust the sample weights and reduce the probability of misclassification. After that, an intention recognition model based on the Knowledge Graph is constructed to predict the probability of occurrence of the target's different intentions. Finally, the results of the two models are fused by evidence theory to obtain the target's operational intention. Experiments show that the intention recognition accuracy of the KGTLIR model can reach 98.48%, which is not only better than most air target intention recognition methods but also demonstrates better interpretability and trustworthiness.
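For readers unfamiliar with the fusion step, the short sketch below combines two models' intention distributions with Dempster's rule of combination, one common reading of evidence-theory fusion; the intention classes and probabilities are invented.

```python
# Hedged sketch: fusing two models' outputs with Dempster's rule of combination.
# Treating the model outputs as basic probability assignments over singleton
# hypotheses is an assumption, not the paper's exact fusion scheme.
def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments over singleton hypotheses."""
    classes = set(m1) | set(m2)
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule undefined")
    return {c: m1.get(c, 0.0) * m2.get(c, 0.0) / (1.0 - conflict) for c in classes}

# Example: deep-learning model vs. knowledge-graph model over three intentions.
dl_model = {"attack": 0.70, "reconnaissance": 0.20, "retreat": 0.10}
kg_model = {"attack": 0.60, "reconnaissance": 0.30, "retreat": 0.10}
print(dempster_combine(dl_model, kg_model))   # fused distribution favors "attack"
```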
Knowledge graph (KG) serves as a specialized semantic network that encapsulates intricate relationships among real-world entities within a structured framework. This framework facilitates a transformation in information retrieval, transitioning it from mere string matching to far more sophisticated entity matching. In this transformative process, the advancement of artificial intelligence and intelligent information services is invigorated. Meanwhile, machine learning methods play an important role in the construction of KGs, and these techniques have already achieved initial success. This article embarks on a comprehensive journey through the latest strides in the field of KG construction via machine learning. With a profound amalgamation of cutting-edge research in machine learning, this article undertakes a systematic exploration of KG construction methods in three distinct phases: entity learning, ontology learning, and knowledge reasoning. In particular, a meticulous dissection of machine learning-driven algorithms is conducted, spotlighting their contributions to critical facets such as entity extraction, relation extraction, entity linking, and link prediction. Moreover, this article also provides an analysis of the unresolved challenges and emerging trajectories that beckon within the expansive application of machine learning-fueled, large-scale KG construction.
The acquisition of valuable design knowledge from massive fragmentary data is challenging for designers in conceptual product design. This study proposes a novel method for acquiring design knowledge by combining deep learning with a knowledge graph. Specifically, the design knowledge acquisition method utilises a knowledge extraction model to extract design-related entities and relations from fragmentary data, and further constructs the knowledge graph to support design knowledge acquisition for conceptual product design. Moreover, the knowledge extraction model introduces ALBERT to address memory limitations and communication overhead in the entity extraction module, and uses multi-granularity information to overcome segmentation errors and polysemy ambiguity in the relation extraction module. Experimental comparison verified the effectiveness and accuracy of the proposed knowledge extraction model. The case study demonstrated the feasibility of knowledge graph construction with real fragmentary porcelain data and showed the capability to provide designers with interconnected and visualised design knowledge.
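As a hedged illustration of the entity extraction module, the sketch below frames entity extraction as ALBERT-based token classification with the Hugging Face transformers library; the checkpoint, label set, and example sentence are assumptions rather than the paper's configuration.

```python
# Hedged sketch: ALBERT for entity extraction framed as BIO token classification.
# The classification head here is untrained, so predictions are effectively random
# until fine-tuned on labeled design-domain data.
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

labels = ["O", "B-ENTITY", "I-ENTITY"]               # assumed design-entity tag set
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForTokenClassification.from_pretrained(
    "albert-base-v2", num_labels=len(labels))

text = "The celadon glaze technique originated in the Song dynasty kilns."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                   # [1, seq_len, num_labels]
pred = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, [labels[i] for i in pred])))
```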
Enterprise risk management holds significant importance in fostering the sustainable growth of businesses and serves as a critical element for regulatory bodies to uphold market order. Amidst the challenges posed by intricate and unpredictable risk factors, knowledge graph technology is effectively driving risk management, leveraging its ability to associate and infer knowledge from diverse sources. This review aims to comprehensively summarize the construction techniques of enterprise risk knowledge graphs and their prominent applications across various business scenarios. Firstly, employing bibliometric methods, we uncover the developmental trends and current research hotspots within the domain of enterprise risk knowledge graphs. In the succeeding section, we systematically delineate the technical methods for knowledge extraction and fusion in the standardized construction process of enterprise risk knowledge graphs. Objectively comparing and summarizing the strengths and weaknesses of each method, we provide recommendations for addressing the existing challenges in the construction process. Subsequently, we categorize the applied research on enterprise risk knowledge graphs based on research hotspots and risk category standards, and furnish a detailed exposition on the applicability of technical routes and methods. Finally, the future research directions that still need to be explored in enterprise risk knowledge graphs are discussed, and relevant improvement suggestions are proposed. Practitioners and researchers can gain insights into the technical theories and practical guidance for constructing enterprise risk knowledge graphs on this foundation.
Quality management is a constant and significant concern in enterprises. Effective determination of correct solutions for comprehensive problems helps avoid increased backtesting costs. This study proposes an intelligent quality control method for manufacturing processes based on a human–cyber–physical (HCP) knowledge graph, which is a systematic method that encompasses the following elements: data management and classification based on HCP ternary data, HCP ontology construction, knowledge extraction for constructing an HCP knowledge graph, and comprehensive application of quality control based on HCP knowledge. The proposed method implements case retrieval, automatic analysis, and assisted decision making based on an HCP knowledge graph, enabling quality monitoring, inspection, diagnosis, and maintenance strategies for quality control. In practical applications, the proposed modular and hierarchical HCP ontology exhibits significant superiority in terms of shareability and reusability of the acquired knowledge. Moreover, the HCP knowledge graph deeply integrates the provided HCP data and effectively supports comprehensive decision making. The proposed method was implemented in cases involving an automotive production line and a gear manufacturing process, and the effectiveness of the method was verified by the deployed application system. Furthermore, the proposed method can be extended to other manufacturing process quality control tasks.
Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address the issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) has been proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features for knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
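The following minimal sketch illustrates multi-headed self-attention over per-modality entity features, in the spirit of the semantic synthesis step; the modality count, dimensions, and mean-pooling choice are illustrative assumptions rather than MCRJI's design.

```python
# Hedged sketch: fusing per-modality entity features (text, image, structure) with
# multi-head self-attention and pooling them into one converged entity embedding.
import torch
import torch.nn as nn

dim, n_modalities = 128, 3
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

# One entity with three modality feature vectors stacked as a short "sequence".
modality_feats = torch.randn(1, n_modalities, dim)     # [batch, modalities, dim]
fused_seq, weights = attn(modality_feats, modality_feats, modality_feats)
entity_embedding = fused_seq.mean(dim=1)                # [batch, dim] converged vector
print(entity_embedding.shape, weights.shape)
```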
In the context of big data, many large-scale knowledge graphs have emerged to effectively organize the explosive growth of web data on the Internet. To select suitable knowledge graphs from the many available, quality assessment is particularly important. As an important aspect of quality assessment, completeness assessment generally refers to the ratio of the current data volume to the total data volume. When evaluating the completeness of a knowledge graph, it is often necessary to refine the completeness dimension by setting different completeness metrics to produce more complete and understandable evaluation results for the knowledge graph. However, lack of awareness of requirements is the most problematic quality issue. In the actual evaluation process, existing completeness metrics need to take the actual application into account. Therefore, to accurately recommend suitable knowledge graphs to users, it is particularly important to develop relevant measurement metrics and formulate measurement schemes for completeness. In this paper, we first clarify the concept of completeness, establish each metric of completeness, and finally design a measurement proposal for the completeness of knowledge graphs.
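A small worked example of the completeness ratio described above (current data volume over expected data volume) is sketched below; the schema and triples are invented, and the property-level metric shown is only one plausible refinement, not necessarily one of the paper's proposed metrics.

```python
# Hedged sketch: a schema-level completeness ratio for a toy knowledge graph.
from collections import defaultdict

expected_properties = {"Person": {"name", "birthDate", "nationality"}}  # assumed schema
triples = [
    ("Q1", "name", "Ada Lovelace"),
    ("Q1", "birthDate", "1815-12-10"),
    ("Q2", "name", "Alan Turing"),
]
entity_type = {"Q1": "Person", "Q2": "Person"}

present = defaultdict(set)
for s, p, _ in triples:
    present[s].add(p)

filled = sum(len(present[e] & expected_properties[t]) for e, t in entity_type.items())
expected = sum(len(expected_properties[t]) for t in entity_type.values())
print(f"property completeness = {filled}/{expected} = {filled / expected:.2f}")  # 3/6 = 0.50
```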
Utilizing graph neural networks for knowledge embedding to accomplish the task of knowledge graph completion (KGC) has become an important research area. However, the number of nodes in a knowledge graph increases exponentially with the depth of the tree, whereas distances between nodes in Euclidean space grow only polynomially (at second order), so knowledge embedding with graph neural networks in Euclidean space cannot represent the distances between nodes well. This paper introduces a novel approach called the hyperbolic hierarchical graph attention network (H2GAT) to rectify this limitation. Firstly, the paper conducts knowledge representation in hyperbolic space, effectively mitigating the issue of exponential growth of nodes with tree depth and the consequent information loss. Secondly, it introduces a hierarchical graph attention mechanism specifically designed for hyperbolic space, allowing for enhanced capture of the network structure inherent in the knowledge graph. Finally, the efficacy of the proposed H2GAT model is evaluated on benchmark datasets, namely WN18RR and FB15K-237, thereby validating its effectiveness. The H2GAT model achieved 0.445, 0.515, and 0.586 in the Hits@1, Hits@3, and Hits@10 metrics respectively on the WN18RR dataset, and 0.243, 0.367, and 0.518 on the FB15K-237 dataset. By incorporating hyperbolic space embedding and hierarchical graph attention, the H2GAT model successfully addresses the limitations of existing hyperbolic knowledge embedding models, exhibiting its competence in knowledge graph completion tasks.
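For context, the sketch below computes the standard Poincaré-ball distance used in many hyperbolic embedding models, which illustrates why tree-like hierarchies fit hyperbolic space; it is background geometry, not H2GAT's specific formulation.

```python
# Hedged sketch: Poincare-ball distance; points near the boundary of the unit ball
# behave like deep tree nodes, so distances grow rapidly toward the boundary.
import torch

def poincare_distance(u: torch.Tensor, v: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    sq_diff = torch.sum((u - v) ** 2, dim=-1)
    denom = (1 - torch.sum(u ** 2, dim=-1)) * (1 - torch.sum(v ** 2, dim=-1))
    return torch.acosh(1 + 2 * sq_diff / denom.clamp_min(eps))

root = torch.tensor([0.0, 0.0])        # near the origin: "shallow" node
leaf = torch.tensor([0.0, 0.95])       # near the boundary: "deep" node
print(poincare_distance(root, leaf))   # much larger than the Euclidean distance 0.95
```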
Accurately recommending candidate news to users is a basic challenge of personalized news recommendation systems. Traditional methods usually struggle to learn and acquire the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are more friendly to active users with rich historical behaviors, but they cannot effectively solve the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLM) and Knowledge Graphs (KG) with traditional methods. To learn the contextual information of news text, we use LLMs' powerful text understanding ability to generate news representations with rich semantic information, and the generated news representations are then used to enhance the news encoding in traditional methods. In addition, multi-hop relationships of news entities are mined and the structural information of news is encoded using the KG, thus alleviating the challenge of the long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation indicators such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework has established a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
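As a reference for the evaluation indicators mentioned above, the following snippet computes nDCG@5 and nDCG@10 for one ranked recommendation list; the click labels are invented.

```python
# Hedged sketch: nDCG@k for a single ranked list of recommended news items.
import math

def dcg_at_k(rels, k):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0

clicked = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # 1 = the user clicked the recommended item
print(f"nDCG@5  = {ndcg_at_k(clicked, 5):.4f}")
print(f"nDCG@10 = {ndcg_at_k(clicked, 10):.4f}")
```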
Due to the structural dependencies among concurrent events in a knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. Firstly, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events in the next step. On multiple real-world datasets such as Freebase13 (FB13), Freebase15k (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and Nell-995, the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
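The auto-regressive modeling step can be pictured with the IndRNN-style cell sketched below, in which each hidden unit recurs only on itself through an element-wise weight; the dimensions and the unrolling over five periods are illustrative assumptions, not the framework's exact architecture.

```python
# Hedged sketch: an independently recurrent (IndRNN-style) cell rolled over a
# sequence of period-level representations of the knowledge graph.
import torch
import torch.nn as nn

class IndRNNCell(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.w_in = nn.Linear(input_dim, hidden_dim)
        self.u = nn.Parameter(torch.rand(hidden_dim))      # element-wise recurrent weight

    def forward(self, x_t, h_prev):
        return torch.relu(self.w_in(x_t) + self.u * h_prev)

cell = IndRNNCell(input_dim=64, hidden_dim=64)
h = torch.zeros(1, 64)
for t in range(5):                                         # five periods of the KG sequence
    x_t = torch.randn(1, 64)                               # period-level entity representation
    h = cell(x_t, h)
print(h.shape)                                             # torch.Size([1, 64])
```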
Textile production has received considerable attention owing to its significance in production value, the complexity of its manufacturing processes, and the extensive reach of its supply chains. However, the textile industry consumes substantial energy and materials and emits greenhouse gases that severely harm the environment. In addressing this challenge, the concept of sustainable production offers crucial guidance for the sustainable development of the textile industry. Low-carbon manufacturing technologies provide robust technical support for the textile industry to transition to a low-carbon model by optimizing production processes, enhancing energy efficiency, and minimizing material waste. Consequently, low-carbon manufacturing technologies have gradually been implemented in sustainable textile production scenarios. However, while research on low-carbon manufacturing technologies for textile production has advanced, these studies predominantly concentrate on theoretical methods, with relatively limited exploration of practical applications. To address this gap, a thorough overview of carbon emission management methods and tools in textile production, as well as the characteristics and influencing factors of carbon emissions in key textile manufacturing processes, is presented to identify common issues. Additionally, two new concepts, the carbon knowledge graph and carbon traceability, are introduced, offering strategic recommendations and application directions for the low-carbon development of sustainable textile production. Beginning with seven key aspects of sustainable textile production, the characteristics of carbon emissions and their influencing factors in key textile manufacturing processes are systematically summarized. The aim is to provide guidance and optimization strategies for future emission reduction efforts by exploring the carbon emission situations and influencing factors at each stage. Furthermore, the potential and challenges of carbon knowledge graph technology in achieving carbon traceability are summarized, and several research ideas and suggestions are proposed.
Objective: To grasp the changing trend of research hotspots of traditional Chinese medicine (TCM) in the prevention and treatment of COVID-19, and to better leverage the role of TCM in the prevention and treatment of COVID-19 and other diseases. Methods: The research literature from 2020 to 2022 was retrieved from the CNKI database, and CiteSpace software was used for visual analysis. Results: Papers on the prevention and treatment of COVID-19 by TCM shifted from cases, overviews, reports, and efficacy studies to more in-depth mechanism research, theoretical exploration, and social impact analysis, finally forming a multi-dimensional system of achievements spanning theory, clinical practice, social influence, and institutional change. Conclusion: Analyzing the changing trends of TCM hotspots in the prevention and treatment of COVID-19 helps to fully understand the important value of TCM, take the coordination of TCM and Western medicine as an important means of responding to public health security incidents, and promote the exploration of the potential efficacy of TCM, so as to enhance the role of TCM in applications such as social stability, emergency security, and clinical practice.
With the reform of experimental teaching in colleges and universities, the teaching mode of "students as the main body, teachers as the guide" in experiments requires the constant exploration of new experimental teaching methods. In this paper, the knowledge graph is integrated into the mechanical principle experiment to guide undergraduates to use the knowledge graph to analyze and summarize independently in experimental teaching activities, aiming at cultivating undergraduates' interest in learning and innovative thinking, so as to improve the quality of experimental teaching. This study has a certain reference significance for experimental teaching in colleges and universities.
Drawing upon relevant papers from Chinese core journals and CSSCI source journals in the CNKI China Academic Journals Full-Text Database spanning from 1992 to 2023, this study utilizes CiteSpace as a research tool to visually analyze the knowledge graph structure of research on international Chinese language textbooks in China. The study maps out the publication timeline, authors, institutions, collaborative networks, and keywords pertaining to research on international Chinese language textbooks. The findings indicate that research on international Chinese language textbooks commenced early and continues to maintain a certain level of research interest, yet lacks sufficient research output. Research institutions predominantly reside in universities and publishing groups specializing in language or education, with collaboration between institutions being relatively scarce. High-frequency keywords in recent research on international Chinese language textbooks include "Chinese language textbooks for the Foreigners," "Chinese language textbooks," "Teaching Chinese Language for the Foreigners," "Textbook compilation," and "International Chinese Language Education and Localization," which reflect a diversified research perspective with interdisciplinary trends. Future research priorities encompass localization, customization of textbooks, and evaluation of textbooks, which represent the forefront directions of research.
Performance Management is a core course of the human resource management major, but its knowledge points lack multi-dimensional correlations. There are problems such as scattered content and an unclear system, so it is urgent to reconstruct the content system of the course. Knowledge graph technology can integrate massive and scattered information into an organic structure through semantic correlation and reasoning. Applying knowledge graphs to education and teaching can promote scientific and personalized teaching evaluation and better realize individualized teaching. This paper systematically combs through the knowledge points of the Performance Management course and forms a comprehensive knowledge graph. The knowledge points are associated with specific questions to form the problem map of the course, and then further associated with ability targets to form the ability map of the course. Then, the knowledge points are associated with teaching materials, the question bank, and expansion resources to form a systematic teaching database, thereby providing a method for building the content system of the Performance Management course based on the knowledge graph. This research can be further extended to other core management courses to realize the deep integration of knowledge graphs and teaching.
Knowledge graphs (KGs) have been widely accepted as powerful tools for modeling the complex relationships between concepts and developing knowledge-based services. In recent years, researchers in the field of power systems have explored KGs to develop intelligent dispatching systems for increasingly large power grids. With multiple power grid dispatching knowledge graphs (PDKGs) constructed by different agencies, the knowledge fusion of different PDKGs is useful for providing more accurate decision support. To achieve this, entity alignment, which aims at connecting different KGs by identifying equivalent entities, is a critical step. Existing entity alignment methods cannot integrate useful structural, attribute, and relational information while calculating entities' similarities and are prone to making many-to-one alignments, and thus can hardly achieve the best performance. To address these issues, this paper proposes a collective entity alignment model that integrates three kinds of available information and makes collective counterpart assignments. The model proposes a novel knowledge graph attention network (KGAT) to learn the embeddings of entities and relations explicitly and calculates entities' similarities by adaptively incorporating the structural, attribute, and relational similarities. Then, we formulate the counterpart assignment task as an integer programming (IP) problem to obtain one-to-one alignments. We not only conduct experiments on a pair of PDKGs but also evaluate our model on three commonly used cross-lingual KGs. Experimental comparisons indicate that our model outperforms other methods and provides an effective tool for the knowledge fusion of PDKGs.
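To make the counterpart-assignment idea concrete, the sketch below derives one-to-one alignments from a similarity matrix with the Hungarian algorithm; the paper formulates this as an integer program, so this is only an illustrative stand-in with an invented similarity matrix.

```python
# Hedged sketch: turning pairwise entity similarities into one-to-one alignments.
import numpy as np
from scipy.optimize import linear_sum_assignment

# similarity[i, j]: similarity between entity i of KG1 and entity j of KG2 (made up)
similarity = np.array([
    [0.9, 0.2, 0.1],
    [0.3, 0.8, 0.4],
    [0.2, 0.5, 0.7],
])
rows, cols = linear_sum_assignment(-similarity)     # negate to maximize total similarity
for i, j in zip(rows, cols):
    print(f"KG1 entity {i} <-> KG2 entity {j} (sim={similarity[i, j]:.2f})")
```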
Using the advantages of web crawlers in data collection and distributed storage technologies, we accessed a wealth of forestry-related data. Combined with mature big data technology at its present stage, Hadoop's distributed system was selected to solve the storage problem of massive forestry big data, and the memory-based Spark computing framework was adopted to realize real-time, fast processing of data. The forestry data contain a wealth of information, and mining this information is of great significance for guiding the development of forestry. We conduct co-word and cluster analyses on the keywords of the forestry data, extract the rules hidden in the data, analyze the research hotspots more accurately, and grasp the evolution trend of subject topics, which plays an important role in promoting the research and development of subject areas. The co-word analysis and clustering algorithm have important practical significance for revealing the topic structure, research hotspots, and development trends in the field of forestry research. The distributed storage framework and parallel computing have greatly improved the performance of the data mining algorithms. Therefore, a forestry big data mining system built on big data technology has important practical significance for promoting the development of intelligent forestry.
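The co-word analysis starts from a keyword co-occurrence count, which the short sketch below illustrates in plain Python (rather than Spark) on invented keyword lists.

```python
# Hedged sketch: building a keyword co-occurrence (co-word) matrix from per-paper
# keyword lists, the counting step that co-word analysis and clustering start from.
from collections import Counter
from itertools import combinations

papers = [
    ["forest carbon sink", "remote sensing", "deep learning"],
    ["remote sensing", "forest fire", "deep learning"],
    ["forest carbon sink", "remote sensing"],
]

cooccurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

for (a, b), count in cooccurrence.most_common(3):
    print(f"{a} / {b}: {count}")
```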
Syndrome differentiation is the core diagnostic method of Traditional Chinese Medicine (TCM). We propose a method that simulates syndrome differentiation through deductive reasoning on a knowledge graph to achieve automated diagnosis in TCM. We analyze the reasoning path patterns from symptoms to syndromes on the knowledge graph. There are two kinds of path patterns in the knowledge graph: one-hop and two-hop. The one-hop path pattern maps a symptom to syndromes immediately. The two-hop path pattern maps a symptom to syndromes through the nature of disease, etiology, and pathomechanism to support the diagnostic reasoning. Considering the different support strengths of the knowledge paths in reasoning, we design a dynamic weight mechanism. We utilize Naïve Bayes and TF-IDF to implement the reasoning method and the weighted score calculation. The proposed method infers the syndrome results by calculating the possibility according to the weighted scores of the paths in the knowledge graph based on the reasoning path patterns. We evaluate the method with clinical records and clinical practice in hospitals. The preliminary results suggest that the method achieves high performance and can help TCM doctors make better diagnostic decisions in practice. Meanwhile, the method is robust and explainable under the guidance of the knowledge graph. It could help TCM physicians, especially primary physicians in rural areas, by providing clinical decision support in clinical practice.
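The weighted path scoring can be pictured with the toy sketch below, which sums weighted one-hop and two-hop paths from observed symptoms to candidate syndromes; the path weights, symptoms, and syndromes are invented, whereas the paper derives its weights from Naïve Bayes and TF-IDF statistics.

```python
# Hedged sketch: scoring candidate syndromes from one-hop and two-hop knowledge paths.
from collections import defaultdict

# (symptom, syndrome, weight): one-hop paths
one_hop = [("aversion to cold", "wind-cold syndrome", 0.8)]
# (symptom, intermediate concept, syndrome, weight): two-hop paths via etiology etc.
two_hop = [("aversion to cold", "cold pathogen", "wind-cold syndrome", 0.5),
           ("fever", "heat pathogen", "wind-heat syndrome", 0.7)]

observed = {"aversion to cold", "fever"}
scores = defaultdict(float)
for symptom, syndrome, w in one_hop:
    if symptom in observed:
        scores[syndrome] += w
for symptom, _, syndrome, w in two_hop:
    if symptom in observed:
        scores[syndrome] += w

print(max(scores.items(), key=lambda kv: kv[1]))   # highest-scoring syndrome
```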
Purpose: Due to the incomplete nature of knowledge graphs (KGs), the task of predicting missing links between entities becomes important. Many previous approaches are static, which poses a notable problem: all meanings of a polysemous entity share one embedding vector. This study aims to propose a polysemous embedding approach, named KG embedding under relational contexts (ContE for short), for missing link prediction. Design/methodology/approach: ContE models and infers different relationship patterns by considering the context of the relationship, which is implicit in the local neighborhood of the relationship. The forward and backward impacts of the relationship in ContE are mapped to two different embedding vectors, which represent the contextual information of the relationship. Then, according to the position of the entity, the entity's polysemous representation is obtained by adding its static embedding vector to the corresponding context vector of the relationship. Findings: ContE is fully expressive, that is, given any ground truth over the triples, there are embedding assignments to entities and relations that can precisely separate the true triples from false ones. ContE is capable of modeling four connectivity patterns: symmetry, antisymmetry, inversion, and composition. Research limitations: ContE needs a grid search to find the best parameters for the best performance in practice, which is a time-consuming task. Sometimes, it requires longer entity vectors to achieve better performance than some other models. Practical implications: ContE is a bilinear model, which is quite simple and could be applied to large-scale KGs. By considering the contexts of relations, ContE can distinguish the exact meaning of an entity in different triples, so that when performing compositional reasoning, it is capable of inferring the connectivity patterns of relations and achieves good performance on link prediction tasks. Originality/value: ContE considers the contexts of entities in terms of their positions in triples and the relationships they link to. It decomposes a relation vector into two vectors, namely a forward impact vector and a backward impact vector, in order to capture the relational contexts. ContE has the same low computational complexity as TransE. Therefore, it provides a new approach for contextualized knowledge graph embedding.
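The position-dependent entity representation described under Design/methodology/approach can be sketched as follows; the final bilinear-style score is an illustrative placeholder rather than ContE's published scoring function, and the entities, relation, and dimensions are invented.

```python
# Hedged sketch: a static entity vector plus a relation's forward/backward context
# vector, depending on whether the entity sits in the head or tail position.
import numpy as np

dim = 4
rng = np.random.default_rng(0)
entity = {"aspirin": rng.normal(size=dim),              # static entity embeddings
          "headache": rng.normal(size=dim)}
relation = {"treats": {"forward": rng.normal(size=dim),
                       "backward": rng.normal(size=dim)}}

def contextualized(h, r, t):
    head = entity[h] + relation[r]["forward"]            # head position -> forward context
    tail = entity[t] + relation[r]["backward"]           # tail position -> backward context
    return head, tail

head, tail = contextualized("aspirin", "treats", "headache")
score = float(np.dot(head, tail))                        # placeholder bilinear-style score
print(score)
```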
基金supported in part by the Science and Technology Innovation 2030-“New Generation of Artificial Intelligence”Major Project(No.2021ZD0111000)Henan Provincial Science and Technology Research Project(No.232102211039).
文摘The growing prevalence of knowledge reasoning using knowledge graphs(KGs)has substantially improved the accuracy and efficiency of intelligent medical diagnosis.However,current models primarily integrate electronic medical records(EMRs)and KGs into the knowledge reasoning process,ignoring the differing significance of various types of knowledge in EMRs and the diverse data types present in the text.To better integrate EMR text information,we propose a novel intelligent diagnostic model named the Graph ATtention network incorporating Text representation in knowledge reasoning(GATiT),which comprises text representation,subgraph construction,knowledge reasoning,and diagnostic classification.In the text representation process,GATiT uses a pre-trained model to obtain text representations of the EMRs and additionally enhances embeddings by including chief complaint information and numerical information in the input.In the subgraph construction process,GATiT constructs text subgraphs and disease subgraphs from the KG,utilizing EMR text and the disease to be diagnosed.To differentiate the varying importance of nodes within the subgraphs features such as node categories,relevance scores,and other relevant factors are introduced into the text subgraph.Themessage-passing strategy and attention weight calculation of the graph attention network are adjusted to learn these features in the knowledge reasoning process.Finally,in the diagnostic classification process,the interactive attention-based fusion method integrates the results of knowledge reasoning with text representations to produce the final diagnosis results.Experimental results on multi-label and single-label EMR datasets demonstrate the model’s superiority over several state-of-theart methods.
基金funded by the Project of the National Natural Science Foundation of China,Grant Number 72071209.
文摘As a core part of battlefield situational awareness,air target intention recognition plays an important role in modern air operations.Aiming at the problems of insufficient feature extraction and misclassification in intention recognition,this paper designs an air target intention recognition method(KGTLIR)based on Knowledge Graph and Deep Learning.Firstly,the intention recognition model based on Deep Learning is constructed to mine the temporal relationship of intention features using dilated causal convolution and the spatial relationship of intention features using a graph attention mechanism.Meanwhile,the accuracy,recall,and F1-score after iteration are introduced to dynamically adjust the sample weights to reduce the probability of misclassification.After that,an intention recognition model based on Knowledge Graph is constructed to predict the probability of the occurrence of different intentions of the target.Finally,the results of the two models are fused by evidence theory to obtain the target’s operational intention.Experiments show that the intention recognition accuracy of the KGTLIRmodel can reach 98.48%,which is not only better than most of the air target intention recognition methods,but also demonstrates better interpretability and trustworthiness.
基金supported in part by the Beijing Natural Science Foundation under Grants L211020 and M21032in part by the National Natural Science Foundation of China under Grants U1836106 and 62271045in part by the Scientific and Technological Innovation Foundation of Foshan under Grants BK21BF001 and BK20BF010。
文摘Knowledge graph(KG)serves as a specialized semantic network that encapsulates intricate relationships among real-world entities within a structured framework.This framework facilitates a transformation in information retrieval,transitioning it from mere string matching to far more sophisticated entity matching.In this transformative process,the advancement of artificial intelligence and intelligent information services is invigorated.Meanwhile,the role ofmachine learningmethod in the construction of KG is important,and these techniques have already achieved initial success.This article embarks on a comprehensive journey through the last strides in the field of KG via machine learning.With a profound amalgamation of cutting-edge research in machine learning,this article undertakes a systematical exploration of KG construction methods in three distinct phases:entity learning,ontology learning,and knowledge reasoning.Especially,a meticulous dissection of machine learningdriven algorithms is conducted,spotlighting their contributions to critical facets such as entity extraction,relation extraction,entity linking,and link prediction.Moreover,this article also provides an analysis of the unresolved challenges and emerging trajectories that beckon within the expansive application of machine learning-fueled,large-scale KG construction.
基金This research is supported by the Chinese Special Projects of the National Key Research and Development Plan(2019YFB1405702).
文摘The acquisition of valuable design knowledge from massive fragmentary data is challenging for designers in conceptual product design.This study proposes a novel method for acquiring design knowledge by combining deep learning with knowledge graph.Specifically,the design knowledge acquisition method utilises the knowledge extraction model to extract design-related entities and relations from fragmentary data,and further constructs the knowledge graph to support design knowledge acquisition for conceptual product design.Moreover,the knowledge extraction model introduces ALBERT to solve memory limitation and communication overhead in the entity extraction module,and uses multi-granularity information to overcome segmentation errors and polysemy ambiguity in the relation extraction module.Experimental comparison verified the effectiveness and accuracy of the proposed knowledge extraction model.The case study demonstrated the feasibility of the knowledge graph construction with real fragmentary porcelain data and showed the capability to provide designers with interconnected and visualised design knowledge.
基金supported by the Shandong Province Science and Technology Project(2023TSGC0509,2022TSGC2234)Qingdao Science and Technology Plan Project(23-1-5-yqpy-2-qy).
文摘Enterprise risk management holds significant importance in fostering sustainable growth of businesses and in serving as a critical element for regulatory bodies to uphold market order.Amidst the challenges posed by intricate and unpredictable risk factors,knowledge graph technology is effectively driving risk management,leveraging its ability to associate and infer knowledge from diverse sources.This review aims to comprehensively summarize the construction techniques of enterprise risk knowledge graphs and their prominent applications across various business scenarios.Firstly,employing bibliometric methods,the aim is to uncover the developmental trends and current research hotspots within the domain of enterprise risk knowledge graphs.In the succeeding section,systematically delineate the technical methods for knowledge extraction and fusion in the standardized construction process of enterprise risk knowledge graphs.Objectively comparing and summarizing the strengths and weaknesses of each method,we provide recommendations for addressing the existing challenges in the construction process.Subsequently,categorizing the applied research of enterprise risk knowledge graphs based on research hotspots and risk category standards,and furnishing a detailed exposition on the applicability of technical routes and methods.Finally,the future research directions that still need to be explored in enterprise risk knowledge graphs were discussed,and relevant improvement suggestions were proposed.Practitioners and researchers can gain insights into the construction of technical theories and practical guidance of enterprise risk knowledge graphs based on this foundation.
基金supported by the National Science and Technology Innovation 2030 of China Next-Generation Artificial Intelligence Major Project(2018AAA0101800)the National Natural Science Foundation of China(52375482)the Regional Innovation Cooperation Project of Sichuan Province(2023YFQ0019).
文摘Quality management is a constant and significant concern in enterprises.Effective determination of correct solutions for comprehensive problems helps avoid increased backtesting costs.This study proposes an intelligent quality control method for manufacturing processes based on a human–cyber–physical(HCP)knowledge graph,which is a systematic method that encompasses the following elements:data management and classification based on HCP ternary data,HCP ontology construction,knowledge extraction for constructing an HCP knowledge graph,and comprehensive application of quality control based on HCP knowledge.The proposed method implements case retrieval,automatic analysis,and assisted decision making based on an HCP knowledge graph,enabling quality monitoring,inspection,diagnosis,and maintenance strategies for quality control.In practical applications,the proposed modular and hierarchical HCP ontology exhibits significant superiority in terms of shareability and reusability of the acquired knowledge.Moreover,the HCP knowledge graph deeply integrates the provided HCP data and effectively supports comprehensive decision making.The proposed method was implemented in cases involving an automotive production line and a gear manufacturing process,and the effectiveness of the method was verified by the application system deployed.Furthermore,the proposed method can be extended to other manufacturing process quality control tasks.
基金National College Students’Training Programs of Innovation and Entrepreneurship,Grant/Award Number:S202210022060the CACMS Innovation Fund,Grant/Award Number:CI2021A00512the National Nature Science Foundation of China under Grant,Grant/Award Number:62206021。
文摘Media convergence works by processing information from different modalities and applying them to different domains.It is difficult for the conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective.To address the issue,an inference method based on Media Convergence and Rule-guided Joint Inference model(MCRJI)has been pro-posed.The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction.First,a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis.Second,logic rules of different lengths are mined from knowledge graph to learn new entity representations.Finally,knowledge graph inference is performed based on representing entities that converge multi-media features.Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference,demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
基金supported by the National Key Laboratory for Comp lex Systems Simulation Foundation (6142006190301)。
文摘In the context of big data, many large-scale knowledge graphs have emerged to effectively organize the explosive growth of web data on the Internet. To select suitable knowledge graphs for use from many knowledge graphs, quality assessment is particularly important. As an important thing of quality assessment, completeness assessment generally refers to the ratio of the current data volume to the total data volume.When evaluating the completeness of a knowledge graph, it is often necessary to refine the completeness dimension by setting different completeness metrics to produce more complete and understandable evaluation results for the knowledge graph.However, lack of awareness of requirements is the most problematic quality issue. In the actual evaluation process, the existing completeness metrics need to consider the actual application. Therefore, to accurately recommend suitable knowledge graphs to many users, it is particularly important to develop relevant measurement metrics and formulate measurement schemes for completeness. In this paper, we will first clarify the concept of completeness, establish each metric of completeness, and finally design a measurement proposal for the completeness of knowledge graphs.
基金the Beijing Municipal Science and Technology Program(No.Z231100001323004).
文摘Utilizing graph neural networks for knowledge embedding to accomplish the task of knowledge graph completion(KGC)has become an important research area in knowledge graph completion.However,the number of nodes in the knowledge graph increases exponentially with the depth of the tree,whereas the distances of nodes in Euclidean space are second-order polynomial distances,whereby knowledge embedding using graph neural networks in Euclidean space will not represent the distances between nodes well.This paper introduces a novel approach called hyperbolic hierarchical graph attention network(H2GAT)to rectify this limitation.Firstly,the paper conducts knowledge representation in the hyperbolic space,effectively mitigating the issue of exponential growth of nodes with tree depth and consequent information loss.Secondly,it introduces a hierarchical graph atten-tion mechanism specifically designed for the hyperbolic space,allowing for enhanced capture of the network structure inherent in the knowledge graph.Finally,the efficacy of the proposed H2GAT model is evaluated on benchmark datasets,namely WN18RR and FB15K-237,thereby validating its effectiveness.The H2GAT model achieved 0.445,0.515,and 0.586 in the Hits@1,Hits@3 and Hits@10 metrics respectively on the WN18RR dataset and 0.243,0.367 and 0.518 on the FB15K-237 dataset.By incorporating hyperbolic space embedding and hierarchical graph attention,the H2GAT model successfully addresses the limitations of existing hyperbolic knowledge embedding models,exhibiting its competence in knowledge graph completion tasks.
基金supported by National Key R&D Program of China(2022QY2000-02).
文摘Accurately recommending candidate news to users is a basic challenge of personalized news recommendation systems.Traditional methods are usually difficult to learn and acquire complex semantic information in news texts,resulting in unsatisfactory recommendation results.Besides,these traditional methods are more friendly to active users with rich historical behaviors.However,they can not effectively solve the long tail problem of inactive users.To address these issues,this research presents a novel general framework that combines Large Language Models(LLM)and Knowledge Graphs(KG)into traditional methods.To learn the contextual information of news text,we use LLMs’powerful text understanding ability to generate news representations with rich semantic information,and then,the generated news representations are used to enhance the news encoding in traditional methods.In addition,multi-hops relationship of news entities is mined and the structural information of news is encoded using KG,thus alleviating the challenge of long-tail distribution.Experimental results demonstrate that compared with various traditional models,on evaluation indicators such as AUC,MRR,nDCG@5 and nDCG@10,the framework significantly improves the recommendation performance.The successful integration of LLM and KG in our framework has established a feasible way for achieving more accurate personalized news recommendation.Our code is available at https://github.com/Xuan-ZW/LKPNR.
基金the National Natural Science Founda-tion of China(62062062)hosted by Gulila Altenbek.
文摘Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events,we propose an Independent Recurrent Temporal Graph Convolution Networks(IndRT-GCNets)framework to efficiently and accurately capture event attribute information.The framework models the knowledge graph sequences to learn the evolutionary represen-tations of entities and relations within each period.Firstly,by utilizing the temporal graph convolution module in the evolutionary representation unit,the framework captures the structural dependency relationships within the knowledge graph in each period.Meanwhile,to achieve better event representation and establish effective correlations,an independent recurrent neural network is employed to implement auto-regressive modeling.Furthermore,static attributes of entities in the entity-relation events are constrained andmerged using a static graph constraint to obtain optimal entity representations.Finally,the evolution of entity and relation representations is utilized to predict events in the next subsequent step.On multiple real-world datasets such as Freebase13(FB13),Freebase 15k(FB15K),WordNet11(WN11),WordNet18(WN18),FB15K-237,WN18RR,YAGO3-10,and Nell-995,the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks,which validates the effectiveness and robustness.
基金Natural Science Foundation of Shanghai,China (No. 21ZR1400800)。
文摘Textile production has received considerable attention owing to its significance in production value,the complexity of its manufacturing processes and the extensive reach of its supply chains.However,textile industry consumes substantial energy and materials and emits greenhouse gases that severely harm the environment.In addressing this challenge,the concept of sustainable production offers crucial guidance for the sustainable development of the textile industry.Low-carbon manufacturing technologies provide robust technical support for the textile industry to transition to a low-carbon model by optimizing production processes,enhancing energy efficiency and minimizing material waste.Consequently,low-carbon manufacturing technologies have gradually been implemented in sustainable textile production scenarios.However,while research on low-carbon manufacturing technologies for textile production has advanced,these studies predominantly concentrate on theoretical methods,with relatively limited exploration of practical applications.To address this gap,a thorough overview of carbon emission management methods and tools in textile production,as well as the characteristics and influencing factors of carbon emissions in key textile manufacturing processes is presented to identify common issues.Additionally,two new concepts,carbon knowledge graph and carbon traceability,are introduced,offering strategic recommendations and application directions for the low-carbon development of sustainable textile production.Beginning with seven key aspects of sustainable textile production,the characteristics of carbon emissions and their influencing factors in key textile manufacturing process are systematically summarized.The aim is to provide guidance and optimization strategies for future emission reduction efforts by exploring the carbon emission situations and influencing factors at each stage.Furthermore,the potential and challenges of carbon knowledge graph technology are summarized in achieving carbon traceability,and several research ideas and suggestions are proposed.
文摘Objective: To grasp the changing trend of research hotspots of traditional Chinese medicine in the prevention and treatment of COVID-19, and to better play the role of traditional Chinese medicine in the prevention and treatment of COVID-19 and other diseases. Methods: The research literature from 2020 to 2022 was searched in the CNKI database, and CiteSpace software was used for visual analysis. Results: The papers on the prevention and treatment of COVID-19 by traditional Chinese medicine changed from cases, overviews, reports, and efficacy studies to more in-depth mechanism research, theoretical exploration, and social impact analysis, and finally formed a theory-clinical-society Influence-institutional change and other multi-dimensional achievement systems. Conclusion: Analyzing the changing trends of TCM hotspots in the prevention and treatment of COVID-19 can fully understand the important value of TCM, take the coordination of TCM and Western medicine as an important means to deal with public health security incidents, and promote the exploration of the potential efficacy of TCM, so as to enhance the role of TCM in Applications in social stability, emergency security, clinical practice, etc.
基金Supported by Undergraduate Teaching Research and Reform Project of University of Shanghai for Science and Technology in 2024(JGXM24281&JGXM24263)First-class Undergraduate Course Construction Project of University of Shanghai for Science and Technology in 2024(YLKC202424394).
文摘With the reform of experimental teaching in colleges and universities,the teaching mode of"experimental students as the main body,experimental teachers as the guide"needs to constantly explore new experimental teaching methods.In this paper,knowledge graph is integrated into the experiment of mechanical principle to guide undergraduates to use knowledge graph to analyze and summarize independently in experimental teaching activities,aiming at cultivating undergraduates interest in learning and innovative thinking,so as to improve the quality of experimental teaching.This study has a certain reference significance for experimental teaching in colleges and universities.
基金2023 International Chinese Language Education Collaboration Mechanism Project,Center for Language Education and Cooperation,Theoretical and Practical Research on Guangxi’s International Chinese Language Education Collaboration Mechanism(23YHXZ1010)2021 Education Teaching Reform Projects and Research and Practice Projects on New Engineering Disciplines and New Liberal Arts,Guangxi Normal University,Research and Practice of Online Authentic Chinese Language Courses in the Post-Pandemic Era Under the Background of New Liberal Arts(2021JGZ15)2019 Scientific Research Engineering·Innovation and Entrepreneurship Special Project,Guangxi Research Center for the Development of Humanities and Social Sciences,Model Research on the Construction of Internationalization Development Platform for Innovation and Entrepreneurship Education in Higher Education Institutions:A Case Study of Confucius Institutes(CXCY2019014)。
文摘Drawing upon relevant papers from Chinese core journals and CSSCI source journals in the CNKI China Academic Journals Full-Text Database spanning from 1992 to 2023,this study utilizes CiteSpace as a research tool to visually analyze the knowledge graph structure of research on international Chinese language textbooks in China.The study maps out the publication timeline,authors,institutions,collaborative networks,and keywords pertaining to research on international Chinese language textbooks.The findings indicate that research on international Chinese language textbooks commenced early and continues to maintain a certain level of research interest,yet lacks sufficient research output.Research institutions predominantly reside in universities and publishing groups specializing in language or education,with collaboration between institutions being relatively scarce.High-frequency keywords in recent research on international Chinese language textbooks include“Chinese language textbooks for the Foreigners,”“Chinese language textbooks,”“Teaching Chinese Language for the Foreigners,”“Textbook compilation,”“International Chinese Language Education and Localization,”which reflect a diversified research perspective with interdisciplinary trends.Future research priorities encompass research on localization,customization of textbooks,and evaluation of textbooks which represent forefront directions of research.
Funding: Education and Teaching Reform Research Project of Chongqing Institute of Engineering (JY2023206).
Abstract: Performance Management is the core course of the human resource management major, but its knowledge points lack multi-dimensional correlations; the content is scattered and the system is unclear, so the course's content system urgently needs to be reconstructed. Knowledge graph technology can integrate massive, scattered information into an organic structure through semantic correlation and reasoning. Applying knowledge graphs to education and teaching can make teaching evaluation more scientific and personalized and better realize individualized teaching. This paper systematically combs the knowledge points of the Performance Management course to form a comprehensive knowledge graph. Each knowledge point is associated with specific questions to form the course's problem map, and further associated with ability targets to form the course's ability map. The knowledge points are then associated with teaching materials, question banks, and expansion resources to form a systematic teaching database, yielding a method for building the content system of the Performance Management course based on the knowledge graph. This research can be extended to other core management courses to realize the deep integration of knowledge graphs and teaching.
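The linkage described above, from knowledge points to questions, ability targets, and teaching resources, is essentially a labeled graph. A minimal sketch of such a structure is given below, assuming networkx as the graph library; all node names and relation labels are illustrative placeholders, not taken from the course described in the abstract.

```python
# Minimal sketch of a course knowledge graph linking knowledge points to questions,
# ability targets, and teaching resources; all node names and labels are illustrative.
import networkx as nx

g = nx.DiGraph()
g.add_edge("KPI design", "Q: design KPIs for a sales team", relation="assessed_by")
g.add_edge("KPI design", "Ability: indicator decomposition", relation="develops")
g.add_edge("KPI design", "Chapter 3 slides", relation="taught_with")
g.add_edge("Performance appraisal methods", "KPI design", relation="prerequisite_of")

# Traverse the graph to assemble everything attached to one knowledge point.
for _, target, data in g.out_edges("KPI design", data=True):
    print(data["relation"], "->", target)
```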
Funding: Supported by the National Key R&D Program of China (2018AAA0101502) and the Science and Technology Project of SGCC (State Grid Corporation of China): Fundamental Theory of Human-in-the-Loop Hybrid-Augmented Intelligence for Power Grid Dispatch and Control.
Abstract: Knowledge graphs (KGs) have been widely accepted as powerful tools for modeling the complex relationships between concepts and developing knowledge-based services. In recent years, researchers in the field of power systems have explored KGs to develop intelligent dispatching systems for increasingly large power grids. With multiple power grid dispatching knowledge graphs (PDKGs) constructed by different agencies, fusing knowledge across PDKGs can provide more accurate decision support. To achieve this, entity alignment, which connects different KGs by identifying equivalent entities, is a critical step. Existing entity alignment methods cannot integrate useful structural, attribute, and relational information while calculating entities' similarities and are prone to making many-to-one alignments, so they can hardly achieve the best performance. To address these issues, this paper proposes a collective entity alignment model that integrates the three kinds of available information and makes collective counterpart assignments. The model introduces a novel knowledge graph attention network (KGAT) to learn the embeddings of entities and relations explicitly and calculates entities' similarities by adaptively incorporating the structural, attribute, and relational similarities. The counterpart assignment task is then formulated as an integer programming (IP) problem to obtain one-to-one alignments. We conduct experiments not only on a pair of PDKGs but also on three commonly used cross-lingual KGs. Experimental comparisons indicate that our model outperforms other methods and provides an effective tool for the knowledge fusion of PDKGs.
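To make the one-to-one assignment step concrete, the sketch below fuses three similarity views with fixed weights and solves the assignment with the Hungarian algorithm as a simple stand-in for the paper's integer-programming formulation. The weighting scheme, threshold, and toy matrices are assumptions for illustration, not the actual KGAT outputs or IP solver described in the abstract.

```python
# Sketch of collective one-to-one counterpart assignment over a fused similarity matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_similarities(s_struct, s_attr, s_rel, w=(0.5, 0.3, 0.2)):
    """Weighted combination of structural, attribute, and relational similarities."""
    return w[0] * s_struct + w[1] * s_attr + w[2] * s_rel

def assign_counterparts(similarity, threshold=0.5):
    """One-to-one assignment that maximizes total similarity (Hungarian algorithm)."""
    rows, cols = linear_sum_assignment(-similarity)  # negate to maximize
    return [(i, j) for i, j in zip(rows, cols) if similarity[i, j] >= threshold]

# Toy example: 3 entities in each of two dispatching knowledge graphs.
rng = np.random.default_rng(0)
s = fuse_similarities(rng.random((3, 3)), rng.random((3, 3)), rng.random((3, 3)))
print(assign_counterparts(s))
```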
基金grants from the Fundamental Research Funds for the Central Universities(Grant No.2572018BH02)Special Funds for Scientific Research in the Forestry Public Welfare Industry(Grant Nos.201504307-03)。
Abstract: Using the advantages of web crawlers in data collection and distributed storage technologies, we obtained a wealth of forestry-related data. Combined with mature big data technology, Hadoop's distributed system was selected to solve the storage problem of massive forestry big data, and the memory-based Spark computing framework was used to realize real-time, fast processing of the data. Forestry data contains a wealth of information, and mining this information is of great significance for guiding the development of forestry. We conduct co-word and cluster analyses on the keywords of forestry data, extract the rules hidden in the data, analyze research hotspots more accurately, and grasp the evolution trend of subject topics, which plays an important role in promoting research and development in the field. Co-word analysis and clustering algorithms have important practical significance for identifying the topic structure, research hotspots, and development trends in forestry research. The distributed storage framework and parallel computing greatly improve the performance of the data mining algorithms. Therefore, a forestry big data mining system built on big data technology has important practical significance for promoting the development of intelligent forestry.
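As a small, in-memory illustration of the co-word analysis the abstract describes running on Hadoop and Spark, the sketch below counts keyword co-occurrences across papers; the keyword lists are invented placeholders, and any subsequent clustering would operate on the resulting co-occurrence matrix.

```python
# Toy co-word analysis: count how often keyword pairs appear in the same paper.
from itertools import combinations
from collections import Counter

papers = [
    ["forest fire", "remote sensing", "monitoring"],
    ["forest fire", "deep learning", "monitoring"],
    ["timber yield", "growth model", "remote sensing"],
]

co_occurrence = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        co_occurrence[(a, b)] += 1

# Frequently co-occurring pairs hint at candidate research hotspots.
for pair, count in co_occurrence.most_common(5):
    print(pair, count)
```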
Funding: This work is supported by the National Key Research and Development Program of China under Grant 2017YFB1002304 and the China Scholarship Council under Grant 201906465021.
Abstract: Syndrome differentiation is the core diagnostic method of Traditional Chinese Medicine (TCM). We propose a method that simulates syndrome differentiation through deductive reasoning on a knowledge graph to achieve automated diagnosis in TCM. We analyze the reasoning path patterns from symptoms to syndromes on the knowledge graph. There are two kinds of path patterns: one-hop and two-hop. The one-hop path pattern maps a symptom to syndromes directly. The two-hop path pattern maps a symptom to syndromes through the nature of the disease, etiology, and pathomechanism to support diagnostic reasoning. Considering the different support strengths of the knowledge paths in reasoning, we design a dynamic weight mechanism and utilize Naïve Bayes and TF-IDF to implement the reasoning method and the weighted score calculation. The proposed method infers the syndrome results by calculating the probability according to the weighted scores of the paths in the knowledge graph based on the reasoning path patterns. We evaluate the method with clinical records and clinical practice in hospitals. The preliminary results suggest that the method achieves high performance and can help TCM doctors make better diagnostic decisions in practice. Meanwhile, the method is robust and explainable under the guidance of the knowledge graph. It could help TCM physicians, especially primary physicians in rural areas, and provide clinical decision support in clinical practice.
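A minimal sketch of the path-based scoring idea is given below: one-hop and two-hop paths each carry a weight, and a syndrome's score accumulates over the paths reachable from the observed symptoms. The symptom, syndrome, and weight values are invented for illustration; in the paper the weights are derived with Naïve Bayes and TF-IDF rather than set by hand.

```python
# Toy path-weighted scoring of syndromes from symptoms in a small knowledge graph.
from collections import defaultdict

# one-hop: symptom -> (syndrome, weight)
one_hop = {"fever": [("wind-heat syndrome", 0.8)]}
# two-hop: symptom -> (intermediate node such as etiology/pathomechanism, syndrome, weight)
two_hop = {"fever": [("heat pathogen", "wind-heat syndrome", 0.5),
                     ("yin deficiency", "yin-deficiency syndrome", 0.3)]}

def score_syndromes(symptoms):
    scores = defaultdict(float)
    for s in symptoms:
        for syndrome, w in one_hop.get(s, []):
            scores[syndrome] += w                     # direct evidence
        for _, syndrome, w in two_hop.get(s, []):
            scores[syndrome] += w                     # indirect evidence via disease nature
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(score_syndromes(["fever"]))
```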
Funding: Supported by the Key R&D Program Project of Zhejiang Province under Grants 2019C01004 and 2021C02004.
Abstract: Purpose: Due to the inherent incompleteness of knowledge graphs (KGs), the task of predicting missing links between entities is important. Many previous approaches are static, which poses a notable problem: all meanings of a polysemous entity share one embedding vector. This study proposes a polysemous embedding approach, named KG embedding under relational contexts (ContE for short), for missing link prediction. Design/methodology/approach: ContE models and infers different relationship patterns by considering the context of the relationship, which is implicit in the local neighborhood of the relationship. The forward and backward impacts of the relationship in ContE are mapped to two different embedding vectors, which represent the contextual information of the relationship. Then, according to the position of the entity, the entity's polysemous representation is obtained by adding its static embedding vector to the corresponding context vector of the relationship. Findings: ContE is fully expressive; that is, given any ground truth over the triples, there are embedding assignments to entities and relations that can precisely separate the true triples from false ones. ContE is capable of modeling four connectivity patterns: symmetry, antisymmetry, inversion, and composition. Research limitations: ContE needs a grid search to find the best parameters, which is a time-consuming task, and it sometimes requires longer entity vectors than other models to achieve its best performance. Practical implications: ContE is a bilinear model, simple enough to be applied to large-scale KGs. By considering the contexts of relations, ContE can distinguish the exact meaning of an entity in different triples, so that when performing compositional reasoning it can infer the connectivity patterns of relations and achieve good performance on link prediction tasks. Originality/value: ContE considers the contexts of entities in terms of their positions in triples and the relationships they link to. It decomposes a relation vector into two vectors, namely a forward impact vector and a backward impact vector, in order to capture relational contexts. ContE has the same low computational complexity as TransE and therefore provides a new approach to contextualized knowledge graph embedding.
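The sketch below illustrates the core idea as the abstract states it: each entity's contextual representation is its static embedding plus the relation's forward or backward context vector, depending on whether it appears as head or tail. The abstract only says ContE is bilinear, so the DistMult-style scoring product used here is an assumption for illustration, not the paper's actual scoring function; all embeddings are random placeholders.

```python
# Hedged sketch of the ContE idea: contextualize entities with the relation's
# forward/backward impact vectors, then score with a simple bilinear product.
import numpy as np

dim = 4
rng = np.random.default_rng(0)
entity_emb = {"aspirin": rng.normal(size=dim), "headache": rng.normal(size=dim)}
relation = {
    "treats": {
        "rel": rng.normal(size=dim),       # static relation embedding
        "forward": rng.normal(size=dim),   # context added to the head entity
        "backward": rng.normal(size=dim),  # context added to the tail entity
    }
}

def score(head, rel_name, tail):
    r = relation[rel_name]
    h_ctx = entity_emb[head] + r["forward"]    # polysemous head representation
    t_ctx = entity_emb[tail] + r["backward"]   # polysemous tail representation
    return float(np.sum(h_ctx * r["rel"] * t_ctx))  # assumed bilinear (DistMult-style) score

print(score("aspirin", "treats", "headache"))
```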