Purpose: This paper aims to address the limitations in existing research on the evolution of knowledge flow networks by proposing a meso-level institutional field knowledge flow network evolution model (IKM). The purpose is to simulate the construction process of a knowledge flow network using knowledge organizations as units and to investigate its effectiveness in replicating institutional field knowledge flow networks. Design/Methodology/Approach: The IKM model enhances the preferential attachment and growth observed in scale-free BA networks, while incorporating three adjustment parameters to simulate the selection of connection targets and the types of nodes involved in the network evolution process. The PageRank algorithm is used to calculate the significance of nodes within the knowledge flow network. To compare its performance, the BA and DMS models are also employed to simulate the network. Pearson coefficient analysis is conducted on the simulated networks generated by the IKM, BA and DMS models, as well as on the actual network. Findings: The research findings demonstrate that the IKM model outperforms the BA and DMS models in replicating the institutional field knowledge flow network. It provides comprehensive insights into the evolution mechanism of knowledge flow networks in the scientific research realm. The model also exhibits potential applicability to other knowledge networks that involve knowledge organizations as node units. Research Limitations: This study has some limitations. First, it primarily focuses on the evolution of knowledge flow networks within the field of physics, neglecting other fields. Additionally, the analysis is based on a specific set of data, which may limit the generalizability of the findings. Future research could address these limitations by exploring knowledge flow networks in diverse fields and utilizing broader datasets. Practical Implications: The proposed IKM model offers practical implications for the construction and analysis of knowledge flow networks within institutions. It provides a valuable tool for understanding and managing knowledge exchange between knowledge organizations. The model can aid in optimizing knowledge flow and enhancing collaboration within organizations. Originality/Value: This research highlights the significance of meso-level studies in understanding knowledge organization and its impact on knowledge flow networks. The IKM model demonstrates its effectiveness in replicating institutional field knowledge flow networks and offers practical implications for knowledge management in institutions. Moreover, the model has the potential to be applied to other knowledge networks formed by knowledge organizations as node units.
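As an illustration of the evaluation pipeline this abstract describes, the sketch below shows how node significance can be computed with PageRank and how the significance vectors of a simulated and an observed network can be compared with a Pearson coefficient. It is a minimal stand-in, not the IKM code: the toy graphs, the damping factor of 0.85, and the node-alignment step are all assumptions.

```python
# Minimal sketch (not the authors' code): rank nodes of a simulated and a real
# knowledge flow network with PageRank, then compare the two rankings with a
# Pearson correlation coefficient. Graph construction here is illustrative.
import networkx as nx
from scipy.stats import pearsonr

def node_significance(graph: nx.DiGraph) -> dict:
    """PageRank score for every knowledge organization (node)."""
    return nx.pagerank(graph, alpha=0.85)

# Toy stand-ins for a simulated network and the observed institutional network.
simulated = nx.gnp_random_graph(50, 0.08, seed=1, directed=True)
observed = nx.gnp_random_graph(50, 0.08, seed=2, directed=True)

sim_scores = node_significance(simulated)
obs_scores = node_significance(observed)

# Align scores by node id and correlate the two significance vectors.
nodes = sorted(set(sim_scores) & set(obs_scores))
r, p_value = pearsonr([sim_scores[n] for n in nodes],
                      [obs_scores[n] for n in nodes])
print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")
```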
The growing prevalence of knowledge reasoning using knowledge graphs (KGs) has substantially improved the accuracy and efficiency of intelligent medical diagnosis. However, current models primarily integrate electronic medical records (EMRs) and KGs into the knowledge reasoning process, ignoring the differing significance of various types of knowledge in EMRs and the diverse data types present in the text. To better integrate EMR text information, we propose a novel intelligent diagnostic model named the Graph ATtention network incorporating Text representation in knowledge reasoning (GATiT), which comprises text representation, subgraph construction, knowledge reasoning, and diagnostic classification. In the text representation process, GATiT uses a pre-trained model to obtain text representations of the EMRs and additionally enhances the embeddings by including chief complaint information and numerical information in the input. In the subgraph construction process, GATiT constructs text subgraphs and disease subgraphs from the KG, utilizing the EMR text and the disease to be diagnosed. To differentiate the varying importance of nodes within the subgraphs, features such as node categories, relevance scores, and other relevant factors are introduced into the text subgraph. The message-passing strategy and attention weight calculation of the graph attention network are adjusted to learn these features in the knowledge reasoning process. Finally, in the diagnostic classification process, the interactive attention-based fusion method integrates the results of knowledge reasoning with text representations to produce the final diagnosis results. Experimental results on multi-label and single-label EMR datasets demonstrate the model's superiority over several state-of-the-art methods.
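The attention-weight adjustment described above can be pictured with a small GAT-style layer that appends per-node side features (for example a category embedding or a relevance score) to the attention input. This is an illustrative sketch only; the tensor shapes, feature layout, and class name are assumptions, not the GATiT implementation.

```python
# Illustrative sketch only: a GAT-style attention score that also consumes
# per-node side features, mirroring the idea of adjusting attention weights
# with subgraph features such as node category and relevance score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAwareAttention(nn.Module):
    def __init__(self, dim: int, side_dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.att = nn.Linear(2 * dim + side_dim, 1, bias=False)

    def forward(self, h, edge_index, side):
        # h: (N, dim) node embeddings; side: (N, side_dim) extra node features
        src, dst = edge_index                      # each of shape (E,)
        z = self.proj(h)
        logits = self.att(torch.cat([z[src], z[dst], side[src]], dim=-1)).squeeze(-1)
        alpha = torch.zeros_like(logits)
        # Softmax over the incoming edges of each destination node.
        for node in dst.unique():
            mask = dst == node
            alpha[mask] = F.softmax(logits[mask], dim=0)
        # Aggregate attended source embeddings into the destination nodes.
        return torch.zeros_like(z).index_add_(0, dst, alpha.unsqueeze(-1) * z[src])

# Toy usage: 4 nodes, 3 edges into node 3, 2-dimensional side features.
h, side = torch.randn(4, 8), torch.randn(4, 2)
edge_index = torch.tensor([[0, 1, 2], [3, 3, 3]])
print(FeatureAwareAttention(8, 2)(h, edge_index, side).shape)   # (4, 8)
```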
The acquisition of valuable design knowledge from massive fragmentary data is challenging for designers in conceptual product design. This study proposes a novel method for acquiring design knowledge by combining deep learning with a knowledge graph. Specifically, the design knowledge acquisition method utilises a knowledge extraction model to extract design-related entities and relations from fragmentary data, and further constructs the knowledge graph to support design knowledge acquisition for conceptual product design. Moreover, the knowledge extraction model introduces ALBERT to address the memory limitation and communication overhead in the entity extraction module, and uses multi-granularity information to overcome segmentation errors and polysemy ambiguity in the relation extraction module. Experimental comparison verified the effectiveness and accuracy of the proposed knowledge extraction model. The case study demonstrated the feasibility of knowledge graph construction with real fragmentary porcelain data and showed the capability to provide designers with interconnected and visualised design knowledge.
Knowledge graph (KG) serves as a specialized semantic network that encapsulates intricate relationships among real-world entities within a structured framework. This framework facilitates a transformation in information retrieval, transitioning it from mere string matching to far more sophisticated entity matching. In this transformative process, the advancement of artificial intelligence and intelligent information services is invigorated. Meanwhile, machine learning methods play an important role in the construction of KGs, and these techniques have already achieved initial success. This article embarks on a comprehensive journey through the latest strides in the field of KG construction via machine learning. With a profound amalgamation of cutting-edge research in machine learning, this article undertakes a systematic exploration of KG construction methods in three distinct phases: entity learning, ontology learning, and knowledge reasoning. In particular, a meticulous dissection of machine learning-driven algorithms is conducted, spotlighting their contributions to critical facets such as entity extraction, relation extraction, entity linking, and link prediction. Moreover, this article also provides an analysis of the unresolved challenges and emerging trajectories that beckon within the expansive application of machine learning-fueled, large-scale KG construction.
This study endeavors to formulate a comprehensive methodology for establishing a Geological Knowledge Base (GKB) tailored to fracture-cavity reservoir outcrops within the North Tarim Basin. The acquisition of quantitative geological parameters was accomplished through diverse means such as outcrop observations, thin section studies, unmanned aerial vehicle scanning, and high-resolution cameras. Subsequently, a three-dimensional digital outcrop model was generated, and the parameters were standardized. An assessment of traditional geological knowledge was conducted to delineate the knowledge framework, content, and system of the GKB. The basic parameter knowledge was extracted using multiscale fine characterization techniques, including core statistics, field observations, and microscopic thin section analysis. Key mechanism knowledge was identified by integrating trace elements from filling, isotope geochemical tests, and water-rock simulation experiments. Significant representational knowledge was then extracted by employing various methods such as multiple linear regression, neural network technology, and discriminant classification. Subsequently, an analogy study was performed on the karst fracture-cavity system (KFCS) in both outcrop and underground reservoir settings. The results underscored several key findings: (1) Utilization of a diverse range of techniques, including outcrop observations, core statistics, unmanned aerial vehicle scanning, high-resolution cameras, thin section analysis, and electron scanning imaging, enabled the acquisition and standardization of data. This facilitated effective management and integration of geological parameter data from multiple sources and scales. (2) The GKB for fracture-cavity reservoir outcrops, encompassing basic parameter knowledge, key mechanism knowledge, and significant representational knowledge, provides robust data support and systematic geological insights for the intricate and in-depth examination of the genetic mechanisms of fracture-cavity reservoirs. (3) The developmental characteristics of fracture-cavities in karst outcrops offer effective, efficient, and accurate guidance for fracture-cavity research in underground karst reservoirs. The outlined construction method of the outcrop geological knowledge base is applicable to various fracture-cavity reservoirs in different layers and regions worldwide.
Utilizing graph neural networks for knowledge embedding has become an important research area in knowledge graph completion (KGC). However, the number of nodes in a knowledge graph increases exponentially with the depth of the tree, whereas node distances in Euclidean space grow only polynomially, so knowledge embedding using graph neural networks in Euclidean space cannot represent the distances between nodes well. This paper introduces a novel approach called the hyperbolic hierarchical graph attention network (H2GAT) to rectify this limitation. First, the paper conducts knowledge representation in hyperbolic space, effectively mitigating the issue of exponential growth of nodes with tree depth and the consequent information loss. Second, it introduces a hierarchical graph attention mechanism specifically designed for hyperbolic space, allowing for enhanced capture of the network structure inherent in the knowledge graph. Finally, the efficacy of the proposed H2GAT model is evaluated on the benchmark datasets WN18RR and FB15K-237, thereby validating its effectiveness. The H2GAT model achieved 0.445, 0.515, and 0.586 in the Hits@1, Hits@3 and Hits@10 metrics respectively on the WN18RR dataset, and 0.243, 0.367 and 0.518 on the FB15K-237 dataset. By incorporating hyperbolic space embedding and hierarchical graph attention, the H2GAT model successfully addresses the limitations of existing hyperbolic knowledge embedding models, exhibiting its competence in knowledge graph completion tasks.
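Two of the quantities central to this abstract are easy to state concretely: the geodesic distance in the Poincaré ball model of hyperbolic space, whose tree-friendly geometry motivates the model, and the Hits@k metric used to report results. The sketch below is illustrative only, not the H2GAT code; the example embeddings and ranks are arbitrary.

```python
# Minimal sketch, not the H2GAT code: distance in the Poincaré ball model of
# hyperbolic space (which grows with hierarchy depth while neighbourhood size
# grows exponentially) and the Hits@k metric used in the evaluation.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points inside the unit Poincaré ball."""
    sq_u, sq_v = np.dot(u, u), np.dot(v, v)
    sq_diff = np.dot(u - v, u - v)
    return np.arccosh(1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v)))

def hits_at_k(ranks, k: int) -> float:
    """Fraction of test triples whose correct entity is ranked in the top k."""
    return float(np.mean(np.asarray(ranks) <= k))

# Toy usage: a point near the boundary is hyperbolically far from the origin region.
print(poincare_distance(np.array([0.1, 0.0]), np.array([0.85, 0.0])))
print(hits_at_k([1, 4, 12, 2, 7], k=3))   # -> 0.4
```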
High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high-mobility environment. In order to protect data privacy and improve data learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as the local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. Aiming to improve the resource scheduling efficiency in FBL, a double Davidon–Fletcher–Powell (DDFP) algorithm is presented to solve the time slot allocation and RIS configuration problem. Based on the results of resource scheduling, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
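The server-side aggregation step that federated frameworks such as FBL build on can be sketched in a few lines. The snippet below shows a generic weighted average of client model parameters; weighting by local sample count and the toy parameter vectors are assumptions for illustration, not the FBL/BFCM specifics.

```python
# Illustrative sketch only: server-side weighted aggregation of client model
# parameters, the generic federated-averaging step underlying frameworks of
# this kind. Weighting by local data size is an assumption made here.
import numpy as np

def aggregate(client_params, client_sizes):
    """Weighted average of client parameter vectors."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)            # (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Toy usage: three clients with different amounts of local data.
params = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 4.0])]
sizes = [100, 300, 600]
print(aggregate(params, sizes))   # global model parameters sent back to clients
```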
Quality management is a constant and significant concern in enterprises. Effective determination of correct solutions for comprehensive problems helps avoid increased backtesting costs. This study proposes an intelligent quality control method for manufacturing processes based on a human–cyber–physical (HCP) knowledge graph, which is a systematic method that encompasses the following elements: data management and classification based on HCP ternary data, HCP ontology construction, knowledge extraction for constructing an HCP knowledge graph, and comprehensive application of quality control based on HCP knowledge. The proposed method implements case retrieval, automatic analysis, and assisted decision making based on an HCP knowledge graph, enabling quality monitoring, inspection, diagnosis, and maintenance strategies for quality control. In practical applications, the proposed modular and hierarchical HCP ontology exhibits significant superiority in terms of shareability and reusability of the acquired knowledge. Moreover, the HCP knowledge graph deeply integrates the provided HCP data and effectively supports comprehensive decision making. The proposed method was implemented in cases involving an automotive production line and a gear manufacturing process, and its effectiveness was verified by the deployed application system. Furthermore, the proposed method can be extended to other manufacturing process quality control tasks.
Knowledge distillation, as a pivotal technique in the field of model compression, has been widely applied across various domains. However, the problem of student model performance being limited by inherent biases in the teacher model during the distillation process still persists. To address these inherent biases, we propose a de-biased knowledge distillation framework tailored for binary classification tasks. For the pre-trained teacher model, biases in the soft labels are mitigated through knowledge infusion and label de-biasing techniques. Based on this, a de-biased distillation loss is introduced, allowing the de-biased labels to replace the soft labels as the fitting target for the student model. This approach enables the student model to learn from the corrected model information, achieving high-performance deployment on lightweight student models. Experiments conducted on multiple real-world datasets demonstrate that deep learning models compressed under the de-biased knowledge distillation framework significantly outperform traditional response-based and feature-based knowledge distillation models across various evaluation metrics, highlighting the effectiveness and superiority of the de-biased knowledge distillation framework in model compression.
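The de-biased distillation loss described above follows the familiar pattern of a soft-target term plus a hard-label term, with corrected labels replacing the teacher's raw soft labels as the fitting target. The sketch below illustrates that pattern under assumptions: the temperature, the weighting factor alpha, and the pre-computed de-biased label matrix are placeholders, and the de-biasing step itself is not shown.

```python
# Minimal sketch (assumptions, not the paper's loss): a distillation objective
# for binary classification in which corrected ("de-biased") labels replace
# the teacher's soft labels, combined with cross-entropy on the hard labels.
import torch
import torch.nn.functional as F

def debiased_distillation_loss(student_logits, debiased_soft_labels,
                               hard_labels, temperature=2.0, alpha=0.5):
    # Soft-target term: student matches the de-biased label distribution.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_term = F.kl_div(log_p_student, debiased_soft_labels,
                         reduction="batchmean") * temperature ** 2
    # Hard-target term: ordinary cross-entropy with the ground truth.
    hard_term = F.cross_entropy(student_logits, hard_labels)
    return alpha * soft_term + (1.0 - alpha) * hard_term

# Toy usage: batch of 4 examples, 2 classes.
student_logits = torch.randn(4, 2)
debiased = torch.tensor([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.4, 0.6]])
labels = torch.tensor([0, 1, 0, 1])
print(debiased_distillation_loss(student_logits, debiased, labels))
```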
In recent years, with the continuous development of deep learning and knowledge graph reasoning methods, more and more researchers have shown great interest in improving knowledge graph reasoning methods by inferring missing facts through reasoning. By searching paths on the knowledge graph and making fact and link predictions based on these paths, deep learning-based Reinforcement Learning (RL) agents can demonstrate good performance and interpretability. Therefore, deep reinforcement learning-based knowledge reasoning methods have rapidly emerged in recent years and have become a hot research topic. However, even in a small and fixed knowledge graph reasoning action space, there are still a large number of invalid actions. This often interrupts the RL agent's walk when invalid actions are selected, resulting in a significant decrease in the success rate of path mining. In order to improve the success rate of RL agents in the early stages of path search, this article proposes a knowledge reasoning method based on a Deep Transfer Reinforcement Learning path (DTRLpath). Before supervised pre-training and retraining, a pre-task of searching for effective actions in a single step is added. The RL agent is first trained on the pre-task to improve its ability to search for effective actions. Then, the trained agent is transferred to the target reasoning task for path search training, which improves its success rate in searching for target task paths. Finally, based on the comparative experimental results on the FB15K-237 and NELL-995 datasets, it can be concluded that the proposed method significantly improves the success rate of path search and outperforms similar methods in most reasoning tasks.
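The failure mode the pre-task targets, agents wasting steps on invalid actions, can be pictured by restricting the policy's sampling to the actions actually available from the current entity. The sketch below is a toy action-masking example under assumptions (made-up policy scores and valid-action set), not the DTRLpath agent.

```python
# Illustrative sketch, not the DTRLpath agent: sample only among the actions
# (outgoing relations) that actually exist from the current entity, so the
# walk is never interrupted by an invalid choice.
import numpy as np

rng = np.random.default_rng(0)

def sample_action(policy_scores, valid_action_ids):
    """Softmax over the scores of valid actions only, then sample one."""
    scores = policy_scores[valid_action_ids]
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return rng.choice(valid_action_ids, p=probs)

# Toy usage: 10 possible relations, only 3 are valid from the current node.
policy_scores = rng.normal(size=10)
valid = np.array([1, 4, 7])
print(sample_action(policy_scores, valid))
```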
Enterprise risk management holds significant importance in fostering the sustainable growth of businesses and serves as a critical element for regulatory bodies to uphold market order. Amidst the challenges posed by intricate and unpredictable risk factors, knowledge graph technology is effectively driving risk management, leveraging its ability to associate and infer knowledge from diverse sources. This review aims to comprehensively summarize the construction techniques of enterprise risk knowledge graphs and their prominent applications across various business scenarios. First, bibliometric methods are employed to uncover the developmental trends and current research hotspots within the domain of enterprise risk knowledge graphs. The succeeding section systematically delineates the technical methods for knowledge extraction and fusion in the standardized construction process of enterprise risk knowledge graphs; by objectively comparing and summarizing the strengths and weaknesses of each method, we provide recommendations for addressing the existing challenges in the construction process. Subsequently, the applied research on enterprise risk knowledge graphs is categorized according to research hotspots and risk category standards, with a detailed exposition of the applicability of the technical routes and methods. Finally, the future research directions that still need to be explored in enterprise risk knowledge graphs are discussed, and relevant improvement suggestions are proposed. On this foundation, practitioners and researchers can gain insights into the technical theories and practical guidance for constructing enterprise risk knowledge graphs.
Strabismus significantly impacts human health as a prevalent ophthalmic condition. Early detection of strabismus is crucial for effective treatment and prognosis. Traditional deep learning models for strabismus detection often fail to estimate prediction certainty precisely. This paper employed a Bayesian deep learning algorithm with knowledge distillation, improving the model's performance and uncertainty estimation ability. Trained on 6807 images from two tertiary hospitals, the model showed significantly higher diagnostic accuracy than traditional deep learning models. Experimental results revealed that knowledge distillation enhanced the Bayesian model's performance and uncertainty estimation ability. These findings underscore the combined benefits of using Bayesian deep learning algorithms and knowledge distillation, which improve the reliability and accuracy of strabismus diagnostic predictions.
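The uncertainty estimation mentioned above can be illustrated with Monte Carlo dropout, one common way of approximating Bayesian predictive uncertainty; this choice, together with the network, data, and number of forward passes below, is an assumption for illustration and not the paper's exact Bayesian formulation.

```python
# Sketch under an assumption: Monte Carlo dropout as a stand-in for Bayesian
# uncertainty estimation. The model and inputs are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.3), nn.Linear(32, 2))

def mc_dropout_predict(model, x, passes=50):
    model.train()                      # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(passes)])
    return probs.mean(0), probs.std(0)  # predictive mean and per-class uncertainty

x = torch.randn(8, 16)                  # a batch of 8 placeholder feature vectors
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)            # (8, 2), (8, 2)
```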
Identification of the underlying partial differential equations (PDEs) for complex systems remains a formidable challenge. In the present study, a robust PDE identification method is proposed, demonstrating the ability to extract accurate governing equations under noisy conditions without prior knowledge. Specifically, the proposed method combines gene expression programming, a type of evolutionary algorithm capable of generating unseen terms based solely on basic operators and functional terms, with symbolic regression neural networks. These networks are designed to represent explicit functional expressions and optimize them with data gradients. In particular, the specifically designed neural networks can be easily transformed into physical constraints for the training data, embedding the discovered PDEs to further optimize the metadata used for iterative PDE identification. The proposed method has been tested on four canonical PDE cases, validating its effectiveness without preliminary information and confirming its suitability for practical applications across various noise levels.
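A core step in any data-driven PDE identification scheme is scoring a candidate set of terms against the observed time derivative. The sketch below shows a least-squares fit and residual score of that kind on synthetic Burgers-like data; it illustrates the general idea only and is not the paper's gene-expression-programming or symbolic-regression-network machinery.

```python
# Minimal sketch, not the paper's method: score a candidate set of PDE terms
# by least-squares fitting u_t = Theta @ xi and measuring the residual -- the
# kind of fitness evaluation an evolutionary search over candidate terms needs.
import numpy as np

def fit_candidate(u_t, candidate_terms):
    """Return coefficients and relative residual for u_t ≈ sum_i xi_i * term_i."""
    theta = np.column_stack(candidate_terms)
    xi, *_ = np.linalg.lstsq(theta, u_t, rcond=None)
    residual = np.linalg.norm(theta @ xi - u_t) / np.linalg.norm(u_t)
    return xi, residual

# Toy synthetic data: pretend u_t = -u*u_x + 0.1*u_xx plus noise.
rng = np.random.default_rng(0)
u, u_x, u_xx = rng.normal(size=(3, 500))
u_t = -u * u_x + 0.1 * u_xx + 0.01 * rng.normal(size=500)
xi, res = fit_candidate(u_t, [u * u_x, u_xx])
print(xi, res)   # coefficients close to [-1.0, 0.1], small residual
```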
Time-frequency analysis is a successfully used tool for analyzing the local features of seismic data. However, it suffers from several inevitable limitations, such as restricted time-frequency resolution, difficulty in selecting parameters, and low computational efficiency. Inspired by deep learning, we suggest a deep learning-based workflow for seismic time-frequency analysis. The sparse S transform network (SSTNet) is first built to map the relationship between synthetic traces and sparse S transform spectra, and it can be easily pre-trained using synthetic traces and training labels. Next, we introduce knowledge distillation (KD)-based transfer learning to re-train SSTNet using a field data set without training labels, which is named the sparse S transform network with knowledge distillation (KD-SSTNet). In this way, we can effectively calculate the sparse time-frequency spectra of field data and avoid the use of field training labels. To test the availability of the suggested KD-SSTNet, we apply it to field data to estimate seismic attenuation for reservoir characterization and make detailed comparisons with traditional time-frequency analysis methods.
With widespread data collection and processing, privacy-preserving machine learning has become increasingly important in addressing privacy risks related to individuals. Support vector machine (SVM) is one of the most elementary learning models in machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. Therefore, FedDPDR-DPML enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation, and then the server distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. Besides, we train differential privacy (DP)-compliant SVM classifiers by adding noise to the objective function itself, thus leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML can achieve high accuracy while ensuring strong privacy protection.
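One standard way to make a PCA step differentially private is to perturb the sample covariance matrix before the eigendecomposition. The sketch below illustrates that idea only; the noise calibration, scale, and function names are simplified assumptions rather than the FedDPDR-DPML mechanism or its objective-perturbation step for SVM training.

```python
# Illustrative sketch only: Gaussian-style perturbation of the covariance
# matrix before PCA, one common route to a differentially private
# dimensionality-reduction step. Noise calibration here is simplified.
import numpy as np

def dp_pca_components(X, n_components, noise_std):
    """Top principal components of a noise-perturbed covariance matrix."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)
    noise = np.random.normal(0.0, noise_std, size=cov.shape)
    noise = (noise + noise.T) / 2.0          # keep the perturbed matrix symmetric
    eigvals, eigvecs = np.linalg.eigh(cov + noise)
    return eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]

X = np.random.normal(size=(200, 50))          # toy high-dimensional data
W = dp_pca_components(X, n_components=5, noise_std=0.05)
print(((X - X.mean(0)) @ W).shape)            # reduced 5-dimensional features
```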
Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) has been proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on entity representations that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
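The semantic-synthesis step described above, multi-headed self-attention over an entity's media features, can be sketched with a standard attention module. The modality count, embedding size, and mean pooling below are illustrative assumptions, not the MCRJI architecture.

```python
# Sketch under assumptions: fuse several media-feature embeddings of one
# entity (e.g. text, image, audio) with multi-head self-attention, then pool
# them into a single converged entity embedding.
import torch
import torch.nn as nn

embed_dim, num_modalities = 64, 3
attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)

# Two entities, three modality embeddings each: (batch=2, modalities=3, dim=64).
modality_embeddings = torch.randn(2, num_modalities, embed_dim)
fused, weights = attn(modality_embeddings, modality_embeddings, modality_embeddings)

# Pool the attended modality representations into one entity embedding.
entity_embedding = fused.mean(dim=1)
print(entity_embedding.shape, weights.shape)   # (2, 64), (2, 3, 3)
```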
Worldwide interest has increasingly focused on the sustainable utilization of landscape as a resource in urban areas, emphasizing its ecological, cultural and social significance. This study examines Guilin City, China, as a representative case due to its rich landscape resources and its status as a national innovation demonstration zone for implementing the 2030 Agenda for Sustainable Development. The study uses bibliometric visualization tools such as CiteSpace and VOSviewer to analyze research trends from 1980 to 2021 in the Chinese Academic Journal Network Publishing Database (CNKI). The results show increasing academic interest over three stages: initiation (1982-1997), exploration (1998-2004), and diversified development (2005-2021). Contributions are predominantly from local academic and tourism sectors, indicating a strong regional influence; however, inter-institutional collaboration is relatively weak, suggesting potential for more integrated research efforts. Primary research is also concentrated within economic disciplines, particularly tourism-related ones. The evolution of research frontiers reveals three main paths: urban development strategies, industrial economic theories and empirical validation, and ecosystem analysis and evaluation. A multidisciplinary approach and stronger collaborative efforts are crucial to enhance research on ecological values and empirical models while supporting evidence-based urban development strategies in Guilin City and comparable cities globally.
Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually struggle to learn and acquire the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are more friendly to active users with rich historical behaviors, but they cannot effectively solve the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that incorporates Large Language Models (LLMs) and Knowledge Graphs (KGs) into traditional methods. To learn the contextual information of news text, we use LLMs' powerful text understanding ability to generate news representations with rich semantic information, and then the generated news representations are used to enhance the news encoding in traditional methods. In addition, the multi-hop relationships of news entities are mined and the structural information of news is encoded using the KG, thus alleviating the challenge of the long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation indicators such as AUC, MRR, nDCG@5 and nDCG@10. The successful integration of LLM and KG in our framework has established a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
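The enhancement described above, combining an LLM-derived text representation with KG-derived entity information inside the news encoder, can be sketched with placeholder embeddings. Everything below (dimensions, the fusion layer, dot-product scoring) is an assumption for illustration, not the LKPNR implementation in the linked repository.

```python
# Minimal sketch, not the paper's architecture: concatenate an LLM text
# embedding with an aggregated KG entity embedding to form the news
# representation, then score candidates against a user vector by dot product.
import torch
import torch.nn as nn

llm_dim, kg_dim, hidden = 768, 128, 256
fuse = nn.Linear(llm_dim + kg_dim, hidden)

llm_text_emb = torch.randn(5, llm_dim)      # 5 candidate news, placeholder LLM features
kg_entity_emb = torch.randn(5, kg_dim)      # placeholder multi-hop entity features
news_repr = torch.tanh(fuse(torch.cat([llm_text_emb, kg_entity_emb], dim=-1)))

user_repr = torch.randn(hidden)             # user vector from clicked-news history
scores = news_repr @ user_repr              # higher score = stronger recommendation
print(scores.topk(3).indices)               # indices of the top-3 recommended news
```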
This study employed the bibliometric software CiteSpace 6.1.R6 to analyze the relationship between thermal infrared and spectral remote sensing technology and the estimation of economic forest water stress. It aimed to review the development and current status of this field, as well as to identify future research trends. A search was conducted on the China National Knowledge Infrastructure (CNKI) database using the keyword "water stress" for relevant studies from 2003 to 2023. The visual analysis function of CNKI was used to generate the distribution of annual publication volume, and CiteSpace 6.1.R6 was utilized to create network maps illustrating collaboration among authors and institutions. The study also analyzed the hotspots and frontiers of economic forest water stress research. As a result, a total of 6664 academic journal articles related to water stress were retrieved. Considerable collaboration networks were observed among scholars and institutions, with a focus on using crown temperature monitoring to diagnose crop water stress. Based on these findings, the primary research trend involves the use of thermal infrared and spectral remote sensing technology for estimating water stress, making it a future research hotspot.
Objective: To grasp the changing trends in research hotspots of traditional Chinese medicine (TCM) in the prevention and treatment of COVID-19, and to better leverage the role of TCM in the prevention and treatment of COVID-19 and other diseases. Methods: The research literature from 2020 to 2022 was searched in the CNKI database, and CiteSpace software was used for visual analysis. Results: The papers on the prevention and treatment of COVID-19 by TCM shifted from cases, overviews, reports, and efficacy studies to more in-depth mechanism research, theoretical exploration, and social impact analysis, ultimately forming a multi-dimensional system of achievements spanning theory, clinical practice, social influence, and institutional change. Conclusion: Analyzing the changing trends of TCM hotspots in the prevention and treatment of COVID-19 helps to fully understand the important value of TCM, to take the coordination of TCM and Western medicine as an important means of responding to public health security incidents, and to promote the exploration of the potential efficacy of TCM, so as to enhance the role of TCM in applications such as social stability, emergency security, and clinical practice.
基金supported in part by the National Natural Science Foundation of China under Grant 72264036in part by the West Light Foundation of The Chinese Academy of Sciences under Grant 2020-XBQNXZ-020+1 种基金Social Science Foundation of Xinjiang under Grant 2023BGL077the Research Program for High-level Talent Program of Xinjiang University of Finance and Economics 2022XGC041,2022XGC042.
文摘Purpose:This paper aims to address the limitations in existing research on the evolution of knowledge flow networks by proposing a meso-level institutional field knowledge flow network evolution model(IKM).The purpose is to simulate the construction process of a knowledge flow network using knowledge organizations as units and to investigate its effectiveness in replicating institutional field knowledge flow networks.Design/Methodology/Approach:The IKM model enhances the preferential attachment and growth observed in scale-free BA networks,while incorporating three adjustment parameters to simulate the selection of connection targets and the types of nodes involved in the network evolution process Using the PageRank algorithm to calculate the significance of nodes within the knowledge flow network.To compare its performance,the BA and DMS models are also employed for simulating the network.Pearson coefficient analysis is conducted on the simulated networks generated by the IKM,BA and DMS models,as well as on the actual network.Findings:The research findings demonstrate that the IKM model outperforms the BA and DMS models in replicating the institutional field knowledge flow network.It provides comprehensive insights into the evolution mechanism of knowledge flow networks in the scientific research realm.The model also exhibits potential applicability to other knowledge networks that involve knowledge organizations as node units.Research Limitations:This study has some limitations.Firstly,it primarily focuses on the evolution of knowledge flow networks within the field of physics,neglecting other fields.Additionally,the analysis is based on a specific set of data,which may limit the generalizability of the findings.Future research could address these limitations by exploring knowledge flow networks in diverse fields and utilizing broader datasets.Practical Implications:The proposed IKM model offers practical implications for the construction and analysis of knowledge flow networks within institutions.It provides a valuable tool for understanding and managing knowledge exchange between knowledge organizations.The model can aid in optimizing knowledge flow and enhancing collaboration within organizations.Originality/value:This research highlights the significance of meso-level studies in understanding knowledge organization and its impact on knowledge flow networks.The IKM model demonstrates its effectiveness in replicating institutional field knowledge flow networks and offers practical implications for knowledge management in institutions.Moreover,the model has the potential to be applied to other knowledge networks,which are formed by knowledge organizations as node units.
基金supported in part by the Science and Technology Innovation 2030-“New Generation of Artificial Intelligence”Major Project(No.2021ZD0111000)Henan Provincial Science and Technology Research Project(No.232102211039).
文摘The growing prevalence of knowledge reasoning using knowledge graphs(KGs)has substantially improved the accuracy and efficiency of intelligent medical diagnosis.However,current models primarily integrate electronic medical records(EMRs)and KGs into the knowledge reasoning process,ignoring the differing significance of various types of knowledge in EMRs and the diverse data types present in the text.To better integrate EMR text information,we propose a novel intelligent diagnostic model named the Graph ATtention network incorporating Text representation in knowledge reasoning(GATiT),which comprises text representation,subgraph construction,knowledge reasoning,and diagnostic classification.In the text representation process,GATiT uses a pre-trained model to obtain text representations of the EMRs and additionally enhances embeddings by including chief complaint information and numerical information in the input.In the subgraph construction process,GATiT constructs text subgraphs and disease subgraphs from the KG,utilizing EMR text and the disease to be diagnosed.To differentiate the varying importance of nodes within the subgraphs features such as node categories,relevance scores,and other relevant factors are introduced into the text subgraph.Themessage-passing strategy and attention weight calculation of the graph attention network are adjusted to learn these features in the knowledge reasoning process.Finally,in the diagnostic classification process,the interactive attention-based fusion method integrates the results of knowledge reasoning with text representations to produce the final diagnosis results.Experimental results on multi-label and single-label EMR datasets demonstrate the model’s superiority over several state-of-theart methods.
基金This research is supported by the Chinese Special Projects of the National Key Research and Development Plan(2019YFB1405702).
文摘The acquisition of valuable design knowledge from massive fragmentary data is challenging for designers in conceptual product design.This study proposes a novel method for acquiring design knowledge by combining deep learning with knowledge graph.Specifically,the design knowledge acquisition method utilises the knowledge extraction model to extract design-related entities and relations from fragmentary data,and further constructs the knowledge graph to support design knowledge acquisition for conceptual product design.Moreover,the knowledge extraction model introduces ALBERT to solve memory limitation and communication overhead in the entity extraction module,and uses multi-granularity information to overcome segmentation errors and polysemy ambiguity in the relation extraction module.Experimental comparison verified the effectiveness and accuracy of the proposed knowledge extraction model.The case study demonstrated the feasibility of the knowledge graph construction with real fragmentary porcelain data and showed the capability to provide designers with interconnected and visualised design knowledge.
基金supported in part by the Beijing Natural Science Foundation under Grants L211020 and M21032in part by the National Natural Science Foundation of China under Grants U1836106 and 62271045in part by the Scientific and Technological Innovation Foundation of Foshan under Grants BK21BF001 and BK20BF010。
文摘Knowledge graph(KG)serves as a specialized semantic network that encapsulates intricate relationships among real-world entities within a structured framework.This framework facilitates a transformation in information retrieval,transitioning it from mere string matching to far more sophisticated entity matching.In this transformative process,the advancement of artificial intelligence and intelligent information services is invigorated.Meanwhile,the role ofmachine learningmethod in the construction of KG is important,and these techniques have already achieved initial success.This article embarks on a comprehensive journey through the last strides in the field of KG via machine learning.With a profound amalgamation of cutting-edge research in machine learning,this article undertakes a systematical exploration of KG construction methods in three distinct phases:entity learning,ontology learning,and knowledge reasoning.Especially,a meticulous dissection of machine learningdriven algorithms is conducted,spotlighting their contributions to critical facets such as entity extraction,relation extraction,entity linking,and link prediction.Moreover,this article also provides an analysis of the unresolved challenges and emerging trajectories that beckon within the expansive application of machine learning-fueled,large-scale KG construction.
基金supported by the Major Scientific and Technological Projects of CNPC under grant ZD2019-183-006the National Science and Technology Major Project of China (2016ZX05014002-006)the National Natural Science Foundation of China (42072234,42272180)。
文摘This study endeavors to formulate a comprehensive methodology for establishing a Geological Knowledge Base(GKB)tailored to fracture-cavity reservoir outcrops within the North Tarim Basin.The acquisition of quantitative geological parameters was accomplished through diverse means such as outcrop observations,thin section studies,unmanned aerial vehicle scanning,and high-resolution cameras.Subsequently,a three-dimensional digital outcrop model was generated,and the parameters were standardized.An assessment of traditional geological knowledge was conducted to delineate the knowledge framework,content,and system of the GKB.The basic parameter knowledge was extracted using multiscale fine characterization techniques,including core statistics,field observations,and microscopic thin section analysis.Key mechanism knowledge was identified by integrating trace elements from filling,isotope geochemical tests,and water-rock simulation experiments.Significant representational knowledge was then extracted by employing various methods such as multiple linear regression,neural network technology,and discriminant classification.Subsequently,an analogy study was performed on the karst fracture-cavity system(KFCS)in both outcrop and underground reservoir settings.The results underscored several key findings:(1)Utilization of a diverse range of techniques,including outcrop observations,core statistics,unmanned aerial vehicle scanning,high-resolution cameras,thin section analysis,and electron scanning imaging,enabled the acquisition and standardization of data.This facilitated effective management and integration of geological parameter data from multiple sources and scales.(2)The GKB for fracture-cavity reservoir outcrops,encompassing basic parameter knowledge,key mechanism knowledge,and significant representational knowledge,provides robust data support and systematic geological insights for the intricate and in-depth examination of the genetic mechanisms of fracture-cavity reservoirs.(3)The developmental characteristics of fracturecavities in karst outcrops offer effective,efficient,and accurate guidance for fracture-cavity research in underground karst reservoirs.The outlined construction method of the outcrop geological knowledge base is applicable to various fracture-cavity reservoirs in different layers and regions worldwide.
基金the Beijing Municipal Science and Technology Program(No.Z231100001323004).
文摘Utilizing graph neural networks for knowledge embedding to accomplish the task of knowledge graph completion(KGC)has become an important research area in knowledge graph completion.However,the number of nodes in the knowledge graph increases exponentially with the depth of the tree,whereas the distances of nodes in Euclidean space are second-order polynomial distances,whereby knowledge embedding using graph neural networks in Euclidean space will not represent the distances between nodes well.This paper introduces a novel approach called hyperbolic hierarchical graph attention network(H2GAT)to rectify this limitation.Firstly,the paper conducts knowledge representation in the hyperbolic space,effectively mitigating the issue of exponential growth of nodes with tree depth and consequent information loss.Secondly,it introduces a hierarchical graph atten-tion mechanism specifically designed for the hyperbolic space,allowing for enhanced capture of the network structure inherent in the knowledge graph.Finally,the efficacy of the proposed H2GAT model is evaluated on benchmark datasets,namely WN18RR and FB15K-237,thereby validating its effectiveness.The H2GAT model achieved 0.445,0.515,and 0.586 in the Hits@1,Hits@3 and Hits@10 metrics respectively on the WN18RR dataset and 0.243,0.367 and 0.518 on the FB15K-237 dataset.By incorporating hyperbolic space embedding and hierarchical graph attention,the H2GAT model successfully addresses the limitations of existing hyperbolic knowledge embedding models,exhibiting its competence in knowledge graph completion tasks.
基金supported in part by the National Natural Science Foundation of China(62371116 and 62231020)in part by the Science and Technology Project of Hebei Province Education Department(ZD2022164)+2 种基金in part by the Fundamental Research Funds for the Central Universities(N2223031)in part by the Open Research Project of Xidian University(ISN24-08)Key Laboratory of Cognitive Radio and Information Processing,Ministry of Education(Guilin University of Electronic Technology,China,CRKL210203)。
文摘High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles(IoVs).However,it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high mobility environment.In order to protect data privacy and improve data learning efficiency in knowledge sharing,we propose an asynchronous federated broad learning(FBL)framework that integrates broad learning(BL)into federated learning(FL).In FBL,we design a broad fully connected model(BFCM)as a local model for training client data.To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients,we construct a joint resource allocation and reconfigurable intelligent surface(RIS)configuration optimization framework for FBL.The problem is decoupled into two convex subproblems.Aiming to improve the resource scheduling efficiency in FBL,a double Davidon–Fletcher–Powell(DDFP)algorithm is presented to solve the time slot allocation and RIS configuration problem.Based on the results of resource scheduling,we design a reward-allocation algorithm based on federated incentive learning(FIL)in FBL to compensate clients for their costs.The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency,accuracy,and cost for knowledge sharing in the IoV.
基金supported by the National Science and Technology Innovation 2030 of China Next-Generation Artificial Intelligence Major Project(2018AAA0101800)the National Natural Science Foundation of China(52375482)the Regional Innovation Cooperation Project of Sichuan Province(2023YFQ0019).
文摘Quality management is a constant and significant concern in enterprises.Effective determination of correct solutions for comprehensive problems helps avoid increased backtesting costs.This study proposes an intelligent quality control method for manufacturing processes based on a human–cyber–physical(HCP)knowledge graph,which is a systematic method that encompasses the following elements:data management and classification based on HCP ternary data,HCP ontology construction,knowledge extraction for constructing an HCP knowledge graph,and comprehensive application of quality control based on HCP knowledge.The proposed method implements case retrieval,automatic analysis,and assisted decision making based on an HCP knowledge graph,enabling quality monitoring,inspection,diagnosis,and maintenance strategies for quality control.In practical applications,the proposed modular and hierarchical HCP ontology exhibits significant superiority in terms of shareability and reusability of the acquired knowledge.Moreover,the HCP knowledge graph deeply integrates the provided HCP data and effectively supports comprehensive decision making.The proposed method was implemented in cases involving an automotive production line and a gear manufacturing process,and the effectiveness of the method was verified by the application system deployed.Furthermore,the proposed method can be extended to other manufacturing process quality control tasks.
基金supported by the National Natural Science Foundation of China under Grant No.62172056Young Elite Scientists Sponsorship Program by CAST under Grant No.2022QNRC001.
文摘Knowledge distillation,as a pivotal technique in the field of model compression,has been widely applied across various domains.However,the problem of student model performance being limited due to inherent biases in the teacher model during the distillation process still persists.To address the inherent biases in knowledge distillation,we propose a de-biased knowledge distillation framework tailored for binary classification tasks.For the pre-trained teacher model,biases in the soft labels are mitigated through knowledge infusion and label de-biasing techniques.Based on this,a de-biased distillation loss is introduced,allowing the de-biased labels to replace the soft labels as the fitting target for the student model.This approach enables the student model to learn from the corrected model information,achieving high-performance deployment on lightweight student models.Experiments conducted on multiple real-world datasets demonstrate that deep learning models compressed under the de-biased knowledge distillation framework significantly outperform traditional response-based and feature-based knowledge distillation models across various evaluation metrics,highlighting the effectiveness and superiority of the de-biased knowledge distillation framework in model compression.
基金supported by Key Laboratory of Information System Requirement,No.LHZZ202202Natural Science Foundation of Xinjiang Uyghur Autonomous Region(2023D01C55)Scientific Research Program of the Higher Education Institution of Xinjiang(XJEDU2023P127).
文摘In recent years,with the continuous development of deep learning and knowledge graph reasoning methods,more and more researchers have shown great interest in improving knowledge graph reasoning methods by inferring missing facts through reasoning.By searching paths on the knowledge graph and making fact and link predictions based on these paths,deep learning-based Reinforcement Learning(RL)agents can demonstrate good performance and interpretability.Therefore,deep reinforcement learning-based knowledge reasoning methods have rapidly emerged in recent years and have become a hot research topic.However,even in a small and fixed knowledge graph reasoning action space,there are still a large number of invalid actions.It often leads to the interruption of RL agents’wandering due to the selection of invalid actions,resulting in a significant decrease in the success rate of path mining.In order to improve the success rate of RL agents in the early stages of path search,this article proposes a knowledge reasoning method based on Deep Transfer Reinforcement Learning path(DTRLpath).Before supervised pre-training and retraining,a pre-task of searching for effective actions in a single step is added.The RL agent is first trained in the pre-task to improve its ability to search for effective actions.Then,the trained agent is transferred to the target reasoning task for path search training,which improves its success rate in searching for target task paths.Finally,based on the comparative experimental results on the FB15K-237 and NELL-995 datasets,it can be concluded that the proposed method significantly improves the success rate of path search and outperforms similar methods in most reasoning tasks.
基金supported by the Shandong Province Science and Technology Project(2023TSGC0509,2022TSGC2234)Qingdao Science and Technology Plan Project(23-1-5-yqpy-2-qy).
文摘Enterprise risk management holds significant importance in fostering sustainable growth of businesses and in serving as a critical element for regulatory bodies to uphold market order.Amidst the challenges posed by intricate and unpredictable risk factors,knowledge graph technology is effectively driving risk management,leveraging its ability to associate and infer knowledge from diverse sources.This review aims to comprehensively summarize the construction techniques of enterprise risk knowledge graphs and their prominent applications across various business scenarios.Firstly,employing bibliometric methods,the aim is to uncover the developmental trends and current research hotspots within the domain of enterprise risk knowledge graphs.In the succeeding section,systematically delineate the technical methods for knowledge extraction and fusion in the standardized construction process of enterprise risk knowledge graphs.Objectively comparing and summarizing the strengths and weaknesses of each method,we provide recommendations for addressing the existing challenges in the construction process.Subsequently,categorizing the applied research of enterprise risk knowledge graphs based on research hotspots and risk category standards,and furnishing a detailed exposition on the applicability of technical routes and methods.Finally,the future research directions that still need to be explored in enterprise risk knowledge graphs were discussed,and relevant improvement suggestions were proposed.Practitioners and researchers can gain insights into the construction of technical theories and practical guidance of enterprise risk knowledge graphs based on this foundation.
基金supported in part by the Guangdong Natu-ral Science Foundation(No.2022A1515011396)in part by the National Key R and D Program of China(No.2021ZD0111502)in part by the Science Research Startup Foundation of Shantou University(No.NTF20021)。
文摘Strabismus significantly impacts human health as a prevalent ophthalmic condition.Early detection of strabismus is crucial for effective treatment and prognosis.Traditional deep learning models for strabismus detection often fail to estimate prediction certainty precisely.This paper employed a Bayesian deep learning algorithm with knowledge distillation,improving the model's performance and uncertainty estimation ability.Trained on 6807 images from two tertiary hospitals,the model showed significantly higher diagnostic accuracy than traditional deep-learning models.Experimental results revealed that knowledge distillation enhanced the Bayesian model’s performance and uncertainty estimation ability.These findings underscore the combined benefits of using Bayesian deep learning algorithms and knowledge distillation,which improve the reliability and accuracy of strabismus diagnostic predictions.
基金supported by the National Natural Science Foundation of China(Grant Nos.92152102 and 92152202)the Advanced Jet Propulsion Innovation Center/AEAC(Grant No.HKCX2022-01-010)。
Abstract: Identification of the underlying partial differential equations (PDEs) of complex systems remains a formidable challenge. In the present study, a robust PDE identification method is proposed that can extract accurate governing equations under noisy conditions without prior knowledge. Specifically, the proposed method combines gene expression programming, an evolutionary algorithm capable of generating unseen terms based solely on basic operators and functional terms, with symbolic regression neural networks. These networks are designed to represent explicit functional expressions and to optimize them with data gradients. In particular, the specially designed neural networks can easily be transformed into physical constraints on the training data, embedding the discovered PDEs to further optimize the metadata used for iterative PDE identification. The proposed method has been tested on four canonical PDE cases, validating its effectiveness without preliminary information and confirming its suitability for practical applications across various noise levels.
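A deliberately simplified stand-in for the PDE-identification idea above: instead of gene expression programming, a fixed candidate library of terms is regressed against the time derivative with a sparsity-promoting fit (sequential thresholded least squares, in the SINDy/PDE-FIND style). The library choice and thresholding are illustrative assumptions, not the paper's method.

```python
import numpy as np

def identify_pde(u_t, candidate_terms, names, threshold=0.05, iters=10):
    """u_t: (N,) sampled time derivatives; candidate_terms: (N, K) library columns."""
    coeffs, *_ = np.linalg.lstsq(candidate_terms, u_t, rcond=None)
    for _ in range(iters):
        small = np.abs(coeffs) < threshold       # prune negligible terms
        coeffs[small] = 0.0
        active = ~small
        if active.any():                         # refit only the surviving terms
            coeffs[active], *_ = np.linalg.lstsq(candidate_terms[:, active], u_t, rcond=None)
    return {n: c for n, c in zip(names, coeffs) if c != 0.0}

# Example: for Burgers' equation one might supply columns [u*u_x, u_xx, u, 1]
# computed by finite differences and expect u_t ≈ -u*u_x + nu*u_xx to be recovered.
```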
Funding: supported by the National Natural Science Foundation of China (42274144, 42304122, and 41974155), the Key Research and Development Program of Shaanxi (2023-YBGY-076), the National Key R&D Program of China (2020YFA0713404), and the China Uranium Industry and East China University of Technology Joint Innovation Fund (NRE202107).
Abstract: Time-frequency analysis is a widely used tool for analyzing the local features of seismic data. However, it suffers from several inherent limitations, such as restricted time-frequency resolution, difficulty in selecting parameters, and low computational efficiency. Inspired by deep learning, we propose a deep learning-based workflow for seismic time-frequency analysis. The sparse S transform network (SSTNet) is first built to map the relationship between synthetic traces and sparse S transform spectra; it can easily be pre-trained using synthetic traces and their training labels. Next, we introduce knowledge distillation (KD)-based transfer learning to re-train SSTNet on a field data set without training labels, yielding the sparse S transform network with knowledge distillation (KD-SSTNet). In this way, we can effectively calculate the sparse time-frequency spectra of field data while avoiding the use of field training labels. To test the applicability of KD-SSTNet, we apply it to field data to estimate seismic attenuation for reservoir characterization and make detailed comparisons with traditional time-frequency analysis methods.
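A hedged sketch of the KD-based transfer step described above: a teacher network pre-trained on labeled synthetic traces supervises a student on unlabeled field traces by matching the teacher's predicted spectra. The plain MSE distillation loss and the training loop structure are assumptions for illustration.

```python
import torch
import torch.nn as nn

def kd_retrain(teacher: nn.Module, student: nn.Module, field_loader, epochs=10, lr=1e-4):
    """Re-train the student on unlabeled field traces using the teacher's outputs as soft labels."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for traces in field_loader:              # field traces have no ground-truth spectra
            with torch.no_grad():
                soft_targets = teacher(traces)   # teacher's sparse spectra act as labels
            opt.zero_grad()
            loss = mse(student(traces), soft_targets)
            loss.backward()
            opt.step()
    return student
```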
Funding: supported in part by the National Natural Science Foundation of China (Nos. 62102311, 62202377, 62272385), in part by the Natural Science Basic Research Program of Shaanxi (Nos. 2022JQ-600, 2022JM-353, 2023-JC-QN-0327), in part by the Shaanxi Distinguished Youth Project (No. 2022JC-47), in part by the Scientific Research Program Funded by Shaanxi Provincial Education Department (No. 22JK0560), in part by the Distinguished Youth Talents of Shaanxi Universities, and in part by the Youth Innovation Team of Shaanxi Universities.
Abstract: With widespread data collection and processing, privacy-preserving machine learning has become increasingly important for addressing privacy risks to individuals. The support vector machine (SVM) is one of the most elementary learning models in machine learning, and privacy issues surrounding SVM classifier training have attracted increasing attention. In this paper, we investigate Differential Privacy-compliant Federated Machine Learning with Dimensionality Reduction, called FedDPDR-DPML, which greatly improves data utility while providing strong privacy guarantees. In distributed learning scenarios, multiple participants usually hold unbalanced or small amounts of data. FedDPDR-DPML therefore enables multiple participants to collaboratively learn a global model based on weighted model averaging and knowledge aggregation, after which the server distributes the global model to each participant to improve local data utility. For high-dimensional data, we adopt differential privacy in both the principal component analysis (PCA)-based dimensionality reduction phase and the SVM classifier training phase, which improves model accuracy while achieving strict differential privacy protection. In addition, we train differential privacy (DP)-compliant SVM classifiers by adding noise to the objective function itself, leading to better data utility. Extensive experiments on three high-dimensional datasets demonstrate that FedDPDR-DPML achieves high accuracy while ensuring strong privacy protection.
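A minimal sketch of the aggregation idea described above: the server combines participants' locally trained model parameters by weighted averaging, with weights proportional to local data size. The Gaussian perturbation shown is only a schematic nod to differential privacy; calibrating its scale to a real (epsilon, delta) budget is outside this sketch.

```python
import numpy as np

def weighted_average(local_params, local_sizes, noise_std=0.0, rng=None):
    """local_params: list of 1-D parameter vectors; local_sizes: samples per participant."""
    rng = rng or np.random.default_rng()
    weights = np.asarray(local_sizes, dtype=float)
    weights /= weights.sum()                       # weight each participant by data volume
    global_params = sum(w * p for w, p in zip(weights, local_params))
    if noise_std > 0:                              # optional perturbation of the aggregate
        global_params = global_params + rng.normal(0.0, noise_std, size=global_params.shape)
    return global_params

# Example: three participants with 100, 40, and 10 samples contribute in ratio 100:40:10,
# and the server then redistributes the averaged global model to all of them.
```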
Funding: National College Students' Training Programs of Innovation and Entrepreneurship, Grant/Award Number: S202210022060; the CACMS Innovation Fund, Grant/Award Number: CI2021A00512; and the National Natural Science Foundation of China, Grant/Award Number: 62206021.
Abstract: Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilize multi-media features because introducing a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address this issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) is proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention over an entity's different media features during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on the entity representations that converge multi-media features. Extensive experimental results show that MCRJI outperforms other advanced baselines in using multi-media features for knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
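A hedged sketch of the first step described above: multi-head self-attention over an entity's per-modality feature vectors (e.g., text, image, structure) producing one fused entity embedding. The dimensions and the simple mean pooling are assumptions, not MCRJI's exact design.

```python
import torch
import torch.nn as nn

class MediaFeatureFusion(nn.Module):
    """Fuse an entity's media features with multi-head self-attention."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)

    def forward(self, media_feats):
        # media_feats: (batch, n_modalities, dim) — one row per media feature of the entity
        fused, attn_weights = self.attn(media_feats, media_feats, media_feats)
        return fused.mean(dim=1), attn_weights   # pooled entity embedding + attention map

fusion = MediaFeatureFusion()
entity_media = torch.randn(8, 3, 256)            # 8 entities, 3 modalities each
embedding, weights = fusion(entity_media)
```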
Funding: supported by the National Key Research and Development Program of China under the theme "Research on urban sustainable development interactive decision-making and management technologies" (Grant No. 2022YFC3802904).
Abstract: Worldwide interest has increasingly focused on the sustainable utilization of landscape as a resource in urban areas, emphasizing its ecological, cultural, and social significance. This study examines Guilin City, China, as a representative case due to its rich landscape resources and its status as a national innovation demonstration zone for implementing the 2030 Agenda for Sustainable Development. Bibliometric visualization tools such as CiteSpace and VOSviewer are used to analyze research trends from 1980 to 2021 in the Chinese Academic Journal Network Publishing Database (CNKI). The results show increasing academic interest over three stages: initiation (1982-1997), exploration (1998-2004), and diversified development (2005-2021). Contributions come predominantly from local academic and tourism sectors, indicating a strong regional influence; however, inter-institutional collaboration is relatively weak, suggesting potential for more integrated research efforts. Research is also concentrated within economic disciplines, particularly tourism-related ones. The evolution of research frontiers reveals three main paths: urban development strategies, industrial economic theories and empirical validation, and ecosystem analysis and evaluation. A multidisciplinary approach and stronger collaborative efforts are crucial to enhance research on ecological values and empirical models while supporting evidence-based urban development strategies in Guilin City and comparable cities globally.
Funding: supported by the National Key R&D Program of China (2022QY2000-02).
Abstract: Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually struggle to learn and capture the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Moreover, these methods favor active users with rich historical behaviors and cannot effectively address the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLM) and Knowledge Graphs (KG) with traditional methods. To learn the contextual information of news text, we use the powerful text understanding ability of LLMs to generate news representations with rich semantic information; the generated representations are then used to enhance the news encoding of traditional methods. In addition, multi-hop relationships among news entities are mined and the structural information of news is encoded using the KG, alleviating the challenge of the long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation indicators such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework establishes a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
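A hedged sketch of the enhancement step described above: an LLM-derived text embedding of each article is concatenated with the output of a conventional news encoder before scoring. The sentence-transformers model named here is a stand-in for whatever LLM the framework actually uses, and the dimensions are assumptions.

```python
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

llm = SentenceTransformer("all-MiniLM-L6-v2")     # illustrative substitute text encoder

class EnhancedNewsEncoder(nn.Module):
    """Concatenate a conventional news encoding with an LLM-derived semantic embedding."""
    def __init__(self, base_encoder: nn.Module, base_dim: int, llm_dim: int = 384, out_dim: int = 256):
        super().__init__()
        self.base_encoder = base_encoder          # e.g. a CNN/attention news encoder
        self.proj = nn.Linear(base_dim + llm_dim, out_dim)

    def forward(self, token_ids, titles):
        base_repr = self.base_encoder(token_ids)                        # (batch, base_dim)
        llm_repr = torch.tensor(llm.encode(titles), dtype=torch.float)  # (batch, llm_dim)
        return self.proj(torch.cat([base_repr, llm_repr], dim=-1))      # enhanced news vectors
```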
Funding: supported by the Inner Mongolia Natural Science Foundation (2023MS06002), the Scientific Research Project of Higher Education Institutions of Inner Mongolia Autonomous Region (NJZZ22509), the Development Project of Young Scientific and Technological Talents (Innovative Teams) of Inner Mongolia Autonomous Region 2023 (NHGIRT2312), and the Project of Research and Practice on Teaching Reform of Graduate Education of Inner Mongolia Autonomous Region (JGCG2023049).
Abstract: This study employed the bibliometric software CiteSpace 6.1.R6 to analyze the relationship between thermal infrared and spectral remote sensing technology and the estimation of economic forest water stress, aiming to review the development and current status of this field and to identify future research trends. A search was conducted in the China National Knowledge Infrastructure (CNKI) database using the keyword "water stress" for relevant studies from 2003 to 2023. The visual analysis function of CNKI was used to plot the distribution of annual publication volume, and CiteSpace 6.1.R6 was used to create network maps of collaboration among authors and institutions. The study also analyzed the hotspots and frontiers of research on economic forest water stress. In total, 6664 academic journal articles related to water stress were retrieved. Considerable collaboration networks were observed among scholars and institutions, with a focus on using crown temperature monitoring to diagnose crop water stress. Based on these findings, the primary research trend is the use of thermal infrared and spectral remote sensing technology for estimating water stress, making it a future research hotspot.
Abstract: Objective: To grasp the changing trends in research hotspots of traditional Chinese medicine (TCM) in the prevention and treatment of COVID-19, and to better leverage the role of TCM in the prevention and treatment of COVID-19 and other diseases. Methods: Research literature from 2020 to 2022 was retrieved from the CNKI database, and CiteSpace software was used for visual analysis. Results: Papers on the prevention and treatment of COVID-19 by TCM shifted from cases, overviews, reports, and efficacy studies toward deeper mechanism research, theoretical exploration, and social impact analysis, ultimately forming a multi-dimensional body of achievements spanning theory, clinical practice, social influence, and institutional change. Conclusion: Analyzing the changing trends of TCM hotspots in the prevention and treatment of COVID-19 helps to fully recognize the important value of TCM, to make the coordination of TCM and Western medicine an important means of responding to public health security incidents, and to promote exploration of the potential efficacy of TCM, thereby enhancing the role of TCM in social stability, emergency response, clinical practice, and other applications.