Journal Articles
9 articles found
1. Hyperbolic hierarchical graph attention network for knowledge graph completion
Authors: XU Hao, CHEN Shudong, QI Donglin, TONG Da, YU Yong, CHEN Shuai. High Technology Letters (EI, CAS), 2024, No. 3, pp. 271-279.
Utilizing graph neural networks for knowledge embedding to accomplish knowledge graph completion (KGC) has become an important research direction. However, the number of nodes in a knowledge graph grows exponentially with tree depth, whereas distances between nodes in Euclidean space grow only polynomially (second order), so knowledge embedding with graph neural networks in Euclidean space cannot represent the distances between nodes well. This paper introduces a novel approach called the hyperbolic hierarchical graph attention network (H2GAT) to rectify this limitation. Firstly, the paper conducts knowledge representation in hyperbolic space, effectively mitigating the information loss caused by the exponential growth of nodes with tree depth. Secondly, it introduces a hierarchical graph attention mechanism specifically designed for hyperbolic space, allowing enhanced capture of the network structure inherent in the knowledge graph. Finally, the efficacy of the proposed H2GAT model is evaluated on the benchmark datasets WN18RR and FB15K-237. H2GAT achieves 0.445, 0.515, and 0.586 in the Hits@1, Hits@3, and Hits@10 metrics respectively on WN18RR, and 0.243, 0.367, and 0.518 on FB15K-237. By incorporating hyperbolic-space embedding and hierarchical graph attention, the H2GAT model addresses the limitations of existing hyperbolic knowledge embedding models, demonstrating its competence in knowledge graph completion tasks.
Keywords: hyperbolic space, link prediction, knowledge graph embedding, knowledge graph completion (KGC)
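The abstract above hinges on the fact that distances in hyperbolic space grow fast enough to accommodate trees whose node count grows exponentially with depth. The snippet below is a minimal illustration of the standard Poincaré-ball distance commonly used for hyperbolic KG embeddings; it is not code from the H2GAT paper, and the epsilon term is an illustrative numerical safeguard.

```python
# Standard Poincare-ball geodesic distance, for illustration only.
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray, eps: float = 1e-9) -> float:
    """Distance between two points inside the unit Poincare ball."""
    sq_u = np.sum(u * u)
    sq_v = np.sum(v * v)
    sq_diff = np.sum((u - v) ** 2)
    # The arccosh argument blows up near the boundary, which is what lets
    # exponentially growing (tree-like) neighborhoods embed with low distortion.
    x = 1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v) + eps)
    return float(np.arccosh(x))

# Two points near the boundary are much farther apart than their
# Euclidean distance suggests.
a = np.array([0.70, 0.0])
b = np.array([0.0, 0.70])
print(poincare_distance(a, b))          # ~2.83
print(np.linalg.norm(a - b))            # ~0.99
```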
2. Collective Entity Alignment for Knowledge Fusion of Power Grid Dispatching Knowledge Graphs (cited 6 times)
Authors: Linyao Yang, Chen Lv, Xiao Wang, Ji Qiao, Weiping Ding, Jun Zhang, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 11, pp. 1990-2004.
Knowledge graphs (KGs) have been widely accepted as powerful tools for modeling the complex relationships between concepts and developing knowledge-based services. In recent years, researchers in the field of power systems have explored KGs to develop intelligent dispatching systems for increasingly large power grids. With multiple power grid dispatching knowledge graphs (PDKGs) constructed by different agencies, fusing the knowledge of different PDKGs is useful for providing more accurate decision support. To achieve this, entity alignment, which aims at connecting different KGs by identifying equivalent entities, is a critical step. Existing entity alignment methods cannot integrate useful structural, attribute, and relational information while calculating entities' similarities, and they are prone to making many-to-one alignments, so they can hardly achieve the best performance. To address these issues, this paper proposes a collective entity alignment model that integrates the three kinds of available information and makes collective counterpart assignments. The model introduces a novel knowledge graph attention network (KGAT) to explicitly learn the embeddings of entities and relations, and calculates entities' similarities by adaptively incorporating the structural, attribute, and relational similarities. Then, the counterpart assignment task is formulated as an integer programming (IP) problem to obtain one-to-one alignments. We not only conduct experiments on a pair of PDKGs but also evaluate our model on three commonly used cross-lingual KGs. Experimental comparisons indicate that our model outperforms other methods and provides an effective tool for the knowledge fusion of PDKGs.
Keywords: entity alignment, integer programming (IP), knowledge fusion, knowledge graph embedding, power dispatch
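The abstract's key point is that counterpart assignment should be collective and one-to-one rather than independent per entity. The sketch below illustrates that constraint on a made-up similarity matrix using SciPy's Hungarian solver; the paper itself formulates the step as an integer program, so this is only an analogous illustration, not the authors' method.

```python
# One-to-one counterpart assignment on a toy similarity matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

# sim[i, j]: combined structural/attribute/relational similarity between
# entity i of KG1 and entity j of KG2 (illustrative values only).
sim = np.array([
    [0.9, 0.2, 0.1],
    [0.8, 0.7, 0.3],   # row-wise argmax would send both row 0 and row 1 to column 0
    [0.1, 0.2, 0.6],
])

rows, cols = linear_sum_assignment(sim, maximize=True)   # enforce one-to-one matching
for i, j in zip(rows, cols):
    print(f"KG1 entity {i} -> KG2 entity {j} (sim={sim[i, j]:.2f})")
```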
3. Learning Context-based Embeddings for Knowledge Graph Completion (cited 5 times)
Authors: Fei Pu, Zhongwei Zhang, Yan Feng, Bailin Yang. Journal of Data and Information Science (CSCD), 2022, No. 2, pp. 84-106.
Purpose: Due to the inherent incompleteness of knowledge graphs (KGs), the task of predicting missing links between entities becomes important. Many previous approaches are static, which poses the notable problem that all meanings of a polysemous entity share one embedding vector. This study proposes a polysemous embedding approach, named KG embedding under relational contexts (ContE for short), for missing link prediction. Design/methodology/approach: ContE models and infers different relationship patterns by considering the context of the relationship, which is implicit in the local neighborhood of the relationship. The forward and backward impacts of the relationship are mapped to two different embedding vectors, which represent the contextual information of the relationship. Then, according to the position of the entity, the entity's polysemous representation is obtained by adding its static embedding vector to the corresponding context vector of the relationship. Findings: ContE is fully expressive; that is, given any ground truth over the triples, there are embedding assignments to entities and relations that can precisely separate the true triples from the false ones. ContE is capable of modeling four connectivity patterns: symmetry, antisymmetry, inversion, and composition. Research limitations: ContE needs a grid search to find the best parameters in practice, which is time-consuming, and it sometimes requires longer entity vectors than other models to reach the best performance. Practical implications: ContE is a bilinear model, simple enough to be applied to large-scale KGs. By considering the contexts of relations, ContE can distinguish the exact meaning of an entity in different triples, so that when performing compositional reasoning it can infer the connectivity patterns of relations and achieve good performance on link prediction tasks. Originality/value: ContE considers the contexts of entities in terms of their positions in triples and the relationships they link to. It decomposes a relation vector into two vectors, namely a forward impact vector and a backward impact vector, in order to capture the relational contexts. ContE has the same low computational complexity as TransE. Therefore, it provides a new approach to contextualized knowledge graph embedding.
Keywords: full expressiveness, relational contexts, knowledge graph embedding, relation patterns, link prediction
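As a rough illustration of the mechanism the abstract describes (forward and backward relation context vectors added to the static entity vectors according to entity position), here is a hedged PyTorch sketch. The bilinear DistMult-style score and all dimensions are assumptions for illustration, not the exact ContE scoring function.

```python
# Contextualized triple scoring with per-relation forward/backward context vectors.
import torch

dim, n_ent, n_rel = 64, 1000, 20
ent = torch.nn.Embedding(n_ent, dim)        # static entity vectors
rel = torch.nn.Embedding(n_rel, dim)        # relation vectors
rel_fwd = torch.nn.Embedding(n_rel, dim)    # forward-impact context
rel_bwd = torch.nn.Embedding(n_rel, dim)    # backward-impact context

def score(h_idx, r_idx, t_idx):
    h = ent(h_idx) + rel_fwd(r_idx)   # head contextualized by the relation's forward impact
    t = ent(t_idx) + rel_bwd(r_idx)   # tail contextualized by the backward impact
    return torch.sum(h * rel(r_idx) * t, dim=-1)   # bilinear (DistMult-style) triple score

print(score(torch.tensor([0]), torch.tensor([3]), torch.tensor([42])))
```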
4. Deep Knowledge Tracing Embedding Neural Network for Individualized Learning (cited 1 time)
Authors: HUANG Yongfeng, SHI Jie. Journal of Donghua University (English Edition) (EI, CAS), 2020, No. 6, pp. 512-520.
Knowledge tracing is the key component in online individualized learning; it assesses the users' mastery of skills and predicts the probability that the users can solve specific problems. Available knowledge tracing models have the problem that the assessments are not directly used in the predictions. To make full use of the assessments during predictions, a novel model, named deep knowledge tracing embedding neural network (DKTENN), is proposed in this work. DKTENN is a synthesis of deep knowledge tracing (DKT) and knowledge graph embedding (KGE). DKT utilizes a sophisticated long short-term memory (LSTM) network to assess the users and track the mastery of skills according to the users' interaction sequences with skill-level tags, and KGE is applied to predict the probability on the basis of both the embedded problems and DKT's assessments. In the experiments, DKTENN outperforms performance factors analysis and other deep-learning-based knowledge tracing models.
Keywords: knowledge tracing, knowledge graph embedding (KGE), deep neural network, user assessment, personalized prediction
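Below is a hedged sketch of how the two components described in the abstract could be wired together: an LSTM that produces a mastery assessment from the interaction sequence, and a problem embedding (standing in for the KGE component) scored against that assessment. The input encoding, dimensions, and dot-product fusion are assumptions, not the DKTENN architecture.

```python
# Toy combination of sequence-based assessment and problem embeddings.
import torch
import torch.nn as nn

n_skills, n_problems, hidden = 50, 500, 64

class DKTPlusEmbeddingSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # one-hot (skill, correctness) pairs -> 2 * n_skills input features
        self.lstm = nn.LSTM(input_size=2 * n_skills, hidden_size=hidden, batch_first=True)
        self.to_assessment = nn.Linear(hidden, hidden)
        self.problem_emb = nn.Embedding(n_problems, hidden)  # stand-in for KGE problem vectors

    def forward(self, interactions, problem_ids):
        out, _ = self.lstm(interactions)          # (batch, seq, hidden)
        assess = self.to_assessment(out[:, -1])   # latest mastery state
        prob = self.problem_emb(problem_ids)      # (batch, hidden)
        return torch.sigmoid((assess * prob).sum(-1))  # P(user answers correctly)

model = DKTPlusEmbeddingSketch()
x = torch.zeros(2, 10, 2 * n_skills)              # dummy interaction history
p = torch.tensor([3, 7])
print(model(x, p))
```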
5. Knowledge Graph Representation Learning Based on Automatic Network Search for Link Prediction
Authors: Zefeng Gu, Hua Chen. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 6, pp. 2497-2514.
Link prediction, also known as knowledge graph completion (KGC), is the common task in knowledge graphs (KGs) of predicting missing connections between entities. Most existing methods focus on designing shallow, scalable models, which are less expressive than deep, multi-layer models. Furthermore, most operations such as addition, matrix multiplication, or factorization are handcrafted based on a few known relation patterns in several well-known datasets, such as FB15k and WN18. However, due to the diversity and complexity of real-world data distributions, it is inherently difficult to preset all latent patterns. To address this issue, we propose KGE-ANS, a novel knowledge graph embedding framework for general link prediction tasks using automatic network search. KGE-ANS can learn a deep, multi-layer, effective architecture that adapts to different datasets through neural architecture search. In addition, the general search space we design is tailored for KG tasks. We perform extensive experiments on benchmark datasets and the dataset constructed in this paper. The results show that KGE-ANS outperforms several state-of-the-art methods, especially on datasets with complex relation patterns.
Keywords: knowledge graph embedding, link prediction, automatic network search
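To make the idea of searching over KGE architectures concrete, here is a toy random-search sketch over a hypothetical search space. The space, the search strategy, and the scoring stub are placeholders; KGE-ANS uses its own tailored search space and neural architecture search procedure.

```python
# Toy random search over a made-up KGE architecture space.
import random

SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "hidden_dim": [64, 128, 256],
    "interaction": ["conv", "bilinear", "mlp"],
}

def evaluate(arch):
    # Stand-in for training the candidate model and measuring validation MRR.
    rng = random.Random(str(arch))
    return rng.random()

def random_search(n_trials=20):
    best, best_score = None, -1.0
    for _ in range(n_trials):
        arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        s = evaluate(arch)
        if s > best_score:
            best, best_score = arch, s
    return best, best_score

print(random_search())
```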
6. Joint learning based on multi-shaped filters for knowledge graph completion (cited 2 times)
Authors: Li Shaojie, Chen Shudong, Ouyang Xiaoye, Gong Lichen. High Technology Letters (EI, CAS), 2021, No. 1, pp. 43-52.
To address the problem that many valid triples are missing from knowledge graphs (KGs), a novel convolutional neural network (CNN) based model called ConvKG is proposed, which employs a joint learning strategy for knowledge graph completion (KGC). Related work has shown the superiority of CNNs in extracting semantic features from triple embeddings. However, these studies use only a single-shaped filter and fail to extract semantic features of different granularity. To solve this problem, ConvKG exploits multi-shaped filters that jointly convolve over the triple embeddings, learning semantic features of different granularity. Filters of different shapes cover different sizes of the triple embeddings and capture pairwise interactions of different granularity among triple elements. Experimental results confirm the strength of joint learning: compared with state-of-the-art CNN-based KGC models, ConvKG achieves better mean rank (MR) and Hits@10 on the WN18RR dataset and better MR on FB15k-237.
Keywords: knowledge graph embedding (KGE), knowledge graph completion (KGC), convolutional neural network (CNN), joint learning, multi-shaped filter
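The sketch below illustrates the multi-shaped-filter idea from the abstract: the (h, r, t) embeddings are stacked into a 3-row matrix and convolved with filters of different heights so that element-level, pairwise, and whole-triple interactions are captured and concatenated. Filter shapes and channel counts are illustrative, not the ConvKG configuration.

```python
# Convolving a stacked (h, r, t) matrix with filters of several shapes.
import torch
import torch.nn as nn

dim, channels = 100, 8
# triple matrix: 1 input channel, 3 rows (h, r, t), dim columns
triple = torch.randn(4, 1, 3, dim)               # batch of 4 triples

filters = nn.ModuleList([
    nn.Conv2d(1, channels, kernel_size=(1, 3)),  # element-level features
    nn.Conv2d(1, channels, kernel_size=(2, 3)),  # pairwise interactions (h-r, r-t)
    nn.Conv2d(1, channels, kernel_size=(3, 3)),  # whole-triple interactions
])

features = torch.cat(
    [torch.relu(conv(triple)).flatten(start_dim=1) for conv in filters], dim=1
)
print(features.shape)   # joint feature vector that would feed a scoring layer
```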
7. Efficient Knowledge Graph Embedding Training Framework with Multiple GPUs (cited 1 time)
Authors: Ding Sun, Zhen Huang, Dongsheng Li, Min Guo. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2023, No. 1, pp. 167-175.
When training a large-scale knowledge graph embedding (KGE) model with multiple graphics processing units (GPUs), partition-based methods are necessary for parallel training. However, existing partition-based training methods suffer from low GPU utilization and high input/output (IO) overhead between memory and disk. To reduce the high IO overhead between disk and CPU memory, we optimize twice partitioning with fine-grained GPU scheduling. To tackle the low GPU utilization caused by GPU load imbalance, we propose balanced partitioning and dynamic scheduling methods that accelerate training in different cases. With the above methods, we present fine-grained partitioning KGE, an efficient KGE training framework with multiple GPUs. We conduct experiments on several knowledge graph benchmarks, and the results show that our method achieves a speedup over existing frameworks when training KGE models.
Keywords: knowledge graph embedding, parallel algorithm, partitioning, graph framework, graphics processing unit (GPU)
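As a rough illustration of the partition-based setup the abstract builds on, the toy sketch below buckets entities, groups triples into (head-bucket, tail-bucket) blocks, and assigns blocks to GPUs greedily to balance load. It shows only the general idea; the paper's twice partitioning, IO pipeline, and dynamic scheduling are not reproduced here.

```python
# Toy block partitioning and greedy load-balanced assignment of triples to GPUs.
from collections import defaultdict

def partition_triples(triples, n_buckets, n_gpus):
    bucket = lambda e: e % n_buckets            # trivial entity partitioner
    blocks = defaultdict(list)
    for h, r, t in triples:
        blocks[(bucket(h), bucket(t))].append((h, r, t))

    # longest-processing-time greedy: biggest blocks first, to the least-loaded GPU
    load = [0] * n_gpus
    schedule = defaultdict(list)
    for key, blk in sorted(blocks.items(), key=lambda kv: -len(kv[1])):
        g = load.index(min(load))
        schedule[g].append(key)
        load[g] += len(blk)
    return schedule, load

triples = [(i, i % 5, (i * 7) % 100) for i in range(1000)]
schedule, load = partition_triples(triples, n_buckets=4, n_gpus=2)
print(load)   # roughly equal triple counts per GPU
```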
8. Modeling the Correlations of Relations for Knowledge Graph Embedding (cited 8 times)
Authors: Ji-Zhao Zhu, Yan-Tao Jia, Jun Xu, Jian-Zhong Qiao, Xue-Qi Cheng. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2018, No. 2, pp. 323-334.
Knowledge graph embedding, which maps entities and relations into low-dimensional vector spaces, has demonstrated its effectiveness in many tasks such as link prediction and relation extraction. Typical methods include TransE, TransH, and TransR. All these methods map different relations into the vector space separately, and the intrinsic correlations among these relations are ignored. It is obvious that such correlations exist, because different relations may connect to a common entity. For example, the triples (Steve Jobs, PlaceOfBirth, California) and (Apple Inc., Location, California) share the same entity California as their tail entity. We analyze the embedded relation matrices learned by TransE/TransH/TransR and find that the correlations of relations do exist and appear as a low-rank structure in the embedded relation matrix. It is natural to ask whether we can leverage these correlations to learn better embeddings for the entities and relations in a knowledge graph. In this paper, we propose to learn the embedded relation matrix by decomposing it as a product of two low-dimensional matrices, in order to characterize the low-rank structure. The proposed method, called TransCoRe (Translation-Based Method via Modeling the Correlations of Relations), learns the embeddings of entities and relations within a translation-based framework. Experimental results on the benchmark datasets WordNet and Freebase demonstrate that our method outperforms typical baselines on link prediction and triple classification tasks.
Keywords: knowledge graph embedding, low-rank matrix decomposition
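The core idea of the abstract, parameterizing the relation matrix as a product of two low-rank factors inside a translation-based model, can be sketched in a few lines of PyTorch. The rank, the dimensions, and the plain TransE-style distance below are illustrative choices, not the exact TransCoRe formulation.

```python
# Low-rank factorization of the relation embedding matrix in a translation model.
import torch

n_ent, n_rel, dim, rank = 1000, 50, 100, 10
ent = torch.nn.Embedding(n_ent, dim)
U = torch.nn.Parameter(torch.randn(n_rel, rank) * 0.1)   # relation mixing weights
V = torch.nn.Parameter(torch.randn(rank, dim) * 0.1)     # shared low-rank basis

def score(h_idx, r_idx, t_idx):
    r = (U @ V)[r_idx]                                   # low-rank relation embedding
    return -torch.norm(ent(h_idx) + r - ent(t_idx), p=2, dim=-1)   # TransE-style score

print(score(torch.tensor([1]), torch.tensor([4]), torch.tensor([9])))
```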
9. Knowledge Graph Embedding for Hyper-Relational Data (cited 7 times)
Authors: Chunhong Zhang, Miao Zhou, Xiao Han, Zheng Hu, Yang Ji. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2017, No. 2, pp. 185-197.
Knowledge graph representation has been a long-standing goal of artificial intelligence. In this paper, we consider a method for knowledge graph embedding of hyper-relational data, which are commonly found in knowledge graphs. Previous models such as Trans(E, H, R) and CTransR are either insufficient for embedding hyper-relational data or focus on projecting an entity into multiple embeddings, which might not be effective for generalization or accurately reflect real knowledge. To overcome these issues, we propose the novel model TransHR, which transforms the hyper-relations in a pair of entities into an individual vector serving as a translation between them. We experimentally evaluate our model on two typical tasks: link prediction and triple classification. The results demonstrate that TransHR significantly outperforms Trans(E, H, R) and CTransR, especially for hyper-relational data.
Keywords: distributed representation, transfer matrix, knowledge graph embedding
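Below is a hedged sketch of the abstract's central idea: the set of relations linking an entity pair (the hyper-relation) is collapsed into a single vector that acts as the translation between the two entities. The mean-then-transfer-matrix aggregation is an assumption for illustration, not the TransHR model.

```python
# Collapsing several relations between an entity pair into one translation vector.
import torch

n_ent, n_rel, dim = 1000, 50, 100
ent = torch.nn.Embedding(n_ent, dim)
rel = torch.nn.Embedding(n_rel, dim)
transfer = torch.nn.Linear(dim, dim, bias=False)   # transfer matrix

def hyper_score(h_idx, t_idx, relation_ids):
    r_set = rel(relation_ids)                 # (k, dim): all relations between the pair
    r_hyper = transfer(r_set.mean(dim=0))     # single translation vector for the pair
    return -torch.norm(ent(h_idx) + r_hyper - ent(t_idx), p=2, dim=-1)

print(hyper_score(torch.tensor([2]), torch.tensor([7]), torch.tensor([1, 4, 9])))
```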