
NLGAE: A Graph Autoencoder Model Based on Improved Network Structure and Loss Function for Node Classification Task
Abstract: Mapping high-dimensional, heterogeneous information such as graph topology and node attributes into a dense vector space through graph embedding is the mainstream way to address the computational difficulties caused by the non-Euclidean nature of graph data and the high space complexity of adjacency matrices. Building on an analysis of the shortcomings of the classical graph auto-encoder models GAE (graph auto-encoder) and VGAE (variational graph auto-encoder), this paper improves graph-auto-encoder-based graph embedding in three respects, namely the encoder, the decoder and the loss function, and proposes NLGAE, a graph auto-encoder model with an improved network structure and loss function. First, in the network structure, the stacked graph convolutional layers of the encoder are inverted to form the decoder, which addresses the inflexibility and limited expressiveness of the parameter-free decoder in GAE and VGAE, and the attention-based graph convolutional network GAT is introduced into the encoder so that the weighting coefficients between nodes are learned rather than fixed. Second, the redesigned loss function takes both the graph structure and the node feature attributes into account. Comparative experiments show that NLGAE, as an unsupervised model, learns high-quality node embeddings and outperforms classical unsupervised models such as DeepWalk, GAE, GraphMAE and GATE on the downstream node classification task; with an appropriate downstream classifier, it even outperforms supervised graph neural network models such as GAT and GCN.
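The abstract only sketches the architecture, so the following is a minimal, hypothetical illustration of the kind of model it describes: a GAT-based encoder, a parameterised graph-convolutional decoder obtained by inverting the encoder's layer stack (in place of GAE/VGAE's parameter-free inner-product decoder), and a loss that combines an adjacency reconstruction term with a node feature reconstruction term. The layer sizes, the use of GCNConv in the decoder, and the weighting coefficient alpha are assumptions for illustration only, not details taken from the paper.

```python
# Hypothetical sketch of an NLGAE-style graph auto-encoder (not the paper's exact design).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, GCNConv


class NLGAESketch(torch.nn.Module):
    def __init__(self, in_dim, hid_dim=256, emb_dim=64, heads=4, alpha=0.5):
        super().__init__()
        # Encoder: attention-based graph convolutions (GAT), so neighbour
        # weights are learned instead of being fixed as in plain GCN.
        self.enc1 = GATConv(in_dim, hid_dim, heads=heads, concat=False)
        self.enc2 = GATConv(hid_dim, emb_dim, heads=heads, concat=False)
        # Decoder: the encoder's convolution stack "inverted" into a
        # parameterised graph-convolutional decoder, replacing the
        # parameter-free inner-product decoder of GAE/VGAE.
        self.dec1 = GCNConv(emb_dim, hid_dim)
        self.dec2 = GCNConv(hid_dim, in_dim)
        self.alpha = alpha  # assumed trade-off between the two loss terms

    def encode(self, x, edge_index):
        h = F.elu(self.enc1(x, edge_index))
        return self.enc2(h, edge_index)        # node embeddings Z

    def decode(self, z, edge_index):
        h = F.elu(self.dec1(z, edge_index))
        return self.dec2(h, edge_index)        # reconstructed features X_hat

    def loss(self, x, edge_index, adj_dense):
        z = self.encode(x, edge_index)
        x_hat = self.decode(z, edge_index)
        # Structure term: reconstruct the adjacency matrix from sigmoid(Z Z^T).
        adj_hat = torch.sigmoid(z @ z.t())
        loss_struct = F.binary_cross_entropy(adj_hat, adj_dense)
        # Feature term: reconstruct the node attribute matrix.
        loss_feat = F.mse_loss(x_hat, x)
        return self.alpha * loss_struct + (1.0 - self.alpha) * loss_feat
```

After unsupervised training with this combined loss, the embeddings returned by encode(x, edge_index) would be fed to a separate downstream classifier (e.g. logistic regression or an SVM) for node classification, which is consistent with the abstract's remark that the choice of classification model matters.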
Authors: LIAO Bin; ZHANG Tao; YU Jiong; LI Min (College of Big Data Statistics, Guizhou University of Finance and Economics, Guiyang 550025, China; College of Information Engineering, Guizhou University of Traditional Chinese Medicine, Guiyang 550025, China; School of Information Science and Engineering, Xinjiang University, Urumqi 830008, China)
Source: Computer Science (计算机科学), CSCD / Peking University Core Journal, 2024, No. 10, pp. 234-246 (13 pages)
Funding: National Natural Science Foundation of China (61562078); Xinjiang Tianshan Youth Program (2018Q073).
Keywords: Graph representation learning; Graph auto-encoder; Attention mechanism; Node classification