Abstract
Existing graph autoencoders ignore both the differences among a node's neighbors and the latent distribution of the graph data. To improve the embedding ability of graph autoencoders, a graph attention adversarial variational autoencoder (AAVGA-d) is proposed. The method introduces attention into the encoder and applies an adversarial mechanism during embedding training: the graph attention encoder adaptively assigns weights to neighbor nodes, while adversarial regularization pushes the distribution of the embedding vectors generated by the encoder toward the true data distribution. To allow deeper stacks of graph attention layers, a random edge deletion technique (RDEdge) tailored to attention networks is designed, reducing the over-smoothing information loss caused by excessively deep layers. Experimental results show that the graph embedding capability of AAVGA-d is competitive with currently popular graph autoencoders.
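The abstract does not specify how RDEdge samples edges; a minimal sketch of the general idea, assuming DropEdge-style uniform random edge removal applied to an edge list before each attention layer (the function name `rdedge` and the `drop_rate` parameter are illustrative, not from the paper):

```python
import numpy as np

def rdedge(edge_index: np.ndarray, drop_rate: float, rng=None) -> np.ndarray:
    """Randomly delete a fraction of edges (DropEdge-style sketch).

    edge_index: array of shape (2, num_edges), rows are (source, target).
    drop_rate:  probability of removing each edge independently.
    Returns the surviving edges, shape (2, num_kept).
    """
    rng = np.random.default_rng(rng)
    num_edges = edge_index.shape[1]
    # Keep an edge when its uniform draw is at or above the drop rate.
    keep_mask = rng.random(num_edges) >= drop_rate
    return edge_index[:, keep_mask]

# Usage: thin the graph before a training step so that deep attention
# stacks aggregate over a sparser, randomized neighborhood.
edges = np.array([[0, 1, 2, 3],
                  [1, 2, 3, 0]])
sparser = rdedge(edges, drop_rate=0.5, rng=0)
```

Dropping edges at random perturbs the neighborhoods each attention layer aggregates over, which is why such techniques mitigate over-smoothing in deep graph networks.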
Authors
Weng Ziqiang; Zhang Weiyu; Sun Xu (School of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong, China)
Source
Computer Applications and Software (Peking University Core Journal)
2024, No. 9, pp. 156-165 (10 pages)
Funding
National Key R&D Program of China (2018YFC0831704)
National Natural Science Foundation of China (61806105)
Natural Science Foundation of Shandong Province (ZR2017MF056)
Keywords
Graph attention
Over-smoothing
Autoencoder
Adversarial