Abstract
Recently, the attention mechanism in Graph Neural Networks has shown excellent performance in processing graph-structured data. Traditional graph attention computes attention only between directly connected nodes and captures high-order information implicitly by stacking layers. Despite extensive research on graph attention mechanisms, we argue that this stacking paradigm is less effective at modeling long-range dependencies. To improve expressive power, we design a novel direct attention mechanism that computes attention between higher-order neighbors directly via the K-th power of the adjacency matrix. We further propagate the higher-order information with an adaptive routing aggregation process, which makes aggregation more flexible in adapting to the properties of different graphs. Extensive experiments on node classification on citation networks show that our method consistently outperforms state-of-the-art baselines, validating its effectiveness.
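To make the core idea concrete, the sketch below shows one way direct higher-order attention could be realized: a GAT-style score is computed for every node pair reachable within K hops, with reachability taken from the K-th power of the adjacency matrix rather than from stacked one-hop layers. This is a minimal illustration assuming dense tensors and a single attention head; the names (k_power_mask, direct_attention, a_vec) are hypothetical and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def k_power_mask(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Boolean mask of node pairs reachable within k hops,
    derived from the k-th power of the (self-looped) adjacency matrix."""
    a = adj + torch.eye(adj.size(0), dtype=adj.dtype)
    return torch.linalg.matrix_power(a, k) > 0

def direct_attention(h: torch.Tensor, adj: torch.Tensor,
                     w: torch.Tensor, a_vec: torch.Tensor,
                     k: int = 2) -> torch.Tensor:
    """Attention computed directly between k-hop neighbors
    (GAT-style pairwise scoring, restricted by the k-power mask)."""
    z = h @ w                                    # (N, F') projected features
    n = z.size(0)
    # Pairwise scores e_ij = LeakyReLU(a^T [z_i || z_j]) for all pairs.
    pairs = torch.cat([z.repeat_interleave(n, dim=0),
                       z.repeat(n, 1)], dim=1)   # (N*N, 2F')
    scores = F.leaky_relu(pairs @ a_vec).view(n, n)
    # Keep only pairs connected within k hops; softmax over those neighbors.
    scores = scores.masked_fill(~k_power_mask(adj, k), float('-inf'))
    alpha = torch.softmax(scores, dim=1)
    return alpha @ z                             # aggregated node features
```

With k=1 this reduces to ordinary one-hop graph attention; larger k lets distant nodes attend to each other in a single layer, which is the long-range behavior the abstract argues stacking fails to capture.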
Authors
Yang Guangqian; Li Jinlong (School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China)
Source
Information Technology and Network Security (《信息技术与网络安全》), 2022, No. 6, pp. 64-72 (9 pages)
Keywords
graph neural networks
attention
dynamic routing