Abstract
Graph neural networks (GNNs), as an extension of deep neural networks to graph data, have achieved major breakthroughs in many graph-related tasks, but they are vulnerable to adversarial attacks. Existing studies have proposed many defense methods against graph adversarial attacks, yet most of them sacrifice the performance of the original model in exchange for robustness. Taking the graph convolutional network (GCN) as the base model, this paper proposes GA-GCN, a graph defense method based on the attention mechanism. After the GCN suffers a poisoning attack, candidate adversarial edges are first screened out by structural similarity and feature similarity; an attention mechanism is then introduced into the GCN to assign lower attention coefficients to these adversarial edges, reducing the propagation of poisoned data through the model and thus achieving an effective defense. Experiments are carried out on the Cora, Citeseer and Pubmed datasets. When 10% of the edges are perturbed by Metattack, applying the proposed method to the GCN improves node classification accuracy by 7.2, 3.6 and 3.1 percentage points, respectively. The results show that the method can effectively improve the robustness of the model against poisoning attacks.
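As a rough illustration of the two steps the abstract outlines (scoring each edge by combined feature and structural similarity, then mapping low-scoring, suspected adversarial edges to small attention coefficients before GCN aggregation), the following minimal NumPy sketch may help. It is an assumption-laden illustration rather than the authors' GA-GCN implementation: the function names (`edge_scores`, `attentive_propagate`), the cosine/Jaccard choice of similarities and their equal weighting are all hypothetical.

```python
# Hypothetical sketch (not the authors' code) of similarity-scored edges
# feeding attention coefficients into a GCN-style propagation step.
import numpy as np

def edge_scores(A, X):
    """Combine feature and structural similarity for each edge of adjacency A."""
    n = A.shape[0]
    # Cosine similarity between node feature rows.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    feat_sim = Xn @ Xn.T
    # Jaccard similarity of neighbourhoods as a simple structural similarity.
    struct_sim = np.zeros((n, n))
    neigh = [set(np.nonzero(A[i])[0]) for i in range(n)]
    for i, j in zip(*np.nonzero(A)):
        inter = len(neigh[i] & neigh[j])
        union = len(neigh[i] | neigh[j]) or 1
        struct_sim[i, j] = inter / union
    # Equal-weight combination; the paper may weight or learn this trade-off.
    return 0.5 * feat_sim + 0.5 * struct_sim

def attentive_propagate(A, X, W):
    """One GCN-style layer whose edge weights are similarity-based attention."""
    S = edge_scores(A, X)
    # Softmax over each node's neighbours, so suspected adversarial edges
    # (low similarity) receive small attention coefficients.
    logits = np.where(A > 0, S, -np.inf)
    logits -= logits.max(axis=1, keepdims=True)
    alpha = np.exp(logits)
    alpha /= alpha.sum(axis=1, keepdims=True) + 1e-12
    return np.maximum(alpha @ X @ W, 0.0)  # ReLU activation

# Toy usage: 4 nodes; edge (0, 3) behaves like an injected, dissimilar edge
# and therefore ends up with a low attention coefficient.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
X = np.array([[1, 0], [1, 0.1], [0.9, 0], [0, 1]], dtype=float)
W = np.random.default_rng(0).normal(size=(2, 2))
H = attentive_propagate(A, X, W)
```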
Authors
JIN Kejun; YU Hongtao; LI Shaomei; ZHANG Jianpeng (Information Engineering University, Zhengzhou 450001, China)
Source
Journal of Information Engineering University (《信息工程大学学报》)
2023, No. 6, pp. 718-724 (7 pages)
Funding
National Natural Science Foundation of China (62002384)
China Postdoctoral Science Foundation General Program (2020M683760)
Keywords
graph neural network (GNN)
adversarial attack
adversarial defense
attention mechanism