
基于启发式粗化算法的半监督图神经网络的训练加速框架及算法

Framework and Algorithms for Accelerating Training of Semi-supervised Graph Neural Network Based on Heuristic Coarsening Algorithms
Abstract: Graph neural networks (GNNs) are currently the mainstream tool in graph machine learning and are developing rapidly. By constructing an abstract graph structure, GNN models can efficiently handle problems in many application scenarios, including node prediction, link prediction, and graph classification. Correspondingly, application on large-scale graphs has always been a key point and difficulty in GNN training, and how to train and deploy GNNs on large-scale graph data effectively and quickly is a major problem hindering the further industrial application of GNNs. Because a GNN can exploit the topological information of the graph's network structure, it achieves better results on tasks such as node prediction than other general neural networks such as multi-layer perceptrons. However, the growth in the number of nodes and edges of the graph constrains GNN training: real-world datasets reach tens of millions or even hundreds of millions of nodes, and in some dense network structures the number of edges also reaches tens of millions, which makes traditional GNN training methods difficult to apply directly. To address these problems, this paper improves and proposes a new GNN training framework based on graph coarsening algorithms, proposes two specific training algorithms on this basis, and additionally proposes two simple heuristic graph coarsening algorithms. With an acceptable loss of accuracy and greatly reduced memory consumption, the proposed algorithms further significantly reduce the computation of GNNs and shorten training time. Experimental results show that satisfactory results are achieved on common datasets.
Authors: CHEN Yufeng, HUANG Zengfeng (School of Data Science, Fudan University, Shanghai 200433, China)
Source: Computer Science (《计算机科学》, CSCD, Peking University Core), 2024, Issue 3, pp. 48-55 (8 pages)
Keywords: Graph neural network; Graph coarsening; Training acceleration; Heuristic; Random walk; Unbiased
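The article's two heuristic coarsening algorithms are not reproduced in this record. As a minimal, self-contained sketch of the general idea behind coarsening-based training (merge pairs of adjacent nodes into super-nodes so a GNN can be trained on a much smaller graph, then map predictions back through the node mapping), a single greedy edge-matching pass might look as follows. The function name `edge_matching_coarsen` and the matching heuristic are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: one greedy edge-matching coarsening pass.
# Not the article's algorithm; the heuristic here is a common baseline.

def edge_matching_coarsen(edges, num_nodes):
    """Greedily match the endpoints of edges and merge each matched
    pair into one super-node, roughly halving the graph in one pass.
    Returns (coarse_edges, mapping, num_coarse_nodes), where
    mapping[v] is the super-node id of original node v."""
    matched = [False] * num_nodes
    merge_into = list(range(num_nodes))  # node -> representative node
    for u, v in edges:
        if u != v and not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
            merge_into[v] = u            # fold v into u

    # Assign contiguous ids to the surviving representatives.
    coarse_id = {}
    mapping = []
    for node in range(num_nodes):
        rep = merge_into[node]
        if rep not in coarse_id:
            coarse_id[rep] = len(coarse_id)
        mapping.append(coarse_id[rep])

    # Rebuild the edge list on the coarse graph, dropping
    # self-loops and duplicate edges created by the merges.
    coarse_edges = {(min(mapping[u], mapping[v]), max(mapping[u], mapping[v]))
                    for u, v in edges if mapping[u] != mapping[v]}
    return sorted(coarse_edges), mapping, len(coarse_id)


# A 4-node path 0-1-2-3 collapses to a single coarse edge.
coarse_edges, mapping, n = edge_matching_coarsen([(0, 1), (1, 2), (2, 3)], 4)
print(n, coarse_edges, mapping)  # 2 [(0, 1)] [0, 0, 1, 1]
```

The returned `mapping` is what lets node features and labels be aggregated onto the coarse graph before training and lets coarse-level predictions be projected back to the original nodes afterwards; applying the pass repeatedly coarsens the graph further at the cost of more information loss.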