Abstract
To address the lack of global dependency mining among users in traditional information diffusion prediction, this paper proposes HGACN, an information diffusion prediction model that combines a hypergraph attention mechanism with a graph convolutional network. First, it constructs user social-relationship subgraphs, samples sub-cascade sequences, and feeds them into a graph convolutional network to learn the structural features of user social relationships. Second, considering the global dependencies among users and among cascades, it applies a hypergraph attention mechanism (HGAT) to learn user interaction features over different time intervals. Finally, it passes the learned user representations to an embedding module, fuses them with a gating mechanism to obtain more expressive user representations, and performs diffusion prediction with a masked multi-head attention mechanism. Experimental results on five datasets, including Twitter, show that HGACN improves hits@N by 4.4% and map@N by 2.2%, significantly outperforming existing diffusion prediction models such as MS-HGAT, demonstrating that HGACN is reasonable and effective. This is of great significance for rumor monitoring and the detection of malicious accounts.
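The gated fusion step described in the abstract (combining the GCN-learned structural features with the HGAT-learned interaction features) can be sketched as follows. This is a minimal illustrative sketch with assumed names, shapes, and parameterization, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_gcn, h_hgat, W, b):
    """Fuse two user representations with a learned gate:
    g = sigmoid([h_gcn ; h_hgat] W + b),  h = g * h_gcn + (1 - g) * h_hgat.
    The gate g lies in (0, 1), so the fused vector is an element-wise
    convex combination of the two input representations."""
    g = sigmoid(np.concatenate([h_gcn, h_hgat], axis=-1) @ W + b)
    return g * h_gcn + (1.0 - g) * h_hgat

d = 4                               # embedding dimension (assumed)
h_gcn = rng.normal(size=(3, d))     # structural features from the GCN branch
h_hgat = rng.normal(size=(3, d))    # interaction features from the HGAT branch
W = rng.normal(size=(2 * d, d))     # gate weights (learned in the real model)
b = np.zeros(d)

fused = gated_fusion(h_gcn, h_hgat, W, b)
assert fused.shape == (3, d)
```

Because the gate is a sigmoid, each fused coordinate stays between the corresponding coordinates of the two branch representations, letting the model interpolate per dimension between structural and interaction features.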
Authors
Miao Chenxiang; Liu Xiaoyang (School of Computer Science & Engineering, Chongqing University of Technology, Chongqing 400054, China)
Source
Application Research of Computers (《计算机应用研究》)
CSCD; Peking University Core Journal (北大核心)
2023, No. 6, pp. 1715-1720 (6 pages)
Funding
Key Project of Humanities and Social Sciences of the Chongqing Municipal Education Commission (23SKGH247)
2021 Project of the National Education Examination Research Program (GJK2021028)
Keywords
hypergraph
graph convolutional network
gating mechanism
multi-head attention mechanism
diffusion prediction