
Dynamic depth-width optimization for capsule graph convolutional network

Abstract

1 Introduction

Encouraged by the success of Convolutional Neural Networks (CNNs), many studies [1], known as Graph Convolutional Networks (GCNs), have borrowed the idea of convolution and redefined it for graph data. In graph-level classification tasks, classic GCN methods [2,3] generate graph embeddings from the learned node embeddings, treating each node's representation as multiple independent scalar features. However, they neglect the detailed mutual relations among different node features, such as position, direction, and connection. Inspired by CapsNet [4], which encodes each feature of an image as a vector (a capsule), CapsGNN [5] extracts multi-scale node features from different convolutional layers in the form of capsules. However, CapsGNN uses a static model structure for training, which inherently restricts its representation ability on different datasets.
Source: Frontiers of Computer Science (SCIE, EI, CSCD), 2023, No. 6, pp. 159-161 (3 pages).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62141214 and 62272171).
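
To make the idea of "multi-scale node features in the form of capsules" concrete, the sketch below stacks a few graph-convolution layers and keeps every layer's output, so each node ends up with one capsule (vector) per scale. This is only a minimal illustration in plain PyTorch under assumed settings (a dense normalized adjacency, illustrative layer sizes); class names such as SimpleGCNLayer and MultiScaleCapsuleExtractor and the parameter capsule_dim are hypothetical and are not taken from CapsGNN's implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class SimpleGCNLayer(nn.Module):
        """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""

        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim, bias=False)

        def forward(self, a_hat, h):
            return F.relu(a_hat @ self.linear(h))


    class MultiScaleCapsuleExtractor(nn.Module):
        """Stack several GCN layers and keep each layer's output,
        yielding one capsule (feature vector) per node per scale."""

        def __init__(self, in_dim, capsule_dim, num_layers):
            super().__init__()
            dims = [in_dim] + [capsule_dim] * num_layers
            self.layers = nn.ModuleList(
                SimpleGCNLayer(dims[i], dims[i + 1]) for i in range(num_layers)
            )

        def forward(self, a_hat, x):
            capsules = []
            h = x
            for layer in self.layers:
                h = layer(a_hat, h)   # node features at this receptive-field scale
                capsules.append(h)
            # shape: (num_nodes, num_layers, capsule_dim) -- one capsule per scale
            return torch.stack(capsules, dim=1)


    if __name__ == "__main__":
        num_nodes, in_dim = 6, 8
        x = torch.randn(num_nodes, in_dim)
        # random symmetric adjacency with self-loops (toy data for illustration)
        adj = (torch.rand(num_nodes, num_nodes) > 0.5).float()
        adj = ((adj + adj.t() + torch.eye(num_nodes)) > 0).float()
        deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        a_hat = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

        model = MultiScaleCapsuleExtractor(in_dim=in_dim, capsule_dim=16, num_layers=3)
        node_capsules = model(a_hat, x)
        print(node_capsules.shape)  # torch.Size([6, 3, 16])

In this sketch the number of stacked layers (the depth) and the capsule dimension (the width) are fixed at construction time; a dynamic depth-width optimization, as the title suggests, would instead adapt these choices to the dataset rather than keeping the structure static.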