Journal Articles
2 articles found
1. SOOP: Efficient Distributed Graph Computation Supporting Second-Order Random Walks
Authors: Songjie Niu, Dongyan Zhou. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, No. 5, pp. 985-1001 (17 pages).
The second-order random walk has recently been shown to effectively improve the accuracy in graph analysis tasks. Existing work mainly focuses on centralized second-order random walk (SOW) algorithms. SOW algorithms rely on edge-to-edge transition probabilities to generate next random steps. However, it is prohibitively costly to store all the probabilities for large-scale graphs, and restricting the number of probabilities to consider can negatively impact the accuracy of graph analysis tasks. In this paper, we propose and study an alternative approach, SOOP (second-order random walks with on-demand probability computation), that avoids the space overhead by computing the edge-to-edge transition probabilities on demand during the random walk. However, the same probabilities may be computed multiple times when the same edge appears multiple times in SOW, incurring extra cost for redundant computation and communication. We propose two optimization techniques that reduce the complexity of computing edge-to-edge transition probabilities to generate next random steps, and reduce the cost of communicating out-neighbors for the probability computation, respectively. Our experiments on real-world and synthetic graphs show that SOOP achieves orders of magnitude better performance than baseline precompute solutions, and it can efficiently compute SOW algorithms on billion-scale graphs.
Keywords: second-order random walk (SOW); Node2Vec; second-order PageRank; distributed graph computation; SOOP (second-order random walks with on-demand probability computation)
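To illustrate the edge-to-edge transition probabilities the abstract above refers to, here is a minimal, hypothetical sketch of a node2vec-style second-order walk that computes those probabilities on demand at each step instead of precomputing and storing them. The toy graph, the p/q parameters, and all function names are illustrative assumptions, not code from the paper.

```python
# Hypothetical sketch: node2vec-style second-order random walk with
# on-demand transition probability computation (no precomputed tables).
import random
from collections import defaultdict

def build_adjacency(edges):
    """Build an undirected adjacency map from an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def second_order_step(adj, prev, curr, p=1.0, q=1.0):
    """Pick the next node given the previous and current node.

    The unnormalized weight of moving from (prev -> curr) to neighbor x is
    1/p if x == prev (return step), 1 if x is also a neighbor of prev, and
    1/q otherwise, as in node2vec. Weights are computed here, on demand.
    """
    neighbors = list(adj[curr])
    weights = []
    for x in neighbors:
        if x == prev:
            weights.append(1.0 / p)
        elif x in adj[prev]:
            weights.append(1.0)
        else:
            weights.append(1.0 / q)
    return random.choices(neighbors, weights=weights, k=1)[0]

def second_order_walk(adj, start, length, p=1.0, q=1.0):
    """Generate one second-order random walk of the given length."""
    walk = [start]
    prev, curr = None, start
    for _ in range(length - 1):
        if not adj[curr]:
            break
        if prev is None:
            nxt = random.choice(list(adj[curr]))  # first step is first-order
        else:
            nxt = second_order_step(adj, prev, curr, p, q)
        walk.append(nxt)
        prev, curr = curr, nxt
    return walk

if __name__ == "__main__":
    adj = build_adjacency([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])
    print(second_order_walk(adj, start=0, length=8, p=0.5, q=2.0))
```

In a distributed setting the key cost is fetching the out-neighbors of the previous node to evaluate these weights, which is the communication overhead the paper's optimizations target.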
2. TransGPerf: Exploiting Transfer Learning for Modeling Distributed Graph Computation Performance
Authors: Songjie Niu, Shimin Chen. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, No. 4, pp. 778-791 (14 pages).
It is challenging to model the performance of distributed graph computation. Explicit formulation cannot easily capture the diversified factors and complex interactions in the system. Statistical learning methods require a large number of training samples to generate an accurate prediction model. However, it is time-consuming to run the required graph computation tests to obtain the training samples. In this paper, we propose TransGPerf, a transfer-learning-based solution that can exploit prior knowledge from a source scenario and utilize a manageable amount of training data for modeling the performance of a target graph computation scenario. Experimental results show that our proposed method is capable of generating accurate models for a wide range of graph computation tasks on PowerGraph and GraphX. It outperforms transfer learning methods proposed for other applications in the literature.
Keywords: performance modeling; distributed graph computation; deep learning; transfer learning
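To illustrate the transfer-learning idea in the abstract above, here is a minimal, hypothetical sketch: a small regressor is pretrained on plentiful source-scenario samples, then its shared layers are frozen and only the output head is fine-tuned on a few target-scenario samples. The architecture, feature dimensions, and synthetic data are assumptions for illustration and do not reproduce TransGPerf's actual model.

```python
# Hypothetical sketch: pretrain on a source scenario, fine-tune on a target
# scenario with few samples by freezing the shared layers.
import torch
import torch.nn as nn

def make_model(in_dim=16, hidden=64):
    shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                           nn.Linear(hidden, hidden), nn.ReLU())
    head = nn.Linear(hidden, 1)  # predicts a performance metric, e.g. runtime
    return shared, head

def train(shared, head, x, y, epochs=200, lr=1e-3, params=None):
    model = nn.Sequential(shared, head)
    opt = torch.optim.Adam(params if params is not None else model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# Source scenario: many samples (synthetic placeholders here).
x_src, y_src = torch.randn(1000, 16), torch.randn(1000, 1)
shared, head = make_model()
train(shared, head, x_src, y_src)

# Target scenario: few samples. Freeze shared layers, fine-tune only the head.
x_tgt, y_tgt = torch.randn(50, 16), torch.randn(50, 1)
for p in shared.parameters():
    p.requires_grad = False
target_model = train(shared, head, x_tgt, y_tgt, epochs=100,
                     params=head.parameters())
```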