Abstract: Graph data mining has become a crucial and unavoidable area of research. Large amounts of graph data are produced in many areas, such as bioinformatics, cheminformatics, and social networks. Scalable graph data mining methods are becoming increasingly popular and necessary as graph complexity grows. Frequent subgraph mining is one such area, where the task is to find frequently recurring patterns (subgraphs) across a dataset of graphs. To tackle this problem, many main-memory-based methods were proposed, but they proved inefficient as data sizes grew exponentially over time. In the past few years, several research groups have attempted to handle the Frequent Subgraph Mining (FSM) problem in multiple ways. Many authors have tried to achieve better performance using Graphics Processing Units (GPUs), which offer multi-fold improvements over in-memory approaches when dealing with large datasets. Later, Google's MapReduce model, together with the Hadoop framework, proved to be a major breakthrough in high-performance, large-scale batch processing. Although MapReduce brought many benefits, its heavy disk I/O and non-iterative programming model are of limited help in the FSM domain, since subgraph mining is an inherently iterative process. In recent years, Spark has emerged as the de facto industry standard thanks to its distributed in-memory computing capability, which is also a good fit for iterative styles of programming. In this survey, we cover how high-performance computing has tremendously improved FSM performance on transactional directed and undirected graphs, and we compare various FSM techniques based on experimental results.
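To make the contrast between MapReduce's disk-bound model and Spark's in-memory, iterative style concrete, the following is a minimal PySpark sketch, not taken from any surveyed system: the graph data, edge labels, and min_support threshold are purely illustrative. It shows only the first round of an Apriori-style FSM pass on a transactional dataset: each graph contributes its distinct labeled edges once, supports are counted across graphs, and infrequent candidates are pruned. The point of interest is that the input is cached in memory and could be reused by later candidate-extension rounds instead of being re-read from disk on every iteration, which is where MapReduce-style implementations pay their price.

    # Minimal illustrative sketch (hypothetical data and thresholds), not the survey's algorithm.
    from pyspark import SparkContext

    sc = SparkContext(appName="fsm-sketch")

    # Each transaction: (graph_id, list of labeled edges (src_label, edge_label, dst_label)).
    graphs = sc.parallelize([
        (0, [("C", "-", "C"), ("C", "=", "O")]),
        (1, [("C", "-", "C"), ("C", "-", "N")]),
        (2, [("C", "=", "O"), ("C", "-", "N")]),
    ]).cache()  # cached once; an iterative extension loop would reuse it each round

    min_support = 2  # a candidate must occur in at least 2 transaction graphs

    frequent_edges = (graphs
        .flatMap(lambda g: set(g[1]))        # distinct edges per graph (transaction-based support)
        .map(lambda e: (e, 1))
        .reduceByKey(lambda a, b: a + b)     # support = number of graphs containing the edge
        .filter(lambda kv: kv[1] >= min_support))

    print(frequent_edges.collect())
    # Larger candidate subgraphs would be grown from these frequent seeds in further
    # iterations, again scanning the cached `graphs` RDD rather than hitting disk.

In a full FSM system the extension step also requires canonical labeling and subgraph-isomorphism checks, which the sketch deliberately omits; it is meant only to illustrate why an in-memory, iterative engine fits this workload better than a non-iterative, disk-oriented one.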