Building a Productive Domain-Specific Cloud for Big Data Processing and Analytics Service
Authors: Yuzhong Yan, Mahsa Hanifi, Liqi Yi, Lei Huang. Journal of Computer and Communications, 2015, No. 5, pp. 107-117 (11 pages).
Abstract: Cloud Computing, as a disruptive technology, provides a dynamic, elastic and promising computing climate to tackle the challenges of big data processing and analytics. Hadoop and MapReduce are widely used open source frameworks in Cloud Computing for storing and processing big data in a scalable fashion. Spark is the latest parallel computing engine working together with Hadoop; it exceeds MapReduce performance via its in-memory computing and high-level programming features. In this paper, we present our design and implementation of a productive, domain-specific big data analytics cloud platform on top of Hadoop and Spark. To increase users' productivity, we created a variety of data processing templates to simplify the programming effort. We have conducted experiments on its productivity and performance with a few basic but representative data processing algorithms in the petroleum industry. Geophysicists can use the platform to productively design and implement scalable seismic data processing algorithms without handling the details of data management and the complexity of parallelism. The cloud platform generates a complete data processing application based on the user's kernel program and simple configurations, allocates resources, and executes it in parallel on top of Spark and Hadoop.
Keywords: Building a Productive Domain-Specific Cloud for Big Data Processing and Analytics Service
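To make the workflow the abstract describes concrete, the following is a minimal PySpark sketch of the "user kernel plus simple configuration" pattern: a user-written kernel function is mapped in parallel over input records by a platform-generated wrapper. The kernel, configuration keys, and HDFS paths here are hypothetical illustrations, not code from the paper.

# Minimal sketch (not from the paper): a user-supplied kernel applied to
# seismic traces in parallel with PySpark, driven by a simple configuration.
from pyspark import SparkConf, SparkContext

# Hypothetical user configuration -- key names and paths are illustrative only.
config = {
    "input_path": "hdfs:///data/seismic/traces.txt",
    "output_path": "hdfs:///results/filtered_traces",
    "scale": 2.0,
}

def kernel(trace, scale):
    # User-written kernel: scale every sample in one comma-separated trace.
    samples = [float(x) for x in trace.split(",")]
    return ",".join(str(x * scale) for x in samples)

def run(cfg):
    # Roughly the shape of the platform-generated wrapper: read input,
    # map the kernel over all traces, and persist the results.
    conf = SparkConf().setAppName("domain-specific-template-demo")
    sc = SparkContext(conf=conf)
    try:
        traces = sc.textFile(cfg["input_path"])
        results = traces.map(lambda t: kernel(t, cfg["scale"]))
        results.saveAsTextFile(cfg["output_path"])
    finally:
        sc.stop()

if __name__ == "__main__":
    run(config)

In this sketch the geophysicist would only write the kernel and the configuration dictionary; the surrounding Spark boilerplate stands in for what the paper's platform generates automatically.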