
Pinpointing and scheduling access conflicts to improve internal resource utilization in solid-state drives (cited by 2)

Abstract: Modern solid-state drives (SSDs) integrate more internal resources to achieve higher capacity. Parallelizing accesses across these internal resources can potentially enhance SSD performance. However, exploiting parallelism inside SSDs is challenging owing to real-time access conflicts. In this paper, we propose a highly parallelizable I/O scheduler (PIOS) that improves internal resource utilization in SSDs from the perspective of I/O scheduling. Specifically, we first pinpoint conflicting flash requests precisely during address translation in the Flash Translation Layer (FTL). Then, we introduce conflict-eliminated requests (CERs) to reorganize the I/O requests in the device-level queue by dispatching conflicting flash requests to different CERs. Owing to the significant performance discrepancy between flash read and write operations, PIOS employs differentiated scheduling schemes for the read and write CER queues so that internal resources are always allocated to the more valuable conflicting CERs. A small-dominant-size-prioritized scheduling policy for the write queue significantly decreases average write latency, while a high-parallelism-density-prioritized policy for the read queue better utilizes resources by aggressively exploiting internal parallelism. Our evaluation results show that PIOS achieves better SSD performance than existing I/O schedulers implemented in both SSD devices and operating systems.
Source: Frontiers of Computer Science (SCIE, EI, CSCD), 2019, No. 1, pp. 35-50 (16 pages).
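The abstract's core idea can be illustrated with a short sketch: pack conflicting flash requests into separate CERs so that each CER touches every parallel unit at most once, then order the write-CER queue by small dominant size and the read-CER queue by high parallelism density. This is a hypothetical reconstruction, not the paper's implementation; the class names, the definition of dominant size (largest per-request size in a CER) and parallelism density (distinct parallel units per CER) are assumptions made for illustration.

```python
class FlashRequest:
    """An illustrative flash request targeting one (channel, chip) parallel unit."""
    def __init__(self, op, channel, chip, size):
        self.op = op            # 'read' or 'write'
        self.channel = channel
        self.chip = chip
        self.size = size        # request size in flash pages

    @property
    def unit(self):
        return (self.channel, self.chip)


def build_cers(requests):
    """Greedily pack requests into conflict-eliminated requests (CERs):
    within one CER, no two requests target the same parallel unit,
    so every request in a CER can be serviced concurrently."""
    cers = []
    for req in requests:
        for cer in cers:
            if all(req.unit != r.unit for r in cer):
                cer.append(req)
                break
        else:
            cers.append([req])  # conflicts with every existing CER: open a new one
    return cers


def dominant_size(cer):
    # Assumed definition: the largest request in the CER, which bounds
    # the CER's service time on its busiest parallel unit.
    return max(r.size for r in cer)


def parallelism_density(cer):
    # Assumed definition: how many distinct parallel units the CER exercises.
    return len({r.unit for r in cer})


def schedule(write_cers, read_cers):
    """Differentiated scheduling: writes ordered by small dominant size,
    reads by high parallelism density."""
    write_order = sorted(write_cers, key=dominant_size)
    read_order = sorted(read_cers, key=parallelism_density, reverse=True)
    return write_order, read_order
```

With three write requests, two of which collide on chip (0, 0), `build_cers` yields two CERs; the CER with the smaller dominant size is dispatched first from the write queue, while the CER spanning more parallel units leads the read queue.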

