Journal Article

ELF: Shared Cache Management through Eliminating Dead Blocks and Filtering Less Reused Lines (Cited by: 1)
Abstract Modern CMP processors usually employ a shared last-level cache (LLC) based on the LRU replacement policy or its approximations. However, as the LLC grows in capacity and associativity, the performance gap between LRU and the theoretical optimal replacement algorithm has widened. Various alternative cache management techniques have been proposed to resolve this problem, but most cover only a single type of memory access behavior and exploit little frequency information of cache accesses, and thus offer limited performance benefits. In this paper, we propose a unified cache management policy, ELF, which covers a variety of memory behaviors and exploits both the recency and the frequency information of a program simultaneously. Motivated by the observation that cache blocks often exhibit a small number of uses during their lifetime in the LLC, ELF is designed to (1) predict dead lines through a counter-based mechanism and evict them early, and (2) filter less-reused blocks through dynamic insertion and promotion policies. Thereby, potentially live blocks are retained and most of the working set remains undisturbed in the ELF-managed L2 cache. Our evaluation on 4-way CMPs shows that ELF improves overall performance by 14.5% on average over the LRU policy, and achieves speedups of 1.06x over PIPP and 1.09x over TADIP.
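The abstract's two mechanisms can be illustrated with a minimal sketch of one cache set. This is not the authors' implementation: the `dead_threshold`, the near-LRU `insert_pos`, and the one-step promotion rule are illustrative assumptions (ELF learns its counter thresholds and insertion/promotion behavior dynamically).

```python
# Hedged sketch of the two ideas the abstract describes:
# (1) a per-block use counter marks likely-dead blocks as preferred victims,
# (2) new blocks are inserted near the LRU end so low-reuse data is filtered
#     out quickly, while reused blocks are promoted toward the MRU end.
# All parameter values here are assumptions, not the paper's.

class CacheBlock:
    def __init__(self, tag):
        self.tag = tag
        self.uses = 0          # accesses during the block's current lifetime

class ELFLikeSet:
    def __init__(self, assoc=8, dead_threshold=1, insert_pos=None):
        self.assoc = assoc
        self.dead_threshold = dead_threshold        # learned online in ELF proper
        # insert near the LRU end (LIP-style assumption) to filter low-reuse lines
        self.insert_pos = assoc - 1 if insert_pos is None else insert_pos
        self.blocks = []       # index 0 = MRU, last index = LRU

    def _victim(self):
        # prefer a predicted-dead block (few uses), scanning from the LRU end;
        # fall back to plain LRU if every block looks live
        for blk in reversed(self.blocks):
            if blk.uses <= self.dead_threshold:
                return blk
        return self.blocks[-1]

    def access(self, tag):
        for i, blk in enumerate(self.blocks):
            if blk.tag == tag:
                blk.uses += 1
                # promote one position on reuse: live blocks drift toward MRU
                self.blocks.insert(max(i - 1, 0), self.blocks.pop(i))
                return True    # hit
        if len(self.blocks) >= self.assoc:
            self.blocks.remove(self._victim())
        self.blocks.insert(min(self.insert_pos, len(self.blocks)), CacheBlock(tag))
        return False           # miss
```

In this toy version a block that is never reused stays near the LRU end and is the first eviction candidate, so a reused (potentially live) block survives an incoming stream of single-use lines.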
Source: Chinese Journal of Computers (《计算机学报》; EI, CSCD, PKU core journal), 2011, No. 1: 143-153 (11 pages)
Funding: National "863" High-Tech R&D Program of China (2008AA01Z111); IBM University Joint Research Project (JSA200906010); Graduate Innovation Fund of the University of Science and Technology of China (KD2008059); National Natural Science Foundation of China Key Project (60533020)
Keywords: multi-core; shared cache; insertion policy; replacement algorithms; counter-based algorithms
Related Literature

References (17)

  • 1Belady L A. A study of replacement algorithms for a virtual-storage computer. IBM Systems Journal, 1966, 5(2): 78-101.
  • 2Qureshi M K, Patt Y N. Utility-based cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches//Proceedings of the 39th Annual IEEE/ACM International Symposium on Microarchitecture. Washington: IEEE Computer Society, 2006: 423-432.
  • 3Stone H S, Turek J, Wolf J L. Optimal partitioning of cache memory. IEEE Transactions on Computers, 1992, 41 (9): 1054-1068.
  • 4Suh G E, Rudolph L, Devadas S. Dynamic partitioning of shared cache memory. Journal of Supercomputing, 2004, 28(1): 7-26.
  • 5Kim S, Chandra D, Solihin Y. Fair cache sharing and partitioning in a chip multiprocessor architecture//Proceedings of the 13th International Conference on Parallel Architectures and Compilation Techniques. Washington: IEEE Computer Society, 2004: 111-122.
  • 6Iyer R R. CQoS: A framework for enabling QoS in shared caches of CMP platforms//Proceedings of the 18th Annual International Conference on Supercomputing. New York: ACM, 2004: 257-266.
  • 7Chang J, Sohi G S. Cooperative cache partitioning for chip multiprocessors//Proceedings of the 21st Annual International Conference on Supercomputing. New York: ACM, 2007: 242-252.
  • 8Qureshi M K, Jaleel A, Patt Y N, Steely S C Jr, Emer J. Adaptive insertion policies for high performance caching//Proceedings of the 34th Annual International Symposium on Computer Architecture. New York: ACM, 2007: 381-391.
  • 9Kron J D, Prumo B, Loh G H. Double-DIP: Augmenting DIP with adaptive promotion policies to manage shared L2 caches//Proceedings of the 2nd Workshop on Chip Multiprocessor Memory Systems and Interconnects. Beijing, China, 2008.
  • 10Jaleel A, Hasenplaugh W, Qureshi M, Sebot J, Steely S Jr, Emer J. Adaptive insertion policies for managing shared caches//Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques. New York: ACM, 2008: 208-219.

Secondary References (11)

  • 1Kalla R, Sinharoy B et al. IBM POWER5 chip: A dual-core multithreaded processor. IEEE Micro, 2004, 24(2): 40-47
  • 2Kongetira P, Aingaran K et al. Niagara: A 32-way multithreaded Sparc processor. IEEE Micro, 2005, 25(2): 21-29
  • 3Kim S, Chandra D, Solihin Y. Fair Cache sharing and partitioning in a chip multiprocessor architecture//Proceedings of the 13th International Conference on Parallel Architectures and Compilation Techniques. Orlando, Florida, 2004:111-122
  • 4Qureshi M K, Patt Y N. Utility-based Cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches//Proceedings of the 39th Annual IEEE/ ACM International Symposium on Microarchitecture. Antibes Juan-les-Pins, France, 2006:423-432
  • 5Suh G E, Rudolph L, Devadas S. Dynamic partitioning of shared Cache memory. Journal of Supercomputing, 2004, 28(1): 7-26
  • 6Iyer R. CQoS: A framework for enabling QoS in shared caches of CMP platforms//Proceedings of the 18th Annual International Conference on Supercomputing. Saint-Malo, France, 2004: 257-266
  • 7Iyer R, Zhao L, Guo F et al. QoS policies and architecture for Cache/memory in CMP platforms. SIGMETRICS Performance Evaluation Review, 2007, 35(1): 25-36
  • 8Chiou D, Jain P, Rudolph L et al. Application-specific memory management for embedded systems using software-controlled caches//Proceedings of the 37th Conference on Design Automation. Los Angeles, California, United States, 2000: 416-419
  • 9Magnusson P S, Christensson M, Eskilson J et al. Simics: A full system simulation platform. Computer, 2002, 35 (2): 50-58
  • 10Luo K, Gummaraju J, Franklin M. Balancing throughput and fairness in SMT processors//Proceedings of the 21st International Symposium on Performance Analysis of Systems and Software. Tucson, AZ, 2001:164-171

Co-cited Literature (11)

Jointly Cited Literature (19)

  • 1Hetherington R. The UltraSPARC T1 Processor: Power Efficient Throughput Computing [M]. Redwood, California: Sun Microsystems, 2005.
  • 2Ramanathan R. Intel Multi-Core Processors: Making the Move to Quad-Core and Beyond [M]. Santa Clara: Intel Corporation, 2006.
  • 3Mattson T G, Henry G. An overview of the Intel TFLOPS supercomputer [J]. Intel Technology Journal, 1998, 1(1): 1-12.
  • 4Kim S, Chandra D, Solihin Y. Fair cache sharing and partitioning in a chip multiprocessor architecture [C] //Proc of PACT. Piscataway, NJ: IEEE Computer Society, 2004:111-122.
  • 5Chen S, Gibbons P B, Kozuch M, et al. Scheduling threads for constructive cache sharing on CMPs [C] //Proc of the 19th Annual ACM Symp on Parallel Algorithms and Architectures. New York: ACM, 2007: 105-115.
  • 6Iyer R. CQoS: A framework for enabling QoS in shared caches of CMP platforms [C] //Proc of the 18th ACM Int Conf on Supercomputing (ICS-18). New York: ACM, 2004: 257-266.
  • 7Qureshi M K, Patt Y N. Utility-based cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches [C] //Proc of the 39th Annual Int Symp on Microarchitecture (MICRO 39). Los Alamitos, CA: IEEE Computer Society, 2006: 423-432.
  • 8Muralidhara S P, Kandemir M, Raghavan P, et al. Intra-application cache partitioning [C] //Proc of the 24th IEEE/ACM Int Parallel and Distributed Processing Symp. New York: ACM, 2010: 1-12.
  • 9Kharbutli M, Solihin Y. Counter-based cache replacement and bypassing algorithms [J]. IEEE Trans on Computers, 2008, 57(4): 433-447.
  • 10Qureshi M K, Jaleel A, Patt Y N, et al. Adaptive insertion policies for high-performance caching [C] //Proc of the 34th Annual Int Symp on Computer Architecture. New York: ACM, 2007:381-391.

Citing Literature (1)

Secondary Citing Literature (5)
