Journal Articles
2 articles found
1. Sizing Practice for Ultra-fine Polyester Tea Towel Fabric (Cited: 1)
Authors: Mao Lei, Wang Guoli. 《棉纺织技术》 (Cotton Textile Technology), CAS CSCD PKU Core Journal, 2010, No. 11, pp. 57-58 (2 pages)
Abstract: To develop a tea towel fabric whose pile warp is ultra-fine polyester low-elasticity network filament and whose ground warp consists of polyester spun plied yarn and single cotton yarn, the warp sizing process for this product was analyzed. The pile warp was sized under the principle of "heavy coating with adequate penetration, low elongation, a complete and uniform size film, and easy desizing." The single cotton yarn in the ground warp was first package-sized and then sized together with the polyester spun plied yarn on a single-size-box sizing machine. As a result, loom efficiency reached 91%, and the resulting tea towel fabric had a soft hand and good water absorbency, with all physical performance indices meeting service requirements.
Keywords: ultra-fine polyester; sizing; ground warp yarn; warp yarn; size formulation; package sizing
2. CWLP: coordinated warp scheduling and locality-protected cache allocation on GPUs (Cited: 1)
Authors: Yang ZHANG, Zuo-cheng XING, Cang LIU, Chuan TANG. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2018, No. 2, pp. 206-220 (15 pages)
Abstract: As we approach the exascale era in supercomputing, designing a balanced computer system with powerful computing ability and low power requirements has become increasingly important. The graphics processing unit (GPU) is an accelerator widely used in most recent supercomputers. It adopts a large number of threads to hide long latencies with high energy efficiency. In contrast to their powerful computing ability, GPUs have only a few megabytes of fast on-chip memory per streaming multiprocessor (SM). The GPU cache is inefficient because of a mismatch between the throughput-oriented execution model and the cache hierarchy design. At the same time, current GPUs fail to handle burst-mode long access latency because of poor warp scheduling. Thus, the benefits of the GPU's high computing ability are reduced dramatically by poor cache management and warp scheduling, which limit system performance and energy efficiency. In this paper, we put forward a coordinated warp scheduling and locality-protected (CWLP) cache allocation scheme to make full use of data locality and hide latency. We first present a locality-protected cache allocation method based on the instruction program counter (LPC) to improve cache performance. Specifically, we use a PC-based locality detector to collect the reuse information of each cache line and employ a prioritised cache allocation unit (PCAU), which coordinates the data reuse information with time-stamp information to evict the lines with the least reuse possibility. Moreover, the locality information is used by the warp scheduler to create an intelligent warp reordering scheme that captures locality and hides latency. Simulation results show that CWLP provides a speedup of up to 19.8% and an average improvement of 8.8% over the baseline methods.
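The eviction idea in the abstract above can be illustrated with a minimal sketch: each cache line carries a reuse counter and an insertion time stamp, and the victim on a miss is the line with the fewest observed reuses, ties broken by age. This is only an illustration of the general least-reuse-first policy the abstract describes; the class and method names (`CacheSet`, `access`) and the per-PC reuse table are assumptions, not the paper's actual PCAU design.

```python
# Illustrative sketch of a locality-protected eviction policy: lines with
# the least observed reuse are evicted first, with the insertion time stamp
# as a tie-breaker (older lines go first). A per-PC reuse table stands in
# for the paper's PC-based locality detector. Names are hypothetical.
from dataclasses import dataclass


@dataclass
class Line:
    tag: int
    pc: int       # program counter of the instruction that allocated the line
    reuses: int = 0
    stamp: int = 0


class CacheSet:
    def __init__(self, ways: int):
        self.ways = ways
        self.lines: list[Line] = []
        self.clock = 0
        self.pc_reuse: dict[int, int] = {}  # PC -> observed reuse count

    def access(self, tag: int, pc: int) -> bool:
        """Return True on a hit; on a miss, insert the line, evicting if full."""
        self.clock += 1
        for line in self.lines:
            if line.tag == tag:
                line.reuses += 1
                self.pc_reuse[line.pc] = self.pc_reuse.get(line.pc, 0) + 1
                return True
        if len(self.lines) == self.ways:
            # Victim: fewest reuses; among those, the oldest (smallest stamp).
            victim = min(self.lines, key=lambda l: (l.reuses, l.stamp))
            self.lines.remove(victim)
        self.lines.append(Line(tag=tag, pc=pc, stamp=self.clock))
        return False
```

In a 2-way set, a line that has been re-referenced survives a miss while a never-reused line is evicted, which is the behavior a reuse-blind LRU policy cannot guarantee under GPU-style streaming access.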
Keywords: locality; graphics processing unit (GPU); cache allocation; warp scheduling