Journal Articles
3 articles found
1. The Survey and Research Mechanism of Japan's "传统建造物群保存地区" (Preservation Districts for Groups of Traditional Buildings) System and Its Implications for China (cited by 6)
Authors: Wu Yun, Shen Jihuang, Xu Lei. 《国际城市规划》 (Urban Planning International), CSSCI / Peking University Core, 2011, No. 1, pp. 77-81 (5 pages).
Abstract: China's current protection of historic districts and historic villages lacks an independent, complete system of survey and research work, and survey results are seldom evaluated. Under Japan's "Preservation Districts for Groups of Traditional Buildings" system for protecting historic districts and villages, the survey-and-research stage is implemented concretely as the "Survey for the Preservation of Groups of Traditional Buildings". This survey is the first step of, and a necessary precondition for, protection work, and its independent, complete mode of operation guarantees the authority of the survey results. The distinctive mechanism of this survey work offers important lessons for how survey and research should be carried out in China's protection of historical and cultural heritage.
Keywords: Preservation Districts for Groups of Traditional Buildings; Survey for the Preservation of Groups of Traditional Buildings; survey and research
2. An Overview and Characteristics of Japan's Preservation Districts for Groups of Traditional Buildings (cited by 3)
Author: Ye Hua. 《国外城市规划》 (Urban Planning Overseas), 1997, No. 3, pp. 11-14 (4 pages).
Abstract: This paper outlines the five main components of Japan's preservation-district system for groups of traditional buildings, i.e. the five main stages of its practical application, and discusses the system's characteristics and development trends.
Keywords: groups of traditional buildings; preservation district; Japan; urban construction planning
3. CWLP: coordinated warp scheduling and locality-protected cache allocation on GPUs (cited by 1)
Authors: Yang ZHANG, Zuo-cheng XING, Cang LIU, Chuan TANG. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2018, No. 2, pp. 206-220 (15 pages).
Abstract: As we approach the exascale era in supercomputing, designing a balanced computer system with powerful computing ability and low power requirements has become increasingly important. The graphics processing unit (GPU) is an accelerator widely used in most recent supercomputers. It adopts a large number of threads to hide long latencies with high energy efficiency. In contrast to their powerful computing ability, GPUs have only a few megabytes of fast on-chip memory per streaming multiprocessor (SM). The GPU cache is inefficient because of a mismatch between the throughput-oriented execution model and the cache hierarchy design. At the same time, current GPUs fail to handle burst-mode long access latencies because of the GPU's poor warp scheduling method. Thus, the benefits of the GPU's high computing ability are reduced dramatically by poor cache management and warp scheduling, which limit system performance and energy efficiency. In this paper, we put forward a coordinated warp scheduling and locality-protected (CWLP) cache allocation scheme to make full use of data locality and hide latency. We first present a locality-protected cache allocation method based on the instruction program counter (LPC) to improve cache performance. Specifically, we use a PC-based locality detector to collect reuse information for each cache line, and employ a prioritised cache allocation unit (PCAU) that coordinates the data-reuse information with time-stamp information to evict the lines with the least reuse possibility. Moreover, the locality information is used by the warp scheduler in an intelligent warp reordering scheme to capture locality and hide latency.
Simulation results show that CWLP provides a speedup of up to 19.8% and an average improvement of 8.8% over the baseline methods.
Keywords: locality; graphics processing unit (GPU); cache allocation; warp scheduling
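The eviction idea described in the CWLP abstract, protecting cache lines whose allocating load instruction (identified by its PC) has demonstrated reuse, and falling back to age for ties, can be illustrated with a toy model. This is a minimal sketch of the general technique, not the paper's actual hardware design: the class and method names are invented for illustration, and the real PCAU coordinates reuse and time-stamp information in hardware, per warp and per set.

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    tag: int
    pc: int          # program counter of the load that allocated this line
    timestamp: int   # last-access time, for age-based tie-breaking

class LocalityProtectedCache:
    """Toy single-set cache: on a miss with a full set, evict the line
    whose allocating PC has shown the least reuse; break ties by age."""

    def __init__(self, ways=4):
        self.ways = ways
        self.lines = []   # the one cache set, at most `ways` entries
        self.reuse = {}   # pc -> observed reuse count (locality detector)
        self.clock = 0

    def access(self, tag, pc):
        """Return True on a hit, False on a miss (with allocation)."""
        self.clock += 1
        for line in self.lines:
            if line.tag == tag:
                # Hit: credit the allocating PC with one reuse.
                self.reuse[line.pc] = self.reuse.get(line.pc, 0) + 1
                line.timestamp = self.clock
                return True
        # Miss: if the set is full, evict the least-reused, then oldest, line.
        if len(self.lines) >= self.ways:
            victim = min(self.lines,
                         key=lambda l: (self.reuse.get(l.pc, 0), l.timestamp))
            self.lines.remove(victim)
        self.lines.append(CacheLine(tag, pc, self.clock))
        return False
```

With this policy, a line installed by a PC with a history of reuse survives eviction pressure from streaming accesses whose PCs never see hits, which is the "locality-protected" behaviour the abstract describes.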