Journal Articles
251 articles found
1. Shared Cache Based on Content Addressable Memory in a Multi-Core Architecture
Authors: Allam Abumwais, Mahmoud Obaid. Computers, Materials & Continua (SCIE, EI), 2023, Issue 3, pp. 4951-4963.
Modern shared-memory multi-core processors typically have shared Level 2 (L2) or Level 3 (L3) caches. Cache bottlenecks and replacement strategies are the main problems of such architectures, where multiple cores try to access the shared cache simultaneously. The main problem in improving memory performance is the shared cache architecture and cache replacement. This paper documents the implementation of a Dual-Port Content Addressable Memory (DPCAM) and a modified Near-Far Access Replacement Algorithm (NFRA), which were previously proposed as a shared L2 cache layer in a multi-core processor. Standard Performance Evaluation Corporation (SPEC) Central Processing Unit (CPU) 2006 benchmark workloads are used to evaluate the benefit of the shared L2 cache layer. Results show improved performance of the multi-core processor's DPCAM and NFRA algorithms, corresponding to a higher number of concurrent accesses to shared memory. The new architecture significantly increases system throughput and records performance improvements of up to 8.7% on various types of SPEC 2006 benchmarks. The miss rate is also improved by about 13%, with some exceptions in the sphinx3 and bzip2 benchmarks. These results could open a new window for solving the long-standing problems with shared cache in multi-core processors.
Keywords: multi-core processor; shared cache; content addressable memory; dual-port CAM; replacement algorithm; benchmark program
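The entry above is built around a content-addressable lookup in the shared L2 layer. As a rough, software-level illustration of the CAM idea only (not the authors' DPCAM design or the NFRA policy; the entry count and field widths are assumptions), the C sketch below searches every stored tag for a match, which hardware would do for all entries in a single cycle.

    #include <stdint.h>
    #include <stdio.h>

    #define CAM_ENTRIES 64

    /* One CAM entry: a tag, a valid bit, and the associated data word. */
    typedef struct {
        uint32_t tag;
        uint32_t data;
        int      valid;
    } cam_entry_t;

    /* Search every entry for a matching tag; returns the index or -1 on a miss.
     * Hardware compares all entries in parallel; software simply loops. */
    static int cam_lookup(const cam_entry_t cam[], uint32_t tag) {
        for (int i = 0; i < CAM_ENTRIES; i++)
            if (cam[i].valid && cam[i].tag == tag)
                return i;
        return -1;
    }

    int main(void) {
        cam_entry_t cam[CAM_ENTRIES] = {0};
        cam[5] = (cam_entry_t){ .tag = 0xABCD, .data = 42, .valid = 1 };
        int hit = cam_lookup(cam, 0xABCD);
        printf("hit at index %d, data = %u\n", hit, hit >= 0 ? cam[hit].data : 0);
        return 0;
    }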
2. Multi-core optimization for conjugate gradient benchmark on heterogeneous processors
Authors: 邓林, 窦勇. Journal of Central South University (SCIE, EI, CAS), 2011, Issue 2, pp. 490-498.
Developing parallel applications on heterogeneous processors faces the challenge of the "memory wall" due to the limited capacity of local storage, limited bandwidth, and long latency of memory access. Aiming at this problem, a parallelization approach was proposed with six memory optimization schemes for CG, four of which target various kinds of sparse matrix-vector multiplication (SPMV) operations. Conducted on the IBM QS20, the parallelization approach reaches up to 21 and 133 times speedup with sizes A and B, respectively, compared with a single Power processor element. Finally, the conclusion is drawn that the peak memory-access bandwidth of the Cell BE can be reached in SPMV, that simple computation is more efficient on heterogeneous processors, and that loop unrolling can hide local-storage access latency while executing scalar operations on SIMD cores.
Keywords: multi-core processor; NAS; parallelization; CG; memory optimization
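Four of the six memory optimizations in the entry above target SpMV inside the CG benchmark. For reference, the sketch below is the generic compressed sparse row (CSR) SpMV kernel that such optimizations start from; it is not the Cell BE-specific version described in the paper.

    #include <stdio.h>

    /* y = A * x with A stored in compressed sparse row (CSR) format. */
    static void spmv_csr(int nrows, const int *row_ptr, const int *col_idx,
                         const double *val, const double *x, double *y) {
        for (int i = 0; i < nrows; i++) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                sum += val[k] * x[col_idx[k]];
            y[i] = sum;
        }
    }

    int main(void) {
        /* 3x3 matrix with 4 non-zeros: [[2,0,1],[0,3,0],[0,0,4]] */
        int row_ptr[] = {0, 2, 3, 4};
        int col_idx[] = {0, 2, 1, 2};
        double val[]  = {2, 1, 3, 4};
        double x[]    = {1, 1, 1}, y[3];
        spmv_csr(3, row_ptr, col_idx, val, x, y);
        printf("y = [%g %g %g]\n", y[0], y[1], y[2]);
        return 0;
    }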
3. Parallel Processing Design for LTE PUSCH Demodulation and Decoding Based on Multi-Core Processor
Authors: Zhang Ziran, Li Jun, Li Changxiao (ZTE Corporation, Shenzhen 518057, P.R. China). ZTE Communications, 2009, Issue 1, pp. 54-58.
The Long Term Evolution (LTE) system imposes high requirements on dispatching delay. Moreover, the very high air-interface rate of LTE demands strong processing capability from the devices that process the baseband signals. Consequently, a single-core processor cannot meet the requirements of the LTE system. This paper analyzes how to use multi-core processors to achieve parallel processing of uplink demodulation and decoding in LTE systems and designs an approach to parallel processing. The test results prove that this approach works quite well.
Keywords: core; LTE; PUSCH demodulation and decoding; multi-core processor; parallel processing design
4. MVSim: A Fast, Scalable, and Accurate Architecture Simulator for VLIW Multi-Core Vector Processors
Authors: 刘仲, 李程, 田希, 刘胜, 邓让钰, 钱程东. 《计算机工程与科学》 (Computer Engineering & Science; CSCD, PKU Core), 2024, Issue 2, pp. 191-199.
This paper presents MVSim, a fast, scalable, and accurate architecture simulator for VLIW multi-core vector processors. It designs a scalable VLIW multi-core vector processor model, a multi-level memory hierarchy model, and a multi-core performance model; it implements cycle-accurate simulation of the instruction set architecture and efficient functional simulation of the cache, DMA, and multi-core synchronization components; and it uses multi-threading to achieve efficient, scalable simulation of multi-core processors. Experimental results show that MVSim accurately simulates target-program execution on multi-core processors, produces fully correct simulation results, and scales well. MVSim's average simulation speed is 227 times that of RTL simulation and 5 times that of CCS, with an average performance error of about 2.9%.
Keywords: architecture simulator; VLIW; multi-core vector processor model; performance model; cycle-accurate simulator
5. An Efficient RNN Inference Method for a Long-Vector Processor
Authors: 苏华友, 陈抗抗, 杨乾明. 《国防科技大学学报》 (Journal of National University of Defense Technology; EI, CAS, CSCD, PKU Core), 2024, Issue 1, pp. 121-130.
The ever-increasing depth of models and the varying lengths of the processed sequences pose great challenges to optimizing the performance of recurrent neural networks (RNNs) on different processors. Targeting the independently developed long-vector processor FT-M7032, an efficient RNN acceleration engine was implemented. The engine adopts a row-major matrix-vector multiplication algorithm and data-aware multi-core parallelism to improve the efficiency of matrix-vector multiplication, a two-level kernel fusion optimization to reduce the overhead of transferring temporary data, and hand-written assembly for various operators to further exploit the performance potential of the long-vector processor. Experiments show that the RNN inference engine on the long-vector processor achieves high performance: compared with a multi-core ARM CPU and an Intel Golden CPU, the RNN-style long short-term memory (LSTM) model obtains speedups of up to 62.68x and 3.12x, respectively.
Keywords: multi-core DSP; long-vector processor; recurrent neural network; parallel optimization
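The core kernel in the entry above is a row-major matrix-vector multiply distributed across cores. The sketch below is a plain OpenMP version of that kernel for context; the FT-M7032-specific data-aware partitioning, kernel fusion, and hand-written assembly are not reproduced here.

    #include <stdio.h>

    /* y = W * x with W stored row-major (n_rows x n_cols).
     * Each row's dot product is independent, so rows are split across cores. */
    static void gemv_row_major(int n_rows, int n_cols,
                               const float *W, const float *x, float *y) {
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n_rows; i++) {
            float sum = 0.0f;
            for (int j = 0; j < n_cols; j++)
                sum += W[(long)i * n_cols + j] * x[j];
            y[i] = sum;
        }
    }

    int main(void) {
        float W[2 * 3] = {1, 2, 3, 4, 5, 6};
        float x[3] = {1, 1, 1}, y[2];
        gemv_row_major(2, 3, W, x, y);
        printf("y = [%g %g]\n", y[0], y[1]); /* 6 15 */
        return 0;
    }

Compile with an OpenMP flag (e.g. -fopenmp) to parallelize the outer loop; without it the pragma is ignored and the code still runs serially.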
6. NM-SpMM: A Semi-Structured Sparse Matrix Multiplication Algorithm for a Domestic Heterogeneous Vector Processor
Authors: 姜晶菲, 何源宏, 许金伟, 许诗瑶, 钱希福. 《计算机工程与科学》 (Computer Engineering & Science; CSCD, PKU Core), 2024, Issue 7, pp. 1141-1150.
Deep neural networks have achieved excellent results in natural language processing, computer vision, and other fields. With the growing data scale of intelligent applications and the rapid development of large models, the demands on deep neural network inference performance keep rising, and N:M semi-structured sparsification has become one of the key techniques for balancing compute requirements against application quality. The domestic heterogeneous vector processor FT-M7032 leaves ample room for exploiting data-level and instruction-level parallelism in intelligent model processing. To handle the diverse sparsity patterns of N:M semi-structured sparse models, a flexibly configurable sparse matrix multiplication algorithm for FT-M7032, NM-SpMM, is proposed. NM-SpMM designs an efficient compressed-offset-address sparse encoding format, COA, which keeps the semi-structured parameter configuration from affecting the memory accesses and computation on sparse data. Based on the COA encoding, NM-SpMM applies fine-grained optimizations to sparse matrix computations of different dimensions. Experimental results on a single FT-M7032 core show that NM-SpMM achieves speedups of 1.73x to 21.00x over dense matrix multiplication, and 0.04x to 1.04x relative to an NVIDIA V100 GPU using the cuSPARSE sparse computing library.
Keywords: deep neural network; graphics processing unit; vector processor; sparse matrix multiplication; pipeline
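N:M semi-structured sparsity keeps only N non-zeros in every group of M consecutive weights, which is what the COA format above compresses. Since the exact COA layout is not given here, the sketch below uses a generic 2:4 layout (two values plus their in-group offsets per group of four); the field names and layout are assumptions, not the paper's format.

    #include <stdio.h>

    /* 2:4 semi-structured sparse dot product: each group of 4 dense positions
     * keeps only 2 non-zeros, stored as (value, offset-within-group) pairs. */
    static float dot_2of4(int n_groups, const float *vals,
                          const unsigned char *offs, const float *x) {
        float sum = 0.0f;
        for (int g = 0; g < n_groups; g++) {
            sum += vals[2 * g]     * x[4 * g + offs[2 * g]];
            sum += vals[2 * g + 1] * x[4 * g + offs[2 * g + 1]];
        }
        return sum;
    }

    int main(void) {
        /* dense row [0, 5, 0, 7,  1, 0, 2, 0] compressed into 2:4 form */
        float vals[] = {5, 7, 1, 2};
        unsigned char offs[] = {1, 3, 0, 2};
        float x[8] = {1, 1, 1, 1, 1, 1, 1, 1};
        printf("dot = %g\n", dot_2of4(2, vals, offs, x)); /* 5+7+1+2 = 15 */
        return 0;
    }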
7. A 3D Graphics Processor Based on a Clipless Graphics Pipeline
Authors: 赵皓宇, 王重熙, 宋鹏皓, 章隆兵. 《高技术通讯》 (High Technology Letters; CAS, PKU Core), 2024, Issue 7, pp. 681-691.
Traditional 3D graphics processors obtain the visible region of a triangle through clipping. However, clipping has long latency and high hardware cost, and a large number of clipping operations degrades GPU performance. This paper designs a 3D graphics processor chip based on the OpenGL ES 2.0 standard with a unified rendering architecture. The processor adopts an efficient clipless graphics pipeline, eliminating the hardware cost and performance loss caused by clipping. In addition, an IEEE-754-compliant 3D vector dot-product (DP3) unit is designed for the fixed-function pipeline to improve performance, eliminate the error of floating-point multiply-add operations during rendering, and make graphics rendering more robust. The 3D graphics processor processes 500 M vertices and 8 G texels per second, consumes 1000 mW, and occupies 7.92 mm^2 in a 28 nm process. Implementation results show that, compared with previous work, the proposed graphics processor improves the performance-to-power ratio by 27.8%.
Keywords: 3D graphics processor; graphics pipeline; clipping; vector dot product
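The DP3 unit above fuses the multiply-adds of a 3-component dot product so intermediate results are not rounded separately. A scalar C illustration of that numerical idea using fused multiply-adds is below; it shows the arithmetic only, not the paper's IEEE-754 hardware datapath.

    #include <stdio.h>
    #include <math.h>

    /* 3-component dot product accumulated with fused multiply-adds, so each
     * product contributes with a single rounding, mimicking a fused DP3 unit. */
    static float dp3_fma(const float a[3], const float b[3]) {
        float acc = a[0] * b[0];
        acc = fmaf(a[1], b[1], acc);
        acc = fmaf(a[2], b[2], acc);
        return acc;
    }

    int main(void) {
        float a[3] = {1.0f, 2.0f, 3.0f};
        float b[3] = {4.0f, 5.0f, 6.0f};
        printf("dp3 = %g\n", dp3_fma(a, b)); /* 32 */
        return 0;
    }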
8. Design and Optimization of an LLVM Compiler for a Domestic High-Performance Accelerator
Authors: 宋强, 唐俊龙, 陈照云, 时洋, 谭期轩, 肖紫阳, 邹望辉. 《计算机工程》 (Computer Engineering; CAS, CSCD, PKU Core), 2024, Issue 4, pp. 321-331.
The high-performance accelerator independently developed by the National University of Defense Technology adopts an on-chip heterogeneous architecture that fuses a central processing unit (CPU) with a general-purpose digital signal processor (GPDSP); the GPDSP, a vectorized core combining very long instruction word (VLIW) and single instruction multiple data (SIMD) execution, is the accelerator core that provides most of the peak performance. Mainstream compilers do not support such accelerators well when scheduling dense data-computation instructions, statically assigning hardware execution units to instructions, or handling the GPDSP's special vector instructions. Based on the LLVM compilation framework, in the pre-register-allocation scheduling stage the cost model is optimized by combining the peak register pressure perception (PERP) method, the ant colony optimization (ACO) algorithm, and the structural characteristics of the GPDSP, producing an instruction scheduling module that is aware of register pressure. In the post-register-allocation stage, an instruction scheduling strategy supporting static functional-unit assignment is proposed, with a conflict detection mechanism to guarantee the correctness of the assignment and provide a software basis for parallel instruction execution. In the back end, a rich and regular set of vector instruction interfaces is packaged to support the GPDSP vector instructions. Experimental results show that the proposed LLVM optimizations support the GPDSP well in both functionality and performance: the average overall speedup is 4.539 on the GCC testsuite, 4.49 on the SPEC CPU 2017 floating-point tests, and 3.24 on the SPEC CPU 2017 integer tests, and vector programs written with the vector interfaces achieve an average performance improvement of 97.1%.
Keywords: general-purpose digital signal processor; LLVM; compiler; instruction scheduling; vector instruction interface
9. Vector-Table-Based Optimization of Ordinary Interrupts and NMI for a RISC-V Processor
Authors: 高嘉轩, 刘鸿瑾, 施博, 年嘉伟, 高鑫. 《微电子学与计算机》 (Microelectronics & Computer), 2024, Issue 4, pp. 112-122.
To address the excessively long interrupt response latency of Reduced Instruction Set Computer (RISC)-V processors with real-time requirements, this paper improves the way the jump address of the interrupt service routine is computed during interrupt response, extends the control registers used when responding to a non-maskable interrupt (NMI), and proposes hardware vectored interrupts together with NMI-related control-register extensions. Hardware vectored interrupts raise interrupt response speed and reduce response latency; the extended NMI control registers reduce NMI response latency and the context-saving work required of software. VCS simulation is used to verify the correctness and performance of the interrupt optimizations. Simulation results show that the hardware vectored interrupt response time is shortened by 84.4%, a 6x improvement in response speed, and that the extended NMI control registers save 31 clock cycles of response time and 32 clock cycles of return time.
Keywords: RISC-V; processor; interrupt optimization; vector table; control register; NMI
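Hardware vectored interrupts shorten the trap path by computing the handler address directly from the interrupt cause instead of dispatching in software. The sketch below shows the standard RISC-V mtvec vectored-mode address calculation (target = base + 4 x cause for interrupts) as plain C; the paper's NMI control-register extensions are not modeled and the example values are made up.

    #include <stdint.h>
    #include <stdio.h>

    /* RISC-V mtvec: the low two bits select the mode (0 = direct, 1 = vectored).
     * In vectored mode an interrupt jumps to base + 4 * cause. */
    static uint32_t vectored_target(uint32_t mtvec, uint32_t irq_cause) {
        uint32_t base = mtvec & ~0x3u;      /* strip the mode bits */
        if ((mtvec & 0x3u) == 1u)           /* vectored mode */
            return base + 4u * irq_cause;
        return base;                        /* direct mode: one entry point */
    }

    int main(void) {
        uint32_t mtvec = 0x80001001u;       /* base 0x80001000, vectored */
        printf("machine timer irq (cause 7) -> 0x%08x\n",
               (unsigned)vectored_target(mtvec, 7));
        return 0;
    }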
10. Optimized Implementation of an Image Median Filter Algorithm for the YHFT-M7002 Platform (cited by 1)
Authors: 王梦园, 柴晓楠, 陈云, 商建东. 《计算机应用与软件》 (Computer Applications and Software; PKU Core), 2023, Issue 9, pp. 205-210, 241.
With the wide use of FT-series processors in image processing, there is still no high-performance graphics and image processing library that fully exploits the advantages of the FT platform. To address this, after porting the OpenCV 2.4.9 image library to the FT-M7002 platform, an optimized implementation of the median filter algorithm for this platform is proposed. By analyzing the architectural characteristics of the FT-M7002 and the properties of the median filter, the program is optimized with manual vectorization, loop unrolling, double buffering, and other techniques, making full use of the platform's vector units and vector register resources to improve the data-level and instruction-level parallelism of the algorithm. Test results show that, compared with a serial implementation of the median filter, the optimized implementation achieves a 5x to 16x speedup while preserving correctness.
Keywords: median filter; high-performance processor; vectorization
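For reference on what is being vectorized above, a scalar 3x3 median filter in plain C is sketched below; the FT-M7002 version with manual vectorization, loop unrolling, and double buffering is not reproduced.

    #include <stdio.h>
    #include <string.h>

    /* 3x3 median filter over an 8-bit grayscale image; border pixels are copied. */
    static void median3x3(const unsigned char *src, unsigned char *dst,
                          int w, int h) {
        memcpy(dst, src, (size_t)w * h);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                unsigned char win[9];
                int k = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        win[k++] = src[(y + dy) * w + (x + dx)];
                /* insertion-sort the 9 samples and take the middle one */
                for (int i = 1; i < 9; i++) {
                    unsigned char v = win[i];
                    int j = i - 1;
                    while (j >= 0 && win[j] > v) { win[j + 1] = win[j]; j--; }
                    win[j + 1] = v;
                }
                dst[y * w + x] = win[4];
            }
        }
    }

    int main(void) {
        unsigned char img[9] = {10, 10, 10, 10, 200, 10, 10, 10, 10}, out[9];
        median3x3(img, out, 3, 3);
        printf("center after filtering: %d\n", out[4]); /* impulse removed: 10 */
        return 0;
    }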
11. An Automatic Kernel Code Generation Framework for the FT-Matrix Digital Signal Processor
Authors: 赵宵磊, 陈照云, 时洋, 文梅, 张春元. 《计算机研究与发展》 (Journal of Computer Research and Development; EI, CSCD, PKU Core), 2023, Issue 6, pp. 1232-1245.
Digital signal processors (DSPs) usually adopt very long instruction word (VLIW) and single instruction multiple data (SIMD) architectures to raise overall computing performance, which makes them suitable for high-performance computing, image processing, embedded systems, and other fields. The FT-Matrix digital processor, a high-performance general-purpose DSP independently developed by the National University of Defense Technology, delivers its peak computing performance only when the characteristics of the VLIW and SIMD architecture are fully exploited. Not only on the FT-Matrix series but on most processors, highly optimized kernel code and core library functions depend on low-level assembly tools or manual development; hand-writing kernel operators always costs a great deal of time and manpower to fully release the performance potential of the hardware, and expert-level assembly development is especially difficult on VLIW+SIMD processors. To address these problems, a high-performance automatic kernel code-generation framework for the FT-Matrix digital processor is proposed, which feeds the architectural features of FT-Matrix into a multi-level kernel code optimization method. The framework consists of three optimization components: adaptive loop tiling, scalar-vector cooperative automatic vectorization, and fine-grained instruction-level optimization. It automatically searches for the optimal loop tiling parameters according to the hardware memory hierarchy and the kernel's data layout, and further introduces scalar-vector cooperative vector-instruction selection and data arrangement to improve data reuse and parallelism during kernel execution. In addition, the framework provides an assembly-like intermediate representation on which various instruction-level optimizations can be applied to explore more instruction-level parallelism (ILP), and it offers modules for quickly attaching back ends for other hardware platforms and for adaptive code generation, enabling agile development of efficient kernel code. Experiments show that the kernel benchmark code generated by the framework reaches, on average, 3.25 times the performance of the hand-written library on the target DSP and 20.62 times that of kernel code written in plain vector C.
Keywords: kernel code generation; VLIW-SIMD; loop tiling; scalar-vector cooperation; digital signal processor
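Adaptive loop tiling, the first layer of the framework above, searches for block sizes that fit the memory hierarchy. The sketch below shows the kind of tiled loop nest that search operates on, using a generic matrix multiply; the TILE constant here is a placeholder, whereas the framework would choose it automatically from the hardware and data layout.

    #include <stdio.h>

    #define TILE 32  /* placeholder; the framework would search for this value */

    /* C += A * B with simple square loop tiling to improve data reuse. */
    static void matmul_tiled(int n, const float *A, const float *B, float *C) {
        for (int ii = 0; ii < n; ii += TILE)
            for (int kk = 0; kk < n; kk += TILE)
                for (int jj = 0; jj < n; jj += TILE)
                    for (int i = ii; i < n && i < ii + TILE; i++)
                        for (int k = kk; k < n && k < kk + TILE; k++) {
                            float a = A[i * n + k];
                            for (int j = jj; j < n && j < jj + TILE; j++)
                                C[i * n + j] += a * B[k * n + j];
                        }
    }

    int main(void) {
        enum { N = 2 };
        float A[N * N] = {1, 2, 3, 4}, B[N * N] = {5, 6, 7, 8}, C[N * N] = {0};
        matmul_tiled(N, A, B, C);
        printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]); /* 19 22; 43 50 */
        return 0;
    }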
12. Parallel Design and Implementation of the 3D-HEVC Disparity Estimation Algorithm on a Video Array Processor (cited by 1)
Authors: 蒋林, 冯茹. 《计算机应用与软件》 (Computer Applications and Software; PKU Core), 2023, Issue 7, pp. 260-265, 281.
The disparity estimation algorithm in 3D High Efficiency Video Coding (3D-HEVC) suffers from a large amount of data to process, long computation time, and heavy resource consumption, so further improving its execution efficiency matters greatly for the adoption of 3D-HEVC. After an in-depth analysis of the parallelism available in disparity estimation, a new parallel implementation scheme is proposed on the video array processor (DPR-CODEC) developed by the project team. The parallel mapping, functional simulation, and FPGA testing of the disparity estimation algorithm are completed on the reconfigurable array structure, significantly reducing the algorithm's execution time. Experimental results show that the proposed parallel implementation saves 59% of execution time compared with serial execution on a single PE, and that the parallel implementation on the programmable, reconfigurable array combines high execution efficiency with good flexibility.
Keywords: 3D-HEVC; parallelism; disparity vector; array processor
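Disparity estimation is essentially block matching between views, and each candidate disparity can be scored independently, which is what makes the parallel mapping above possible. A scalar sum-of-absolute-differences (SAD) search along one row pair is sketched below as background; the DPR-CODEC mapping itself is not shown and the block and search sizes are arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <limits.h>

    /* Best disparity for a block starting at bx: shift the block across the
     * reference row and keep the offset with the smallest SAD. Every candidate
     * d is independent, so candidates can be evaluated in parallel. */
    static int best_disparity(const unsigned char *cur, const unsigned char *ref,
                              int width, int bx, int bsize, int max_d) {
        int best_d = 0, best_sad = INT_MAX;
        for (int d = 0; d <= max_d && bx + bsize + d <= width; d++) {
            int sad = 0;
            for (int i = 0; i < bsize; i++)
                sad += abs((int)cur[bx + i] - (int)ref[bx + d + i]);
            if (sad < best_sad) { best_sad = sad; best_d = d; }
        }
        return best_d;
    }

    int main(void) {
        unsigned char cur[8] = {10, 20, 30, 40, 0, 0, 0, 0};
        unsigned char ref[8] = {0, 0, 10, 20, 30, 40, 0, 0};
        printf("disparity = %d\n", best_disparity(cur, ref, 8, 0, 4, 3)); /* 2 */
        return 0;
    }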
13. System Architecture of Godson-3 Multi-Core Processors (cited by 7)
Authors: 高翔, 陈云霁, 王焕东, 唐丹, 胡伟武. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2010, Issue 2, pp. 181-191.
Godson-3 is the latest generation of the Godson microprocessor family. It takes a scalable multi-core architecture with hardware support for accelerating applications including X86 emulation and signal processing. This paper introduces the system architecture of Godson-3 from various aspects including system scalability, organization of the memory hierarchy, network-on-chip, inter-chip connection, and the I/O subsystem.
Keywords: multi-core processor; scalable interconnection; cache coherent non-uniform memory access / non-uniform cache access (CC-NUMA/NUCA); mesh; crossbar; cache coherence; reliability, availability and serviceability (RAS)
14. A Quantitative Evaluation of Vector Transcendental Functions on ARMv8-Based Processors
Authors: 沈洁, 龙标, 黄春. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, Issue 3, pp. 686-701.
Transcendental functions are important in various high performance computing applications. Because these functions are time-consuming and the vector units on modern processors become wider and more scalable, there is an increasing demand for developing and using vector transcendental functions in such performance-hungry applications. However, the performance of vector transcendental functions as well as their accuracy remain largely unexplored. To address this issue, we perform a comprehensive evaluation of two Single Instruction Multiple Data (SIMD) intrinsics based vector math libraries on two ARMv8 compatible processors. We first design dedicated microbenchmarks that help us understand the performance behavior of vector transcendental functions. Then, we propose a piecewise, quantitative evaluation method with a set of meaningful metrics to quantify their performance and accuracy. By analyzing the experimental results, we find that vector transcendental functions achieve good performance speedups thanks to vectorization and algorithm optimization. Moreover, vector math libraries can replace scalar math libraries in many cases because of improved performance and satisfactory accuracy. Despite this, the implementations of vector math libraries are still immature, which means further optimization is needed, and our evaluation reveals feasible optimization solutions for future vector math libraries.
Keywords: transcendental function; vector math library; piecewise quantitative evaluation; microbenchmarking; ARMv8-based processor
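Accuracy results for vector math libraries are usually reported in ULPs (units in the last place). As a generic illustration of how a ULP distance can be measured (a common technique, not necessarily the paper's evaluation harness), the sketch below compares two same-sign floats through their IEEE-754 bit patterns.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    /* ULP distance between two finite floats of the same sign: IEEE-754 bit
     * patterns of same-sign floats are ordered like integers, so the integer
     * difference counts the representable values between them. */
    static uint32_t ulp_distance(float a, float b) {
        int32_t ia, ib;
        memcpy(&ia, &a, sizeof ia);
        memcpy(&ib, &b, sizeof ib);
        return (uint32_t)(ia > ib ? ia - ib : ib - ia);
    }

    int main(void) {
        float ref = expf(1.0f);                      /* reference value */
        float approx = nextafterf(ref, 2.0f * ref);  /* exactly one ULP away */
        printf("ulp distance = %u\n", (unsigned)ulp_distance(ref, approx)); /* 1 */
        return 0;
    }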
15. Parallel computing of discrete element method on multi-core processors (cited by 6)
Authors: Yusuke Shigeto, Mikio Sakai. Particuology (SCIE, EI, CAS, CSCD), 2011, Issue 4, pp. 398-405.
This paper describes parallel simulation techniques for the discrete element method (DEM) on multi-core processors. Recently, multi-core CPU and GPU processors have attracted much attention for accelerating computer simulations in various fields. We propose a new algorithm for multi-thread parallel computation of DEM, which makes effective use of the available memory and accelerates the computation. This study shows that memory usage is drastically reduced by using this algorithm. To show the practical use of DEM in industry, a large-scale powder system is simulated with a complicated drive unit. We compared the performance of the simulation between the latest GPU and CPU processors with optimized programs for each processor. The results show that the difference in performance is not substantial when using either GPUs or CPUs with a multi-thread parallel algorithm. In addition, the DEM algorithm is shown to have high scalability in multi-thread parallel computation on a CPU.
Keywords: discrete element method; parallel computing; multi-core processor; GPGPU
16. Optimization of a Vector Function Library for the Domestic PuDianNao Chip
Authors: 杨指政, 杜子东, 文渊博. 《郑州大学学报(工学版)》 (Journal of Zhengzhou University, Engineering Science; CAS, PKU Core), 2023, Issue 1, pp. 31-37.
At present, vector math functions on the domestic artificial intelligence processor PuDianNao can only be implemented by calling scalar functions in a loop, which performs poorly. Three optimization methods are proposed for the PuDianNao chip: the first is interpolation; the second is SIMD with masking; the third builds on PuDianNao's hardware array structure, drives every processing element of the array with VLIW instructions, wraps this into an SIMT programming model, and introduces a programming method that exposes branch ranges and flattens branches. Accuracy and performance tests of the three methods show that the third has the best accuracy and performance. Using the third method, a vector math library for the PuDianNao chip, PuDianNao-VecMath, is implemented, solving the difficulty of vectorizing the multi-branch structure of math functions. The library has good accuracy and performance, stable functionality, and correct operation, and its interfaces include rounding functions, transcendental functions, comparison functions, activation functions, and other common basic math library functions. For accuracy, all values in each function's domain interval are used as input and the results are compared with those of the scalar functions running on an i7 CPU: the single-precision version has a maximum ULP error of 2 and the half-precision version a maximum of 1. Performance improves considerably over the scalar loop: the single-precision version achieves an average speedup of 18.26 over the scalar loop with a maximum of 35.90, and the half-precision version an average speedup of 15.65 with a maximum of 30.11.
Keywords: vectorized functions; PuDianNao-VecMath; domestic artificial intelligence processor; branch range exposure and branch flattening
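Branch flattening, the technique named above, replaces data-dependent branches with arithmetic on a mask so that all SIMD/SIMT lanes follow a single path. The scalar C illustration below shows the transformation on a made-up piecewise function; the VLIW/SIMT packaging on PuDianNao is not represented.

    #include <stdio.h>

    /* Branchy version: each input takes one of two control-flow paths. */
    static float piecewise_branch(float x) {
        if (x < 0.0f) return -x;       /* path A */
        return x * x;                  /* path B */
    }

    /* Flattened version: compute both paths, then blend them with a mask.
     * On vector hardware the mask becomes a per-lane predicate or select. */
    static float piecewise_flat(float x) {
        float path_a = -x;
        float path_b = x * x;
        float mask = (x < 0.0f) ? 1.0f : 0.0f;   /* becomes a vector compare */
        return mask * path_a + (1.0f - mask) * path_b;
    }

    int main(void) {
        float xs[4] = {-2.0f, -0.5f, 0.5f, 3.0f};
        for (int i = 0; i < 4; i++)
            printf("x=%5.2f branch=%6.2f flat=%6.2f\n",
                   xs[i], piecewise_branch(xs[i]), piecewise_flat(xs[i]));
        return 0;
    }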
17. Experimental Design of Improved Linear Active Disturbance Rejection Vector Control for a Motor Based on the DSP28379
Authors: 盛春阳, 王庆辉, 宋诗斌, 苏涛. 《实验室研究与探索》 (Research and Exploration in Laboratory; CAS, PKU Core), 2023, Issue 3, pp. 77-82.
To improve the response speed of AC motors and their ability to suppress mid-frequency disturbances, an experimental platform for improved linear active disturbance rejection vector control of a three-phase motor was designed and built around the TMS320F28379D. The hardware uses the dual-core DSP chip as the control core with a 200 MHz system clock; a highly integrated intelligent power module drives the motor without opto-isolated gate drives; a resolver measures the motor speed and rotor position, and a companion decoder chip provides 16-bit high-precision measurement. The software starts from the conventional linear active disturbance rejection control (LADRC) algorithm, changes the disturbance feedback channel, and adds a low-pass filter in the error feedback channel, yielding an improved LADRC algorithm that is verified experimentally on the physical platform. The results show that, compared with conventional PI control and LADRC, the improved algorithm speeds up the step response by 25% and 18%, reduces the speed dip caused by an instantaneous disturbance by factors of 4 and 2, and shortens the speed recovery time by 85% and 70%, respectively, greatly improving the static and dynamic characteristics of the system.
Keywords: vector control; digital signal processor; intelligent power module; active disturbance rejection control
18. Energy Efficiency of a Multi-Core Processor by Tag Reduction
Authors: 郑龙, 董冕雄, Kaoru Ota, 金海, Song Guo, 马俊. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2011, Issue 3, pp. 491-503.
We consider the energy saving problem for caches on a multi-core processor. In previous research on low power processors, there are various methods to reduce power dissipation; tag reduction is one of them. This paper extends the tag reduction technique on a single-core processor to a multi-core processor and investigates the potential of energy saving for multi-core processors. We formulate our approach as an equivalent problem, which is to find an assignment of the whole set of instruction pages in physical memory to a set of cores such that the tag-reduction conflicts for each core can be mostly avoided or reduced. We then propose three algorithms using different heuristics for this assignment problem. We provide convincing experimental results by collecting experimental data from a real operating system instead of the traditional way of using a processor simulator that cannot simulate operating system functions and the full memory hierarchy. Experimental results show that our proposed algorithms can save total energy up to 83.93% on an 8-core processor and 76.16% on a 4-core processor on average, compared to the case where tag reduction is not used. They also significantly outperform the tag reduction based algorithm on a single-core processor.
Keywords: tag reduction; multi-core processor; energy efficiency
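The abstract above casts tag reduction as assigning instruction pages to cores so that per-core conflicts are minimized, solved with three heuristics that are not detailed here. As a loose illustration of that kind of assignment problem only, the sketch below uses a simple greedy rule, placing each page on the core whose assumed conflict set is currently least loaded; both the conflict model and the greedy rule are assumptions, not any of the paper's three algorithms.

    #include <stdio.h>

    #define NPAGES 8
    #define NCORES 4
    #define NSETS  4   /* assumed: pages mapping to the same set conflict */

    /* Greedy heuristic: give each page to the core where its set currently
     * holds the fewest pages, i.e. where it adds the least conflict. */
    static void assign_pages(const int page_set[NPAGES], int core_of[NPAGES]) {
        int load[NCORES][NSETS] = {{0}};
        for (int p = 0; p < NPAGES; p++) {
            int best = 0;
            for (int c = 1; c < NCORES; c++)
                if (load[c][page_set[p]] < load[best][page_set[p]])
                    best = c;
            core_of[p] = best;
            load[best][page_set[p]]++;
        }
    }

    int main(void) {
        int page_set[NPAGES] = {0, 0, 1, 0, 2, 1, 3, 2};
        int core_of[NPAGES];
        assign_pages(page_set, core_of);
        for (int p = 0; p < NPAGES; p++)
            printf("page %d -> core %d\n", p, core_of[p]);
        return 0;
    }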
19. Schedule refinement for homogeneous multi-core processors in the presence of manufacturing-caused heterogeneity
Authors: Zhi-xiang CHEN, Zhao-lin LI, Shan CAO, Fang WANG, Jie ZHOU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2015, Issue 12, pp. 1018-1033.
Multi-core homogeneous processors have been widely used to deal with computation-intensive embedded applications. However, with the continuous down-scaling of CMOS technology, within-die variations in the manufacturing process lead to a significant spread in the operating speeds of cores within homogeneous multi-core processors. Task scheduling approaches that do not consider such heterogeneity caused by within-die variations can lead to an overly pessimistic result in terms of performance. To realize optimal performance according to the actual maximum clock frequencies at which cores can run, we present a heterogeneity-aware schedule refining (HASR) scheme that fully exploits the heterogeneities of homogeneous multi-core processors in embedded domains. We analyze and show how the actual maximum frequencies of cores are used to guide the scheduling. In the scheme, representative chip operating points are selected and the corresponding optimal schedules are generated as candidate schedules. During the booting of each chip, according to the actual maximum clock frequencies of the cores, one of the candidate schedules is bound to the chip to maximize performance. A set of applications is designed to evaluate the proposed scheme. Experimental results show that the proposed scheme can improve performance by an average of 22.2%, compared with the baseline schedule based on worst-case timing analysis. Compared with the conventional task scheduling approach based on the actual maximum clock frequencies, the proposed scheme also improves performance by up to 12%.
Keywords: schedule refining; multi-core processor; heterogeneity; representative chip operating point
20. Thread Private Variable Access Optimization Technique for Sunway High-Performance Multi-core Processors
Authors: Jinying Kong, Kai Nie, Qinglei Zhou, Jinlong Xu, Lin Han. 《国际计算机前沿大会会议论文集》, 2021, Issue 1, pp. 180-189.
The primary way to achieve thread-level parallelism on the Sunway high-performance multi-core processor is to use the OpenMP programming technique. To address the problem of low parallelism efficiency caused by slow access to thread private variables in the compilation of Sunway OpenMP programs, this paper proposes a thread private variable access technique based on privileged instructions. The privileged-instruction-based thread-private variable access technique centralizes the implementation of thread-private variables at the compiler level, eliminating the mode-switching overhead of invoking OS core processing and improving the speed of accessing thread-private variables. On the Sunway 1621 server platform, NPB3.3-OMP and SPEC OMP2012 achieved 6.2% and 6.8% running efficiency gains, respectively. The results show that the techniques proposed in this paper can provide technical support for giving full play to the advantages of Sunway's high-performance multi-core processors.
Keywords: Sunway high-performance multi-core processors; OpenMP programming technique; privileged-instruction-based thread-private variable access; Sunway 1621 processor
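The optimization above targets how each OpenMP thread locates its private copy of a variable on Sunway cores. For context, a standard OpenMP threadprivate example in C is shown below; it only illustrates what "thread private variable access" means at the source level, and the privileged-instruction implementation inside the Sunway compiler is not something this sketch reproduces.

    #include <stdio.h>
    #include <omp.h>

    /* Each thread gets its own copy of 'counter'; how the compiler and runtime
     * locate that copy (e.g. through a thread pointer or an OS call) is exactly
     * the access path being optimized. */
    static int counter = 0;
    #pragma omp threadprivate(counter)

    int main(void) {
        #pragma omp parallel num_threads(4)
        {
            counter = omp_get_thread_num();   /* store into the per-thread copy */
            printf("thread %d sees counter = %d\n", omp_get_thread_num(), counter);
        }
        return 0;
    }

Compile with an OpenMP-capable compiler (e.g. gcc -fopenmp); each thread then prints its own value.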