Journal Articles
4 articles found
Design of a memory polynomial predistorter for wideband envelope tracking amplifiers (Cited: 5)
1
Authors: Jing Zhang, Songbai He, Lu Gan. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2011, No. 2, pp. 193-199 (7 pages)
Efficiency and linearity of the microwave power amplifier are critical elements for mobile communication systems. A memory polynomial baseband predistorter based on an indirect learning architecture is presented for improving the linearity of an envelope tracking (ET) amplifier with application to a wireless transmitter. To deal with the large peak-to-average ratio (PAR) problem, a clipping procedure for the input signal is employed. The system performance is then verified by simulation results. For a single-carrier wideband code division multiple access (WCDMA) signal with 16-quadrature amplitude modulation (16-QAM), about 2% improvement of the error vector magnitude (EVM) is achieved at an average output power of 45.5 dBm and a gain of 10.6 dB, with an adjacent channel leakage ratio (ACLR) of -64.55 dBc at an offset frequency of 5 MHz. Moreover, a three-carrier WCDMA signal and a third-generation (3G) long term evolution (LTE) signal are used as test signals to demonstrate the performance of the proposed linearization scheme under signals of different bandwidths.
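The memory polynomial and indirect learning loop mentioned in the abstract follow a standard formulation; the sketch below is a minimal numpy illustration of that formulation, not the paper's implementation. The function names, the choice of K polynomial orders and Q memory taps, and the assumption that the PA output has already been gain-normalized (and that the clipping step has been applied beforehand) are all illustrative.

```python
import numpy as np

def memory_polynomial(x, coeffs, K, Q):
    """Apply a memory polynomial: sum over memory taps q and polynomial
    orders k of coeffs[k-1, q] * x[n-q] * |x[n-q]|**(k-1)."""
    N = len(x)
    y = np.zeros(N, dtype=complex)
    for q in range(Q + 1):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:N - q]])  # delay by q samples
        for k in range(1, K + 1):
            y += coeffs[k - 1, q] * xq * np.abs(xq) ** (k - 1)
    return y

def fit_postdistorter(pa_out, pa_in, K, Q):
    """Indirect learning: least-squares fit of a postdistorter that maps the
    (gain-normalized) PA output back to the PA input; the fitted coefficients
    are then copied into the predistorter placed in front of the PA."""
    N = len(pa_out)
    cols = []
    for q in range(Q + 1):
        yq = np.concatenate([np.zeros(q, dtype=complex), pa_out[:N - q]])
        for k in range(1, K + 1):
            cols.append(yq * np.abs(yq) ** (k - 1))
    Phi = np.stack(cols, axis=1)                      # basis matrix, one column per (q, k) term
    c, *_ = np.linalg.lstsq(Phi, pa_in, rcond=None)   # complex least squares
    return c.reshape(Q + 1, K).T                      # reorder to coeffs[k-1, q]
```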
Keywords: envelope tracking; memory polynomial predistorter; indirect learning architecture; power amplifier; memory effects
Approximate Similarity-Aware Compression for Non-Volatile Main Memory
2
Authors: 陈章玉, 华宇, 左鹏飞, 孙园园, 郭云程. Journal of Computer Science & Technology, SCIE EI CSCD, 2024, No. 1, pp. 63-81 (19 pages)
Image bitmaps, i.e., data containing pixels and visual perception, have been widely used in emerging applications for pixel operations while consuming lots of memory space and energy. Compared with legacy DRAM (dynamic random access memory), non-volatile memories (NVMs) are suitable for bitmap storage due to the salient features of high density and intrinsic durability. However, writing NVMs suffers from higher energy consumption and latency compared with read accesses. Existing precise or approximate compression schemes in NVM controllers show limited performance for bitmaps due to the irregular data patterns and variance in bitmaps. We observe the pixel-level similarity when writing bitmaps due to the analogous contents in adjacent pixels. By exploiting the pixel-level similarity, we propose SimCom, an approximate similarity-aware compression scheme in the NVM module controller, to efficiently compress data for each write access on-the-fly. The idea behind SimCom is to compress continuous similar words into pairs of base words with runs. The storage costs for small runs are further mitigated by reusing the least significant bits of base words. SimCom adaptively selects an appropriate compression mode for various bitmap formats, thus achieving an efficient trade-off between quality and memory performance. We implement SimCom on GEM5/zsim with NVMain and evaluate the performance with real-world image/video workloads. Our results demonstrate the efficacy and efficiency of SimCom with an efficient quality-performance trade-off.
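As a rough illustration of the base-plus-run idea summarized above, the following sketch folds consecutive similar words into (base word, run length) pairs. The numeric similarity threshold, the word granularity, and the omission of the paper's LSB-reuse and per-format mode selection are assumptions made purely for illustration.

```python
def compress_similar_runs(words, tolerance):
    """Fold consecutive words that stay within `tolerance` of the current
    base word into a single (base, run_length) pair (lossy by design)."""
    pairs = []                # encoded output: list of (base_word, run_length)
    base, run = None, 0
    for w in words:
        if base is not None and abs(w - base) <= tolerance:
            run += 1          # similar enough: extend the current run
        else:
            if base is not None:
                pairs.append((base, run))
            base, run = w, 1  # start a new run with this word as the base
    if base is not None:
        pairs.append((base, run))
    return pairs

def decompress(pairs):
    """Reconstruct an approximation: every word in a run becomes its base word."""
    out = []
    for base, run in pairs:
        out.extend([base] * run)
    return out

# Example: 8-bit grayscale pixel values with a similarity threshold of 2
# compress_similar_runs([120, 121, 122, 200, 201], tolerance=2)
# -> [(120, 3), (200, 2)]
```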
Keywords: approximate computing; data compression; memory architecture; non-volatile memory
A compact PE memory for vision chips
3
Authors: 石匆, 陈哲, 杨杰, 吴南健, 王志华. Journal of Semiconductors, EI CAS CSCD, 2014, No. 9, pp. 104-110 (7 pages)
This paper presents a novel compact memory in the processing element (PE) for single-instruction multiple-data (SIMD) vision chips. The PE memory is constructed with 8×8 register cells, where one latch in the slave stage is shared by eight latches in the master stage. The memory supports simultaneous read and write on the same address in one clock cycle. Its compact area of 14.33 μm²/bit promises a higher integration level of the processor. A prototype chip with a 64×64 PE array is fabricated in a UMC 0.18 μm CMOS technology. Five types of PE memory cell structures are designed and compared. The testing results demonstrate that the proposed PE memory architecture well satisfies the requirements of the vision chip in high-speed real-time vision applications, such as 1000 fps edge extraction.
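The register organization described above is a circuit-level design; the short behavioral sketch below only models the externally visible property that a read and a write to the same address can share one clock cycle, assuming read-before-write ordering. The class name, the treatment of the 8×8 geometry as rows of 8 bits, and the ordering assumption are illustrative, not taken from the paper's schematics.

```python
class PEMemorySketch:
    """Behavioral model of an 8x8-bit PE register memory: one read and one
    write may target the same address in a single cycle, with the read
    returning the value held before that cycle's write commits."""

    def __init__(self, rows=8, cols=8):
        self.rows, self.cols = rows, cols
        self.bits = [[0] * cols for _ in range(rows)]

    def cycle(self, read_addr, write_addr=None, write_row=None):
        """One clock cycle: sample the row at read_addr, then commit the
        optional write so it becomes visible from the next cycle on."""
        read_value = list(self.bits[read_addr])      # read the old (slave-stage) value
        if write_addr is not None:
            assert len(write_row) == self.cols
            self.bits[write_addr] = list(write_row)  # update takes effect for later cycles
        return read_value

# mem = PEMemorySketch()
# old = mem.cycle(read_addr=3, write_addr=3, write_row=[1] * 8)  # old == [0] * 8
# new = mem.cycle(read_addr=3)                                   # new == [1] * 8
```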
Keywords: vision chip; PE memory architecture; SIMD; edge extraction
A Study on Modeling and Optimization of Memory Systems
4
Authors: Jason Liu, Pedro Espina, Xian-He Sun. Journal of Computer Science & Technology, SCIE EI CSCD, 2021, No. 1, pp. 71-89 (19 pages)
Accesses Per Cycle (APC), Concurrent Average Memory Access Time (C-AMAT), and Layered Performance Matching (LPM) are three memory performance models that consider both data locality and memory access concurrency. The APC model measures the throughput of a memory architecture and therefore reflects the quality of service (QoS) of a memory system. The C-AMAT model provides a recursive expression for the memory access delay and therefore can be used for identifying the potential bottlenecks in a memory hierarchy. The LPM method transforms a global memory system optimization into localized optimizations at each memory layer by matching the data access demands of the applications with the underlying memory system design. These three models have been proposed separately through prior efforts. This paper reexamines the three models under one coherent mathematical framework. More specifically, we present a new memory-centric view of data accesses. We divide the memory cycles at each memory layer into four distinct categories and use them to recursively define the memory access latency and concurrency along the memory hierarchy. This new perspective offers new insights with a clear formulation of memory performance considering both locality and concurrency. Consequently, the performance model can be easily understood and applied in engineering practices. As such, the memory-centric approach helps establish a unified mathematical foundation for model-driven performance analysis and optimization of contemporary and future memory systems.
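For quick reference, the sketch below writes out the headline definitions behind two of the models named above: APC as accesses completed per memory-active cycle, and C-AMAT as its reciprocal, alongside the classic AMAT formula they generalize. The recursive per-layer decomposition and the LPM matching procedure developed in the paper are not reproduced here.

```python
def apc(total_accesses, memory_active_cycles):
    """Accesses Per Cycle: throughput of the memory system, counted only
    over cycles in which the memory system is actively servicing accesses."""
    return total_accesses / memory_active_cycles

def c_amat(total_accesses, memory_active_cycles):
    """Concurrent AMAT: average memory-active cycles charged per access,
    i.e., the reciprocal of APC, so overlapped accesses share their cycles."""
    return memory_active_cycles / total_accesses

def classic_amat(hit_time, miss_rate, miss_penalty):
    """Classic AMAT = hit time + miss rate * miss penalty; unlike C-AMAT it
    gives no credit for concurrent (overlapped) accesses."""
    return hit_time + miss_rate * miss_penalty
```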
Keywords: performance modeling; performance optimization; memory architecture; memory hierarchy; concurrent average memory access time