
智能视觉芯片　Cited by: 1

Smart vision chip
Abstract　A smart vision chip (vision chip) is a system-on-chip that integrates a vision sensor, an energy-efficient processor, and memory; it has broad application value in autonomous driving, security surveillance, robot vision, and other fields. Silicon CMOS technology can realize large-scale vision sensors as well as large-scale memories and processors, so on a silicon CMOS platform, hybrid optoelectronic/microelectronic integration can be used to build vision chips that fuse sensing, storage, and computing, overcoming the bottleneck of serial image transfer and processing in conventional vision systems, effectively reducing processing latency, and raising the level of intelligence. Building a vision chip requires breakthroughs in several key areas: high-performance image sensor design, energy-efficient vision processor design, vision processing algorithms, and the integration of sensor, memory, and processor. This paper first introduces the concept and architecture of vision chips, then reviews the state of the art of each key technology, and finally outlines future development directions.

Vision chips fuse image sensors, processors, and data storage into a tightly coupled system. Data from the image sensor can be processed in situ by the processor, which lowers latency and power consumption. Meanwhile, the sensor can be evaluated continuously and reconfigured in real time, so that it always operates in an appropriate state. Designing a vision chip is a nontrivial task that requires a good understanding of the image sensor, the processor, the algorithms, and the integration technology. Vision chips can be multibit-based or spiking-based, according to the type of sensor and processor integrated. The algorithm running on the vision chip can be a classical computational algorithm with handcrafted features or a deep neural network that relies on training. The CMOS image sensor is the best candidate for building vision chips; the basic principle behind it is the photoelectric effect in silicon, with the photodiode as the fundamental sensing device. The inventions of the pinned photodiode and the 4-T pixel topology were two milestones that greatly reduced dark current and reset noise and improved image quality. Image sensor architectures include column-parallel, chip-parallel, and pixel-parallel fashions, among which column-parallel architectures have become very popular because of their good balance between complexity and readout parallelism. Currently, image sensors are developing toward higher resolution, higher frame rate, higher dynamic range, 3-D imaging, and multispectral vision, which significantly increases the data volume. To reduce the data burden, spiking image sensors were developed that output spike maps instead of gray images. To process the image data in situ, two kinds of vision processors can be adopted. For gray images, a multibit-based architecture is needed; it has evolved from application-specific designs to flexible programmable designs, and programmable processors are now mainstream owing to their high flexibility. To process images intelligently, convolutional neural networks (CNNs) are now widely adopted. However, CNNs are both computation- and storage-intensive, so processing them efficiently on chip requires both hardware and software optimization. Hardware-wise, parallel computing is the basis for handling intensive computation, and exploiting quantization, sparsity, and data reuse is very useful for reducing computational complexity and power consumption. For spiking images, a spiking-based processor is preferred. Some spiking processors employ a brain-like crossbar topology; although this achieves low power consumption, the hardware cost grows when dealing with complex algorithms. Others adopt a time-multiplexing approach to obtain a cost-effective spiking processor. Software-wise, classical computer vision (CV) algorithms use handcrafted features and perform well in specific applications, whereas CNNs show more robust and accurate performance. As CNNs need more computational capacity, neural network pruning and effective quantization are important for reducing the computational burden. For the various spiking-based processors, a neural network running on them can be built up using transfer learning. To connect the image sensor and the image processor, planar integration or 3-D integration can be adopted. Compared with planar integration, 3-D integration allows the sensor and the processor to each be optimized with a suitable fabrication technology and also allows more memory to be packed on chip. In the future, mixed-signal processing and computing-in-memory techniques could be employed for more efficient computation, and novel 2-D materials may open new paths toward sensing-storage-computing fusion in a single device.
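The abstract points to quantization, sparsity (pruning), and data reuse as the main levers for running CNNs efficiently on a vision chip. As a minimal illustrative sketch only (not code from the paper; the kernel shape, the 70% sparsity target, and the symmetric 8-bit scheme are assumptions chosen for demonstration), the following NumPy snippet shows magnitude-based weight pruning followed by post-training int8 quantization:

```python
# Illustrative sketch of two CNN optimizations mentioned in the abstract:
# magnitude-based weight pruning and symmetric 8-bit post-training quantization.
# All shapes and thresholds are hypothetical; NumPy only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 convolution kernel bank: 16 output channels, 8 input channels.
weights = rng.normal(0.0, 0.1, size=(16, 8, 3, 3)).astype(np.float32)

# --- Pruning: zero out the smallest-magnitude weights (target ~70% sparsity). ---
sparsity = 0.7
threshold = np.quantile(np.abs(weights), sparsity)
pruned = np.where(np.abs(weights) >= threshold, weights, np.float32(0.0))
print(f"sparsity achieved: {np.mean(pruned == 0.0):.2%}")

# --- Quantization: map float32 weights to int8 with one per-tensor scale. ---
scale = np.max(np.abs(pruned)) / 127.0          # symmetric range [-127, 127]
q_weights = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)

# On chip, the int8 weights would feed integer multiply-accumulate units; here we
# only dequantize to check the error introduced by the 8-bit mapping.
dequant = q_weights.astype(np.float32) * scale
print(f"max abs quantization error: {np.max(np.abs(dequant - pruned)):.4f}")
```

In practice the pruned, quantized weights would also be stored in a compressed (sparse) format so that on-chip memory traffic and multiply-accumulate operations both shrink, which is the point the abstract makes about reducing computational complexity and power consumption.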
Authors　Liyuan Liu (刘力源), Peng Feng (冯鹏), Xu Yang (杨旭), Shuangming Yu (于双铭), Runjiang Dou (窦润江), Jian Liu (刘剑), Nanjian Wu (吴南健) (State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China; College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing 100049, China)
Source　Chinese Science Bulletin (《科学通报》), 2023, Issue 35: 4844-4861 (18 pages). Indexed in EI, CAS, CSCD, and the PKU Core journal list.
Funding　Supported by the National Key R&D Program of China (2019YFB2204300), the Beijing Natural Science Foundation (Z220005), and the Beijing Municipal Science and Technology Project (Z221100007722028).
Keywords　vision chip; CMOS image sensor; image processor; edge computing; fusion of sensing, storage, and computing