Journal Articles
2 articles found
1. Analysis and Application of Multiple-Precision Computation and Round-off Error for Nonlinear Dynamical Systems (Cited by: 4)
Authors: 王鹏飞, 黄刚, 王在志. 《Advances in Atmospheric Sciences》 SCIE CAS CSCD, 2006, No. 5, pp. 758-766 (9 pages)
This research reveals the dependency of floating-point computation in nonlinear dynamical systems on machine precision and step-size by applying a multiple-precision approach to the Lorenz nonlinear equations. The paper also demonstrates the procedures for obtaining a real numerical solution of the Lorenz system under long-time integration, and a new multiple-precision-based approach used to identify the maximum effective computation time (MECT) and optimal step-size (OS). In addition, the authors show how to analyze round-off error in a long-time integration for some typical cases of nonlinear systems and present an approximate estimate expression for it.
Keywords: multiple-precision, numerical calculation, round-off error, nonlinear dynamical system
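The following is a minimal sketch, not the authors' code, of the kind of experiment the abstract describes: the Lorenz equations are integrated with a fixed-step RK4 scheme at two different working precisions, and the point where the two trajectories stop agreeing bounds how long the lower-precision result can be trusted. The parameters (sigma=10, rho=28, beta=8/3), the initial condition (1, 1, 1), the step-size, and the use of mpmath for the multiple-precision arithmetic are illustrative assumptions, not details taken from the paper.

```python
from mpmath import mpf, workdps

def rk4_lorenz(steps, h, dps):
    """Integrate the Lorenz system with classical RK4 using `dps` decimal digits."""
    with workdps(dps):
        sigma, rho, beta = mpf(10), mpf(28), mpf(8) / 3
        h = mpf(h)

        def f(x, y, z):
            # Lorenz right-hand side: dx/dt, dy/dt, dz/dt
            return sigma * (y - x), x * (rho - z) - y, x * y - beta * z

        x, y, z = mpf(1), mpf(1), mpf(1)  # assumed initial condition
        for _ in range(steps):
            k1 = f(x, y, z)
            k2 = f(x + h * k1[0] / 2, y + h * k1[1] / 2, z + h * k1[2] / 2)
            k3 = f(x + h * k2[0] / 2, y + h * k2[1] / 2, z + h * k2[2] / 2)
            k4 = f(x + h * k3[0], y + h * k3[1], z + h * k3[2])
            x += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
            y += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
            z += h * (k1[2] + 2 * k2[2] + 2 * k3[2] + k4[2]) / 6
        return x, y, z

# Compare a roughly double-precision run against a much higher-precision reference;
# once the two disagree in their leading digits, round-off error dominates the result.
low = rk4_lorenz(steps=5000, h="0.01", dps=16)
ref = rk4_lorenz(steps=5000, h="0.01", dps=60)
print("x(t=50) with 16 digits:", low[0])
print("x(t=50) with 60 digits:", ref[0])
```

Repeating such runs with larger `dps` or smaller `h` pushes the divergence point later, which is essentially how a maximum effective computation time and an optimal step-size can be read off.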
2. MW-DLA: a dynamic bit width deep learning accelerator (Cited by: 1)
Authors: Li Zhen, Zhi Tian, Liu Enhe, Liu Shaoli, Chen Tianshi. 《High Technology Letters》 EI CAS, 2020, No. 2, pp. 145-151 (7 pages)
Deep learning algorithms are the basis of many artificial intelligence applications. These algorithms are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Thus, various deep learning accelerators (DLAs) have been proposed and applied to achieve better performance and lower power consumption. However, most deep learning accelerators are unable to support multiple data formats. This research proposes MW-DLA, a deep learning accelerator supporting dynamically configurable data widths. This work analyzes the data distribution of different data types in different layers and trains a typical network with per-layer representation. As a result, the proposed MW-DLA achieves 2X performance and reduces the memory requirement by more than 50% for AlexNet, with less than 5.77% area overhead.
Keywords: deep learning accelerator (DLA), per-layer representation, multiple-precision arithmetic unit
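Below is a minimal sketch, in software rather than hardware, of the per-layer representation idea the abstract mentions: each layer's weights get their own bit width matched to the spread of their values, so narrower layers cost less storage (and, on an accelerator, narrower arithmetic). The bit-width choices, the toy weight distributions, and the 99.9th-percentile clipping rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def quantize_per_layer(weights, bits):
    """Symmetric uniform quantization of one layer's weights to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.percentile(np.abs(weights), 99.9) / qmax   # clip rare outliers
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

# Toy "network": layers with different value spreads get different bit widths.
rng = np.random.default_rng(0)
layers = {
    "conv1": (rng.normal(0, 0.20, 10_000), 8),
    "conv2": (rng.normal(0, 0.05, 10_000), 4),
    "fc":    (rng.normal(0, 0.02, 10_000), 4),
}

total_bits = 0
for name, (w, bits) in layers.items():
    q, scale = quantize_per_layer(w, bits)
    err = np.mean(np.abs(w - q * scale))
    total_bits += q.size * bits
    print(f"{name}: {bits}-bit, mean abs quantization error {err:.5f}")

print("total weight storage:", total_bits / 8 / 1024, "KiB (vs",
      sum(w.size for w, _ in layers.values()) * 32 / 8 / 1024, "KiB at fp32)")
```

Running this prints the per-layer quantization error and a total storage figure well under half of the fp32 baseline, which mirrors the kind of memory saving the abstract reports.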