Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. U20A20227, 62076208, and 62076207), the Chongqing Talent Plan "Contract System" Project (Grant No. CQYC20210302257), the National Key Laboratory of Smart Vehicle Safety Technology Open Fund Project (Grant No. IVSTSKL-202309), the Chongqing Technology Innovation and Application Development Special Major Project (Grant No. CSTB2023TIAD-STX0020), the College of Artificial Intelligence, Southwest University, and the State Key Laboratory of Intelligent Vehicle Safety Technology.
Abstract: Neuromorphic computing, inspired by the human brain, uses memristive devices for complex tasks. Recent studies show that self-organizing random nanowire networks can implement neuromorphic information processing, enabling data analysis. This paper presents a model of such nanowire networks with an improved conductance-variation profile. We apply these networks to temporal information processing via a reservoir computing scheme and propose an efficient data-encoding method based on voltage pulses. The nanowire network layer generates rich dynamic responses to the pulsed voltages, enabling time-series prediction. Our experiments use a double stochastic nanowire network architecture to process multiple input signals, outperforming traditional reservoir computing with fewer nodes, richer dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real time-series datasets, making neuromorphic nanowire networks a promising physical implementation of reservoir computing.
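The reservoir computing scheme the abstract refers to can be illustrated with a minimal software sketch. This is the generic echo-state formulation, not the paper's nanowire model: a fixed random recurrent layer supplies the dynamics (the role played physically by the nanowire network), and only a linear readout is trained. All sizes and parameters below are illustrative assumptions.

```python
import numpy as np

# Generic reservoir computing sketch (echo-state style), standing in for
# the physical nanowire reservoir described in the abstract.
rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a simple periodic signal.
t = np.arange(400)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])  # reservoir states driven by u[0..T-2]
y = u[1:]                  # targets: the next input value

# Only the linear readout is trained, here by ridge regression.
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ y)
pred = X @ W_out
print(np.mean((pred[100:] - y[100:]) ** 2))  # MSE after a washout period
```

In the paper's setting, the matrices above are replaced by the conductance dynamics of the nanowire network driven by encoded voltage pulses; only the readout training step would remain in software.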
Funding: the National Natural Science Foundation of China (No. 52074138), the Fundamental Research Project of Yunnan Province, China (No. 202001AS070030), and the Analysis and Testing Foundation of Kunming University of Science and Technology, China (No. 2018M20162101102).
Funding: This work was supported by the National Key Research and Development Program of China under Grant No. 2017YFB1003103, the Natural Science Research Foundation of Jilin Province of China under Grant No. 20190201193JC, and the Fundamental Research Funds for the Central Universities, JLU.
Abstract: A wide variety of intelligence accelerators with promising performance and energy efficiency have been deployed in a broad range of applications such as computer vision and speech recognition. However, low programming productivity hinders the deployment of deep learning accelerators. The low-level library invoked by high-level deep learning frameworks, which supports end-to-end execution of a given model, is designed to reduce the programming burden on intelligence accelerators. Unfortunately, it is inflexible: developers must build a network model for every deep learning application, which can entail unnecessary repetitive implementation. In this paper, we propose FlexPDA, a flexible and efficient programming framework for deep learning accelerators, which provides more optimization opportunities than the low-level library and enables quick porting of applications to intelligence accelerators for fast upgrades. We evaluate FlexPDA on 10 representative operators selected from deep learning algorithms and on an end-to-end network. The experimental results validate the effectiveness of FlexPDA, which achieves an end-to-end performance improvement of 1.620x over the low-level library.
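The programming-model gap the abstract describes can be sketched in a few lines. FlexPDA's actual API is not shown in the abstract, so every name below is invented for illustration: the point is only the contrast between a low-level library that exposes whole-model entry points and a framework that lets developers compose individual operators.

```python
# Hypothetical illustration (not FlexPDA's real API): composing
# operators directly instead of defining a full network model for
# each application, as a fixed low-level library would require.

class Op:
    """A single accelerator operator in this hypothetical framework."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def __call__(self, *args):
        return self.fn(*args)

# Register only the operators an application actually needs...
relu = Op("relu", lambda x: [max(v, 0.0) for v in x])
scale = Op("scale", lambda x, s: [v * s for v in x])

# ...and compose them freely, rather than wrapping the application
# in a complete network-model definition.
def pipeline(x):
    return scale(relu(x), 2.0)

print(pipeline([-1.0, 0.5, 3.0]))  # [0.0, 1.0, 6.0]
```

In a real framework of this kind, each `Op` body would lower to accelerator kernels, which is where the optimization opportunities beyond a fixed low-level library would come from.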