Journal Articles
3 articles found
1. Preparation of TiO2 Nanocrystals/Graphene Composite and Its Photocatalytic Performance
Authors: Ling-juan Deng, Yuan-zi Gu, Wei-xia Xu, Zhan-ying Ma. Chinese Journal of Chemical Physics (SCIE, CAS, CSCD), 2014, No. 3, pp. 321-326.
A TiO2 nanocrystals/graphene (TiO2/GR) composite is prepared by combining flocculation and hydrothermal reduction, using graphite oxide and TiO2 colloid as precursors. The obtained materials are examined by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, N2 adsorption-desorption, and ultraviolet-visible diffuse reflectance spectroscopy. The results suggest that the presence of TiO2 nanocrystals with diameters of about 15 nm prevents the GR nanosheets from agglomerating. Owing to the uniform distribution of TiO2 nanocrystals on the GR nanosheets, the TiO2/GR composite exhibits stronger light absorption in the visible region, higher adsorption capacity for methylene blue (MB), and higher efficiency of charge separation and transport than pure TiO2. Moreover, the TiO2/GR composite with a GR content of 30% shows higher photocatalytic removal efficiency of MB from water than pure TiO2 and commercial P25 under both UV and sunlight irradiation.
Keywords: TiO2 nanocrystals/graphene composite; photocatalyst; chemical adsorptivity; extended light absorption; efficient charge separation
2. Path-Based Multicast Routing for Network-on-Chip of the Neuromorphic Processor
Authors: 康子扬, 李石明, 王世英, 曲连华, 龚锐, 石伟, 徐炜遐, 王蕾. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, No. 5, pp. 1098-1112.
Network-on-Chip (NoC) is widely adopted in neuromorphic processors to support communication between neurons in spiking neural networks (SNNs). However, SNNs generate enormous numbers of spiking packets due to their one-to-many traffic pattern, and these packets put communication pressure on the NoC. We propose a path-based multicast routing method to alleviate this pressure. First, all destination nodes of each source node on the NoC are divided into several clusters. Second, multicast paths within the clusters are created based on the Hamiltonian path algorithm. The proposed routing reduces path length and balances the communication load across routers. Finally, we design a lightweight NoC microarchitecture that involves a customized multicast packet format and a routing function. We use six datasets to verify the proposed multicast routing. Compared with unicast routing, path-based multicast routing achieves a 5.1x speedup in running time, and its number of hops and maximum transmission latency are reduced by 68.9% and 77.4%, respectively. The maximum path length is reduced by 68.3% and 67.2% compared with dual-path (DP) and multi-path (MP) multicast routing, respectively. Therefore, the proposed multicast routing improves average latency and throughput compared with DP and MP multicast routing.
Keywords: neuromorphic processor; spiking neural network (SNN); Network-on-Chip (NoC); path-based multicast
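The abstract above describes the routing flow only at a high level: destinations are grouped into clusters, and each cluster is served by a single path that visits nodes in Hamiltonian-path order. Below is a minimal Python sketch of that idea, assuming a snake-style Hamiltonian labelling of a 2D mesh and a simple label-order split into clusters; the names hamiltonian_label, build_multicast_paths, and num_clusters are illustrative assumptions, not the paper's implementation.

# A minimal sketch of path-based multicast on a 2D mesh, assuming a
# snake-style Hamiltonian labelling and label-order clustering.

def hamiltonian_label(x, y, width):
    """Snake (boustrophedon) labelling: even rows left-to-right, odd rows right-to-left."""
    return y * width + (x if y % 2 == 0 else width - 1 - x)

def build_multicast_paths(src, dests, width, num_clusters=2):
    """Split destinations into label-ordered clusters and emit one path per cluster.

    src and dests are (x, y) tuples; each returned path visits its destinations
    in Hamiltonian-label order, so the packet moves monotonically along the labelling.
    """
    ordered = sorted(dests, key=lambda n: hamiltonian_label(*n, width))
    size = max(1, -(-len(ordered) // num_clusters))          # ceiling division
    clusters = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    return [[src] + cluster for cluster in clusters]          # one packet per path

if __name__ == "__main__":
    paths = build_multicast_paths(src=(0, 0),
                                  dests=[(3, 1), (1, 2), (2, 0), (0, 3)],
                                  width=4)
    for p in paths:
        print(" -> ".join(str(n) for n in p))

Because every path visits its destinations in a fixed label order, each router can forward the packet with a simple label comparison, which is what generally keeps the routing function of Hamiltonian-path schemes lightweight.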
3. SIES: A Novel Implementation of Spiking Convolutional Neural Network Inference Engine on Field-Programmable Gate Array
Authors: Shu-Quan Wang, Lei Wang, Yu Deng, Zhi-Jie Yang, Sha-Sha Guo, Zi-Yang Kang, Yu-Feng Guo, Wei-xia Xu. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2020, No. 2, pp. 475-489.
Neuromorphic computing is considered to be the future of machine learning and provides a new way of cognitive computing. Inspired by the excellent performance of spiking neural networks (SNNs) in low-power consumption and parallel computing, many groups have tried to implement SNNs on hardware platforms. However, the efficiency of training SNNs with neuromorphic algorithms is not yet ideal. Facing this, Michael et al. proposed a method that solves the problem with the help of a DNN (deep neural network): a well-trained DNN can easily be converted into an SCNN (spiking convolutional neural network). So far, little work has focused on hardware acceleration of SCNNs. The motivation of this paper is to design an SNN processor that accelerates inference for SNNs obtained by this DNN-to-SNN method. We propose SIES (Spiking Neural Network Inference Engine for SCNN Accelerating). It uses a systolic array to compute membrane potential increments and integrates an optional max-pooling hardware module to reduce extra data movement between the host and the SIES. We also design a hardware data setup mechanism for the convolutional layer on the SIES, which minimizes the time needed to prepare input spikes. We implement the SIES on the FPGA XCVU440; it supports up to 4,000 neurons and 256,000 synapses, runs at a working frequency of 200 MHz, and reaches a peak performance of 1.5625 TOPS.
Keywords: spiking neural network (SNN); field-programmable gate array (FPGA); neuromorphic; systolic array; spiking convolutional neural network (SCNN); integrate-and-fire (I&F) model; hardware accelerator
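The core computation that the abstract says SIES maps onto its systolic array is the per-timestep membrane-potential increment of integrate-and-fire neurons driven by binary input spikes. The NumPy sketch below illustrates that step for one fully connected layer; the function name if_layer_step, the threshold value, and the reset-to-zero behaviour are assumptions for illustration, not details taken from the paper.

# A minimal NumPy sketch of one integrate-and-fire timestep: binary spikes
# select weight columns, increments accumulate into the membrane potential,
# and neurons crossing the threshold fire and reset (assumed reset-to-zero).
import numpy as np

def if_layer_step(v_mem, weights, in_spikes, threshold=1.0):
    """One timestep of an integrate-and-fire layer.

    v_mem:     (n_out,)       membrane potentials carried across timesteps
    weights:   (n_out, n_in)  synaptic weights (e.g., from a converted DNN)
    in_spikes: (n_in,)        binary spike vector for this timestep
    """
    v_mem = v_mem + weights @ in_spikes                  # membrane potential increments
    out_spikes = (v_mem >= threshold).astype(np.float32)
    v_mem = np.where(out_spikes > 0, 0.0, v_mem)         # reset fired neurons
    return v_mem, out_spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.5, size=(4, 8)).astype(np.float32)
    v = np.zeros(4, dtype=np.float32)
    for t in range(3):
        spikes_in = (rng.random(8) < 0.3).astype(np.float32)
        v, spikes_out = if_layer_step(v, w, spikes_in)
        print(f"t={t} out={spikes_out.astype(int)}")

Because the input spikes are binary, the matrix-vector product reduces to selectively accumulating weight columns, which is why this step is a natural fit for an accumulate-only systolic array.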