Abstract: TiO2 nanocrystal/graphene (TiO2/GR) composites were prepared by combining flocculation and hydrothermal reduction, using graphite oxide and a TiO2 colloid as precursors. The obtained materials were examined by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, N2 adsorption-desorption, and ultraviolet-visible diffuse reflectance spectroscopy. The results suggest that the presence of TiO2 nanocrystals with diameters of about 15 nm prevents the GR nanosheets from agglomerating. Owing to the uniform distribution of TiO2 nanocrystals on the GR nanosheets, the TiO2/GR composite exhibits stronger light absorption in the visible region, a higher adsorption capacity for methylene blue (MB), and more efficient charge separation and transport than pure TiO2. Moreover, the TiO2/GR composite with a GR content of 30% shows a higher photocatalytic removal efficiency of MB from water than both pure TiO2 and commercial P25 under UV and sunlight irradiation.
Funding: supported by the National Key Research and Development Program of China under Grant Nos. 2018YFB2202-603 and 2020AAA0104602.
Abstract: Network-on-Chip (NoC) is widely adopted in neuromorphic processors to support communication between neurons in spiking neural networks (SNNs). However, SNNs generate enormous numbers of spike packets due to their one-to-many traffic pattern, and these packets can place heavy communication pressure on the NoC. We propose a path-based multicast routing method to alleviate this pressure. First, all destination nodes of each source node on the NoC are divided into several clusters. Second, multicast paths within each cluster are created based on the Hamiltonian path algorithm. The proposed routing reduces path length and balances the communication load across routers. Finally, we design a lightweight NoC microarchitecture that includes a customized multicast packet format and a routing function. We use six datasets to verify the proposed multicast routing. Compared with unicast routing, the path-based multicast routing achieves a 5.1x speedup in running time, and its hop count and maximum transmission latency are reduced by 68.9% and 77.4%, respectively. The maximum path length is reduced by 68.3% and 67.2% compared with dual-path (DP) and multi-path (MP) multicast routing, respectively. Therefore, the proposed multicast routing improves average latency and throughput compared with DP and MP multicast routing.
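The clustering and path-construction steps described in this abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a 2D-mesh NoC with the standard "snake" Hamiltonian labeling, and it splits destinations into just two clusters (ascending and descending along the Hamiltonian path); the function names are hypothetical.

```python
# Illustrative sketch of Hamiltonian-path-based multicast path construction
# on a 2D-mesh NoC. The paper's actual clustering and router microarchitecture
# are not reproduced here.

def hamiltonian_label(x, y, width):
    """Snake ordering: even rows run left-to-right, odd rows right-to-left."""
    return y * width + (x if y % 2 == 0 else width - 1 - x)

def build_multicast_paths(source, destinations, width):
    """Split destinations into two clusters relative to the source's position
    on the Hamiltonian path, then visit each cluster in path order, so every
    packet only ever moves in one direction along the path."""
    src = hamiltonian_label(*source, width)
    labels = sorted((hamiltonian_label(*d, width), d) for d in destinations)
    ascending = [d for l, d in labels if l > src]                 # increasing labels
    descending = [d for l, d in sorted(labels, reverse=True) if l < src]
    return {"ascending": ascending, "descending": descending}

# Example on a 4x4 mesh: source (1, 1) multicasts to four destinations.
paths = build_multicast_paths((1, 1), [(0, 0), (3, 0), (2, 2), (0, 2)], width=4)
```

Visiting each cluster in Hamiltonian order is what keeps the multicast deadlock-free and bounds the path length: each cluster is served by a single path that never reverses direction.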
Funding: supported by the HeGaoJi Program of China under Grant Nos. 2017ZX01028103-002 and 2017ZX01038104-002, and by the National Natural Science Foundation of China under Grant No. 61472432.
Abstract: Neuromorphic computing is considered a promising future direction for machine learning, and it provides a new approach to cognitive computing. Inspired by the excellent performance of spiking neural networks (SNNs) in low-power consumption and parallel computing, many groups have tried to implement SNNs on hardware platforms. However, the efficiency of training SNNs with neuromorphic algorithms is not yet ideal. To address this, Michael et al. proposed a method that solves the problem with the help of deep neural networks (DNNs): a well-trained DNN can easily be converted into a spiking convolutional neural network (SCNN). So far, little work has focused on hardware acceleration of SCNNs. The motivation of this paper is to design an SNN processor that accelerates inference for SNNs obtained by this DNN-to-SNN method. We propose SIES (Spiking Neural Network Inference Engine for SCNN Accelerating). It uses a systolic array to compute membrane potential increments and integrates an optional max-pooling hardware module to reduce data movement between the host and the SIES. We also design a hardware data setup mechanism for the convolutional layers that minimizes input-spike preparation time. We implement the SIES on an FPGA XCVU440. It supports up to 4,000 neurons and 256,000 synapses, runs at a working frequency of 200 MHz, and achieves a peak performance of 1.5625 TOPS.
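The core computation the abstract says the systolic array accelerates, accumulating membrane-potential increments from binary input spikes, can be sketched in software. This is a generic integrate-and-fire step under assumed semantics (reset-to-zero on firing, threshold 1.0), not the SIES datapath itself; all names are illustrative.

```python
# Sketch of one SNN inference timestep: membrane-potential increments are the
# product of a binary spike vector and the weight matrix (the matmul a systolic
# array would perform), followed by integrate-and-fire thresholding.
import numpy as np

def snn_step(v, spikes_in, weights, threshold=1.0):
    """One integrate-and-fire timestep.
    v: membrane potentials, shape (n_out,)
    spikes_in: binary input spikes, shape (n_in,)
    weights: synaptic weights, shape (n_in, n_out)"""
    v = v + spikes_in @ weights                 # only spiking rows contribute
    spikes_out = (v >= threshold).astype(np.uint8)
    v = np.where(spikes_out == 1, 0.0, v)       # reset neurons that fired
    return v, spikes_out

# Two inputs, two output neurons; both inputs spike this timestep.
w = np.array([[0.6, 0.2],
              [0.6, 0.2]])
v, s = snn_step(np.zeros(2), np.array([1, 1]), w)
```

Because the inputs are binary, the "multiply" in the matmul degenerates to conditional accumulation, which is why a hardware engine for this workload can use adders in place of full multipliers.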