Funding: National Natural Science Foundation of China (No. 61732007); Strategic Priority Research Program of the Chinese Academy of Sciences (XDB32050200, XDC01020000).
Abstract: Deep neural networks (DNNs) have drawn great attention as they achieve state-of-the-art results on many tasks. Compared to DNNs, spiking neural networks (SNNs), considered the next generation of neural networks, fail to achieve comparable performance, especially on tasks with large problem sizes. Much previous work has tried to close the gap between DNNs and SNNs, but only with small networks on simple tasks. This work proposes a simple but effective way to construct deep spiking neural networks (DSNNs) by transferring the learned ability of DNNs to SNNs. DSNNs achieve comparable accuracy on large networks and complex datasets.
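The idea of transferring a DNN's learned ability to an SNN can be sketched with the standard rate-coding conversion: copy the trained ReLU weights unchanged into an integrate-and-fire layer, whose per-step firing rate then approximates the ReLU activation. This is a minimal generic illustration of the conversion principle, not the authors' exact method; the weights, input, and threshold are made-up values.

```python
import numpy as np

# Toy "trained" ReLU weights and an input vector; made-up values for
# illustration, not taken from the paper.
W = np.array([[0.3, -0.2, 0.5],
              [0.1,  0.4, -0.3]])
x = np.array([0.5, 0.8, 0.2])

def relu_layer(x, W):
    # The analog DNN layer: exact ReLU activations.
    return np.maximum(0.0, W @ x)

def spiking_layer(x, W, steps=1000, v_th=1.0):
    """Integrate-and-fire layer reusing the DNN weights unchanged.
    The per-step firing rate approximates the ReLU activation
    (error bounded by 1/steps when activations stay below v_th)."""
    v = np.zeros(W.shape[0])       # membrane potentials
    counts = np.zeros(W.shape[0])  # spike counts
    for _ in range(steps):
        v += W @ x                 # constant-current injection
        fired = v >= v_th
        counts += fired
        v[fired] -= v_th           # soft reset: subtract the threshold
    return counts / steps

print(relu_layer(x, W))      # exact activations
print(spiking_layer(x, W))   # spike-rate approximation
```

Note that a neuron with a negative pre-activation never reaches the threshold and stays silent, which is exactly how the rate code reproduces ReLU's zero branch.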
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2018YFB1306600), the National Natural Science Foundation of China (Grant Nos. 62076207, 62076208, and U20A20227), and the Science and Technology Plan Program of Yubei District of Chongqing (Grant No. 2021-17).
Abstract: Spiking neural networks (SNNs) are widely used in many fields because they operate more like biological neurons. However, due to their computational complexity, many SNN implementations are limited to computer programs. First, this paper proposes a multi-synaptic circuit (MSC) based on memristors, which realizes multi-synapse connections between neurons and multi-delay transmission of pulse signals. The synapse circuit participates in the computation of the network while transmitting the pulse signal, offloading to hardware complex calculations that would otherwise be done in software. Secondly, a new spiking neuron circuit based on the leaky integrate-and-fire (LIF) model is designed. The amplitude and width of the pulses emitted by the spiking neuron circuit can be adjusted as required. The combination of the spiking neuron circuit and the MSC forms the multi-synaptic spiking neuron (MSSN). The MSSN was simulated in PSPICE and the expected results were obtained, verifying the feasibility of the circuit. Finally, a small SNN was designed based on the mathematical model of the MSSN. After training and optimization, the SNN achieves good accuracy on the Iris classification dataset, verifying the practicability of the design.
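The LIF model underlying the neuron circuit above can be sketched as a discrete-time simulation: the membrane potential leaks toward its resting value, integrates the input current, and emits a spike and resets when it crosses a threshold. This is the generic textbook LIF dynamics, not the paper's memristor circuit; the time constant, threshold, and input current below are illustrative assumptions.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron.
# All parameters are illustrative, not taken from the paper's circuit.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_th=1.0):
    """Return the list of time steps at which the neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leak toward v_rest while integrating the input current.
        v += (dt / tau) * (-(v - v_rest) + i_in)
        if v >= v_th:          # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # reset the membrane after firing
    return spikes

# A constant supra-threshold current produces regular spiking,
# while a sub-threshold current never fires.
print(simulate_lif([1.5] * 100)[:3])   # → [21, 43, 65]
print(simulate_lif([0.5] * 100))       # → []
```

The sub-threshold case is silent because the membrane potential converges to the input value 0.5, which stays below the threshold of 1.0.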
Abstract: The purpose of this study is to analyze and then model, using neural network models, the performance of a Web server in order to improve it. In our experiments, the parameters taken into account are the number of client instances simultaneously requesting the same Web page containing the same SQL queries, the number of tables queried by the SQL, the number of records to be displayed on the requested Web pages, and the type of database server used. This work demonstrates the influence of these parameters on the results of Web server performance analyses. For the MySQL database server, the mean response time of the Web server tends to grow as the number of simultaneous client connections and the number of records to display increase. For the PostgreSQL database server, the mean response time changes little, even as the number of clients and/or the amount of information to be displayed on Web pages increases. Although the mean response time of the Web server is generally slightly lower with the MySQL database server, it is more stable with the PostgreSQL database server.
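The measurement setup described above, many simultaneous clients requesting the same page while the mean response time is recorded, can be sketched as a small benchmark harness. This is a hedged illustration of the methodology, not the authors' tooling; `fake_request` is a stand-in for a real page fetch such as `urllib.request.urlopen(url).read()`.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def mean_response_time(request_fn, n_clients):
    """Fire n_clients concurrent requests and return the mean latency
    in seconds. request_fn stands in for one client fetching the page."""
    def timed_call(_):
        start = time.perf_counter()
        request_fn()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        latencies = list(pool.map(timed_call, range(n_clients)))
    return sum(latencies) / len(latencies)

# Stand-in for a real HTTP request to the Web server under test.
def fake_request():
    time.sleep(0.01)

for n in (1, 10, 50):
    print(n, round(mean_response_time(fake_request, n), 4))
```

Repeating the loop for each database backend and record-count setting yields the kind of response-time curves the study compares between MySQL and PostgreSQL.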
Funding: Project supported in part by the National Key Research and Development Program of China (Grant No. 2021YFA0716400), the National Natural Science Foundation of China (Grant Nos. 62225405, 62150027, 61974080, 61991443, 61975093, 61927811, 61875104, 62175126, and 62235011), the Ministry of Science and Technology of China (Grant Nos. 2021ZD0109900 and 2021ZD0109903), the Collaborative Innovation Center of Solid-State Lighting and Energy-Saving Electronics, and the Tsinghua University Initiative Scientific Research Program.
Abstract: AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems demand computing power that conventional computing hardware can barely supply. In the post-Moore era, the increase in computing power brought about by shrinking CMOS feature sizes in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand of AI computing. To address this issue, technical approaches such as neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms in a far more parallel and energy-efficient manner. Inspired by the architecture of the human neural network, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in neuromorphic architectures such as spiking neural networks (SNNs), the development of this field has incubated promising technologies such as in-sensor computing, which brings new opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.