Abstract
Google’s AlphaGo showcases the impressive performance of deep learning, whose backbone is the highly versatile neural network. Each network consists of layers of interconnected neurons, and the nonlinear activation function inside each neuron is one of the key factors behind the unprecedented achievements of deep learning. Building quantum neural networks has been a long-standing pursuit of many researchers since the 1990s, unfortunately without much success. The main challenge is designing a nonlinear activation function inside the quantum neuron, because the laws of quantum mechanics require that operations on quantum states be unitary and therefore linear. A recent work uses a quantum circuit technique called repeat-until-success to realize a nonlinear activation function inside a quantum neuron, which is the hard part of constructing such a neuron. However, the activation function used in that work is based on the periodic tangent function. Because of this periodicity, the input to the function must be restricted to the range [0, π/2), which is a serious constraint for real-world applications. The periodicity also makes such neurons ill-suited to training with gradient descent, since the function’s derivatives oscillate. The purpose of our study is to propose a new nonlinear activation function that is not periodic, so that it can accept any real number as input and its neurons can be trained efficiently with gradient descent. Our quantum neuron offers the full benefits of a quantum entity (superposition, entanglement, and interference) while also enjoying the full benefits of a classical entity: it accepts any real number as input and can be trained with gradient descent. The performance of quantum neurons equipped with our new activation function is analyzed on IBM’s 5Q quantum computer and IBM’s quantum simulator.