Square Neurons, Power Neurons, and Their Learning Algorithms

Abstract: In this paper, we introduce the concepts of square neurons, power neurons, and new learning algorithms based on square neurons and power neurons. First, we briefly review the basic idea of the Boltzmann Machine, specifically that the invariant distributions of the Boltzmann Machine generate Markov chains. We further review the ABM (Attrasoft Boltzmann Machine). Next, we review the θ-transformation and its completeness, i.e. any function can be expanded by the θ-transformation. The invariant distribution of the ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We review the linear neurons and the associated learning algorithm. We then discuss the problems of the exponential neurons used in the ABM, which are unstable, and the problems of the linear neurons, which do not discriminate the wrong answers from the right answers as sharply as the exponential neurons. Finally, we introduce the concepts of square neurons and power neurons. We also discuss the advantages of the learning algorithms based on square neurons and power neurons, which have the stability of the linear neurons and the sharp discrimination of the exponential neurons.
Author: Ying Liu
Source: American Journal of Computational Mathematics, 2018, Issue 4, pp. 296-313 (18 pages)
Keywords: AI; Boltzmann Machine; Markov Chain; Invariant Distribution; Completeness; Deep Neural Network
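
The abstract contrasts exponential, linear, square, and power neurons by their stability and by how sharply they discriminate right answers from wrong ones. As a rough illustration of that trade-off only, the short Python sketch below prints the growth of generic linear, power (square when n = 2), and exponential activations; the functional forms, the function names, and the use of numpy are assumptions made for illustration and are not taken from the paper.

# A minimal illustrative sketch, not the paper's implementation: the
# functional forms below (linear, square/power, exponential) are assumed
# only to visualise the stability vs. discrimination trade-off described
# in the abstract; the paper's actual neuron definitions are not
# reproduced on this page.
import numpy as np

def linear_neuron(s):
    # Stable (grows slowly) but separates strong and weak inputs only weakly.
    return s

def power_neuron(s, n=2):
    # n = 2 is the "square" case; larger n discriminates more sharply
    # while the output stays polynomially bounded (assumed behaviour).
    return np.power(s, n)

def exponential_neuron(s):
    # Sharpest discrimination, but the output explodes for large inputs,
    # which is the instability the abstract attributes to exponential neurons.
    return np.exp(s)

if __name__ == "__main__":
    s = np.array([1.0, 5.0, 10.0, 50.0])
    print("linear:      ", linear_neuron(s))
    print("square:      ", power_neuron(s, 2))
    print("power (n=4): ", power_neuron(s, 4))
    print("exponential: ", exponential_neuron(s))  # roughly 5e21 at s = 50

Running the sketch shows the exponential activation dwarfing the others at moderate inputs while the linear one barely separates them, which is the gap the square and power neurons are said to bridge.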