Abstract
Neural networks (NNs) are the functional units of deep learning and are known to mimic the behavior of the human brain to solve complex data-driven problems. Whenever we train a neural network, we must attend to its generalization: the performance of an artificial neural network (ANN) depends largely on its ability to generalize to unseen data. In this paper, we propose an approach to enhancing the generalization capability of ANNs using structural redundancy. We describe a novel perspective on handling input data prototypes and their impact on generalization, which could improve the accuracy and reliability of ANN architectures.