Funding: the National Natural Science Foundation of China (Grant Nos. 61972227 and 61672018), the Natural Science Foundation of Shandong Province (Grant No. ZR2019MF051), the Primary Research and Development Plan of Shandong Province (Grant No. 2018GGX101013), and the Fostering Project of Dominant Discipline and Talent Team of Shandong Province Higher Education Institutions.
Abstract: This paper proposes a kernel-blending connection approximated by a neural network (KBNN) for image classification. A kernel mapping connection structure, guaranteed by the function approximation theorem, is devised to blend feature extraction and feature classification through neural network learning. First, a feature extractor learns features from the raw images. Next, an automatically constructed kernel mapping connection maps the feature vectors into a feature space. Finally, a linear classifier is used as the output layer of the neural network to provide classification results. Furthermore, a novel loss function combining a cross-entropy loss and a hinge loss is proposed to improve the generalizability of the neural network. Experimental results on three well-known image datasets illustrate that the proposed method has good classification accuracy and generalizability.
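The abstract describes a three-stage pipeline (feature extractor, kernel mapping connection, linear classifier) trained with a combined cross-entropy and hinge loss. The sketch below is only a minimal illustration of that structure, not the authors' implementation: the backbone, the choice of a learned nonlinear layer for the kernel mapping connection, the Tanh activation, the dimensions, and the names `KBNNSketch`, `blended_loss`, and the weighting factor `lam` are all assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KBNNSketch(nn.Module):
    """Hypothetical sketch of the pipeline in the abstract:
    feature extractor -> kernel mapping connection -> linear classifier."""

    def __init__(self, num_classes: int, feature_dim: int = 128, mapped_dim: int = 256):
        super().__init__()
        # Placeholder CNN feature extractor; the abstract does not fix a backbone.
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        # Kernel mapping connection approximated here by a learned nonlinear layer
        # (an assumption; the abstract only states it maps features into a feature space).
        self.kernel_mapping = nn.Sequential(
            nn.Linear(feature_dim, mapped_dim), nn.Tanh(),
        )
        # Linear classifier used as the output layer.
        self.classifier = nn.Linear(mapped_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.feature_extractor(x)
        mapped = self.kernel_mapping(features)
        return self.classifier(mapped)


def blended_loss(logits: torch.Tensor, targets: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Cross-entropy plus multi-class hinge loss; the weight `lam` is a
    placeholder, not a value taken from the paper."""
    ce = F.cross_entropy(logits, targets)
    hinge = F.multi_margin_loss(logits, targets)
    return ce + lam * hinge


# Example usage on a dummy batch of 3-channel images.
model = KBNNSketch(num_classes=10)
images = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
loss = blended_loss(model(images), labels)
```

The intent of the combined objective, as stated in the abstract, is that the hinge term adds a margin-based penalty on top of the probabilistic cross-entropy term to improve generalizability; how the two terms are actually balanced is specified in the paper itself, not here.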