Abstract: This paper studies the generalization capability of feedforward neural networks (FNNs). The mechanism of FNNs for classification is investigated from the geometric and probabilistic viewpoints. It is pointed out that the outputs of the output layer in FNNs for classification correspond to estimates of the a posteriori probability of the input pattern samples with desired outputs of 1 or 0. A theorem for the generalized kernel function in radial basis function networks (RBFNs) is given. For a two-layer perceptron network (2-LPN), an idea of using extended samples to improve the generalization capability is proposed. Finally, experimental results on radar target classification are given to verify the generalization capability of RBFNs.
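The correspondence between network outputs and a posteriori probabilities can be made precise in the standard least-squares setting; the following is a sketch of the well-known argument under the assumption of a squared-error cost and a sufficiently flexible network y(x), and may differ in detail from the derivation given in the paper. With a binary target t in {0, 1}, the expected squared error decomposes as

\[
E\big[(y(\mathbf{x}) - t)^2\big]
  = E_{\mathbf{x}}\big[(y(\mathbf{x}) - E[t \mid \mathbf{x}])^2\big]
  + E_{\mathbf{x}}\big[\operatorname{Var}(t \mid \mathbf{x})\big],
\]

so the minimizing output is the conditional expectation of the target,

\[
y^{*}(\mathbf{x}) = E[t \mid \mathbf{x}] = P(t = 1 \mid \mathbf{x}),
\]

i.e. the output of an ideally trained classifier with 1/0 desired outputs approximates the posterior probability of the corresponding class given the input pattern.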