Funding: Ningbo Natural Science Foundation (No. 2006A610016) and the Foundation of the National Education Ministry for Returned Overseas Chinese Students and Scholars (SRF for ROCS, SEM, No. 2006699).
Abstract: This paper proposes a Back Propagation (BP) neural network with momentum enhancement that aims to achieve smooth convergence for aggregate volume estimation. The network inputs are first obtained by optically measuring eight geometry-related parameters from a given particle image. To simplify the network structure, principal component analysis (PCA) is applied to reduce the input dimension. The specific network structure is then finalized based on both empirical expertise and an analysis of the appropriate number of neurons in the hidden layer. The network is trained on a finite set of randomly picked particles. The training and test results suggest that, compared with a generic BP network, the proposed network trains considerably faster, its structure is much simpler, and its estimation error is kept within 2%, which meets the technical requirements.
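Illustrative sketch (not taken from the paper): the minimal Python/NumPy example below shows the kind of pipeline the abstract describes, namely PCA-based reduction of the geometry-related inputs followed by a single-hidden-layer BP network trained with a momentum term. All data, layer sizes, and hyper-parameters here are hypothetical placeholders; the paper's measured particle data and chosen network settings are not reproduced.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the eight geometry-related inputs and target volumes.
X = rng.normal(size=(200, 8))
y = (X[:, :3] ** 2).sum(axis=1, keepdims=True)  # placeholder target, not real volumes

# --- PCA to reduce input dimension (keep components covering ~95% variance) ---
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
explained = np.cumsum(eigvals[order]) / eigvals.sum()
k = int(np.searchsorted(explained, 0.95)) + 1
W_pca = eigvecs[:, order[:k]]
Z = Xc @ W_pca                      # reduced inputs fed to the network

# --- Single-hidden-layer BP network trained with a momentum term ---
n_hidden, lr, momentum = 6, 0.01, 0.9          # hypothetical settings
W1 = rng.normal(scale=0.1, size=(k, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for epoch in range(2000):
    # forward pass
    h = np.tanh(Z @ W1 + b1)
    out = h @ W2 + b2
    err = out - y

    # backward pass (mean-squared-error gradients)
    gW2 = h.T @ err / len(Z); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = Z.T @ dh / len(Z); gb1 = dh.mean(axis=0)

    # momentum update: blend the previous step into the new one
    vW1 = momentum * vW1 - lr * gW1; W1 += vW1
    vb1 = momentum * vb1 - lr * gb1; b1 += vb1
    vW2 = momentum * vW2 - lr * gW2; W2 += vW2
    vb2 = momentum * vb2 - lr * gb2; b2 += vb2

The momentum term carries part of the previous weight update into the current one, which is what damps oscillations and smooths convergence relative to plain gradient descent.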