Abstract
To reduce the long training time required when a convolutional neural network is trained on speech samples for speech recognition, we propose applying fractional-order theory to the network's node function, the Sigmoid function. This accelerates the convergence of the Sigmoid function without degrading the network's speech recognition accuracy, thereby shortening training time and improving the training efficiency of the whole network. Experimental results show that, with accuracy preserved, the fractional-order treatment effectively reduces the time spent on training.
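The paper's exact fractional-order formulation is not reproduced in this excerpt. As background only, a standard way to evaluate a fractional-order derivative numerically is the Grünwald-Letnikov scheme; the sketch below applies it to the Sigmoid function. The function names, step size, and the choice of scheme are illustrative assumptions, not taken from the paper.

```python
import math

def sigmoid(x):
    # Standard logistic Sigmoid node function.
    return 1.0 / (1.0 + math.exp(-x))

def gl_fractional_derivative(f, x, alpha, h=1e-3, n_terms=200):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f at x.

    Uses the recurrence c_{k+1} = c_k * (k - alpha) / (k + 1) for the
    signed generalized binomial coefficients (-1)^k * C(alpha, k).
    Illustrative sketch, not the paper's formulation.
    """
    total = 0.0
    coeff = 1.0  # c_0 = 1
    for k in range(n_terms):
        total += coeff * f(x - k * h)
        coeff *= (k - alpha) / (k + 1)
    return total / h ** alpha

# Sanity checks: alpha = 1 recovers the ordinary derivative
# (sigmoid'(0) = 0.25), and alpha = 0 recovers the function itself.
print(gl_fractional_derivative(sigmoid, 0.0, 1.0))  # close to 0.25
print(gl_fractional_derivative(sigmoid, 0.0, 0.5))  # intermediate order
```

For a fractional order 0 < alpha < 1 the result interpolates between the function value and its first derivative, which is the kind of tunable slope behavior the abstract attributes to the fractional-order Sigmoid.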
Source
Journal of Harbin University of Science and Technology (《哈尔滨理工大学学报》)
CAS
Peking University Core Journal (北大核心)
2016, No. 3, pp. 34-38 (5 pages)
Funding
National Natural Science Foundation of China (61403109)
Natural Science Foundation of Heilongjiang Province (F201240)
Science and Technology Research Project of the Heilongjiang Provincial Department of Education (12531571)
Keywords
speech recognition
convolutional neural networks
fractional order
Sigmoid function