Abstract
To address the overfitting that deep learning models exhibit when trained on small samples, two optimization methods, dropout and dropconnect, are proposed. Both methods improve the fine-tuning stage of deep learning models, minimizing the overfitting that arises when a deep network is trained on limited data and making the weight-update process more independent, so that it no longer relies on fixed interactions between hidden-layer nodes; the recognition error rate is reduced as well. The methods and models were used to train on and recognize the MNIST handwritten digit data set and a self-built isolated-word speech vocabulary database. The results show that introducing dropout and dropconnect on top of a deep belief network improves the recognition rate and alleviates overfitting.
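The following is a minimal NumPy sketch contrasting the two regularization schemes mentioned in the abstract: dropout, which randomly zeroes whole hidden-unit activations, and dropconnect, which randomly zeroes individual weights. The layer sizes, keep probability p, and all variable names are illustrative assumptions, not details taken from the paper's deep belief network implementation.

# Minimal sketch of dropout vs. dropconnect for one hidden layer
# during fine-tuning (illustrative sizes and keep probability).
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                   # probability of keeping a unit / weight
x = rng.standard_normal((1, 100))         # input activations for one sample
W = rng.standard_normal((100, 50)) * 0.1  # weights of one hidden layer
b = np.zeros(50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Dropout: zero whole hidden-unit activations at random.
h = sigmoid(x @ W + b)
unit_mask = rng.random(h.shape) < p
h_dropout = h * unit_mask / p             # inverted-dropout scaling

# Dropconnect: zero individual weights at random before the activation.
weight_mask = rng.random(W.shape) < p
h_dropconnect = sigmoid(x @ (W * weight_mask) / p + b)

Because dropconnect masks single connections rather than whole units, each weight update depends less on any fixed pairing of hidden nodes, which is the independence property the abstract refers to.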
Authors
彭玉青
刘帆
高晴晴
张媛媛
闫倩
PENG Yuqing, LIU Fan, GAO Qingqing, ZHANG Yuanyuan, YAN Qian (School of Computer Science and Software, Hebei University of Technology, Tianjin 300401, China)
Source
《郑州大学学报(理学版)》
CAS
Peking University Core Journal (北大核心)
2016, No. 4, pp. 30-35 (6 pages)
Journal of Zhengzhou University (Natural Science Edition)
Funding
National Natural Science Foundation of China (51175145)
Key Project of Science and Technology Research in Higher Education Institutions of Hebei Province (ZD2014030)
Keywords
deep learning
speech recognition
neural network
deep belief network