
Method of Face Recognition and Dimension Reduction Based on Curv-SAE Feature Fusion
Cited by: 4
Abstract: Compared with traditional dimension reduction algorithms, the stacked autoencoder (SAE) used in deep learning can learn features effectively and achieve efficient dimension reduction, but it is highly sensitive to its input features. The second-generation discrete curvelet transform (DCT) extracts directional information from a face image, including edge and coarse (approximation) features, which guarantees that the SAE receives sufficiently rich inputs and thus compensates for this weakness. A face dimension reduction and recognition algorithm based on Curv-SAE feature fusion is therefore proposed: the face images are first transformed with the DCT to obtain curvelet faces, which serve as the input features for training the SAE; the fused features are then fed into a classifier for recognition. Experiments on the ORL and FERET face databases show that curvelet features carry richer information than wavelet features, and that, compared with traditional dimension reduction algorithms, the SAE yields a more complete feature representation and higher recognition accuracy.
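The Python sketch below illustrates the pipeline described in the abstract (curvelet-style feature extraction → stacked-autoencoder dimension reduction → classifier). It is a minimal sketch, not the authors' implementation: curvelet_features() is a hypothetical stand-in (an FFT-magnitude proxy) for a true second-generation discrete curvelet transform, the autoencoder is trained end to end rather than greedily layer by layer, the fusion of features from different SAE layers is omitted, and the SVM classifier, layer sizes, and random stand-in data are all assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

def curvelet_features(img):
    # Placeholder for second-generation discrete curvelet coefficients
    # (edge + coarse information); an FFT-magnitude proxy so the sketch runs.
    return np.abs(np.fft.fft2(img)).flatten().astype("float32")

def build_sae(input_dim, hidden1=256, code_dim=64):
    # Two-layer stacked autoencoder; the bottleneck ("code") is the reduced feature.
    inputs = tf.keras.layers.Input(shape=(input_dim,))
    h1 = tf.keras.layers.Dense(hidden1, activation="sigmoid")(inputs)
    code = tf.keras.layers.Dense(code_dim, activation="sigmoid")(h1)
    d1 = tf.keras.layers.Dense(hidden1, activation="sigmoid")(code)
    outputs = tf.keras.layers.Dense(input_dim, activation="sigmoid")(d1)
    autoencoder = tf.keras.Model(inputs, outputs)
    encoder = tf.keras.Model(inputs, code)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

# Toy data standing in for ORL/FERET face images (grey values in [0, 1]).
rng = np.random.default_rng(0)
images = rng.random((40, 32, 32)).astype("float32")
labels = rng.integers(0, 4, size=40)

X = np.stack([curvelet_features(im) for im in images])
X /= X.max()                       # keep inputs in [0, 1] for the sigmoid autoencoder

autoencoder, encoder = build_sae(X.shape[1])
autoencoder.fit(X, X, epochs=20, batch_size=8, verbose=0)   # unsupervised reconstruction

Z = encoder.predict(X, verbose=0)  # reduced curvelet-SAE features
clf = SVC().fit(Z, labels)         # final recognition step on the reduced features
print("training accuracy:", clf.score(Z, labels))
```

On real data the random arrays would be replaced with ORL or FERET images and the proxy transform with an actual curvelet implementation; the rest of the flow stays the same.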
Authors: ZHANG Zhi-yu; LIU Si-yuan (School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China)
Source: Computer Science (《计算机科学》), CSCD, Peking University core journal, 2018, Issue 10, pp. 267-271, 305 (6 pages)
Funding: Supported by the Major Program of the National Natural Science Foundation of China (41390454)
Keywords: deep learning; face recognition; second-generation discrete curvelet transform; stacked autoencoder; dimension reduction

References: 5

Secondary references: 108

Co-citing documents: 82

Co-cited documents: 38

Citing documents: 4

Secondary citing documents: 20
