
Manifold Learning with a Novel Continuous Autoencoder Network
Abstract  The main goal of manifold learning is to discover the low-dimensional manifold structure embedded in a high-dimensional observation space. Building on previous work that uses neural networks for nonlinear dimensionality reduction, this paper proposes a novel Continuous Autoencoder (C-Autoencoder) network. The method adopts the Continuous Restricted Boltzmann Machine (CRBM) as its building block: by training a bidirectional deep neural network with multiple hidden layers, it converts high-dimensional continuous data into low-dimensional embeddings and, conversely, reconstructs the high-dimensional continuous data from them. In particular, the C-Autoencoder provides a bidirectional mapping between the high-dimensional continuous data space and the low-dimensional embedding, which not only overcomes the lack of an inverse mapping inherent in most nonlinear dimensionality reduction methods, but also makes it especially suitable for the dimensionality reduction and reconstruction of high-dimensional continuous data. Experiments on synthetic continuous datasets show that the C-Autoencoder not only discovers the nonlinear manifold structure embedded in high-dimensional continuous data, but also effectively recovers the original high-dimensional continuous data from the low-dimensional embedding.
Source: Computer Engineering and Applications (《计算机工程与应用》, CSCD, Peking University Core Journal), 2009, No. 30, pp. 154-156, 223 (4 pages)
Funding: Sub-project of the National Key Technologies R&D Program of China (No. 2004BA111B01)
Keywords: Continuous Autoencoder (C-Autoencoder); high-dimensional data; dimensionality reduction; reconstruction
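To make the CRBM building block named in the abstract more concrete, the following is a minimal NumPy sketch of a single continuous restricted Boltzmann machine layer trained with one contrastive-divergence (CD-1) step. It is an illustrative sketch under stated assumptions, not the authors' implementation: the bounded sigmoid asymptotes, the noise scale sigma, the fixed slope parameters a_h and a_v, and the learning rate are all assumed values, and a full CRBM would also adapt the slope parameters during training.

```python
# Minimal CRBM layer with one CD-1 update (illustrative sketch, not the paper's code).
# Hyperparameters below (theta_lo/theta_hi, sigma, lr, layer sizes) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def phi(x, theta_lo=-1.0, theta_hi=1.0):
    """Sigmoid bounded between the lower and upper asymptotes of a continuous unit."""
    return theta_lo + (theta_hi - theta_lo) / (1.0 + np.exp(-x))

def sample(units_in, W, a, sigma=0.2):
    """Continuous stochastic activation: weighted input plus Gaussian noise,
    passed through the bounded sigmoid; 'a' sets the per-unit slope."""
    pre = units_in @ W + sigma * rng.standard_normal((units_in.shape[0], W.shape[1]))
    return phi(a * pre)

def cd1_step(v0, W, a_h, a_v, lr=0.01):
    """One contrastive-divergence (CD-1) weight update for a CRBM layer."""
    h0 = sample(v0, W, a_h)      # up-pass from the data
    v1 = sample(h0, W.T, a_v)    # reconstruction of the visible units
    h1 = sample(v1, W, a_h)      # up-pass from the reconstruction
    # <v h>_data - <v h>_reconstruction, averaged over the batch
    grad = (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
    W += lr * grad
    return W, np.mean((v0 - v1) ** 2)

# Tiny usage example on synthetic continuous data
n_visible, n_hidden = 10, 3
W = 0.1 * rng.standard_normal((n_visible, n_hidden))
a_h, a_v = np.ones(n_hidden), np.ones(n_visible)
data = rng.uniform(-1.0, 1.0, size=(64, n_visible))
for epoch in range(50):
    W, recon_err = cd1_step(data, W, a_h, a_v)
```

In the full C-Autoencoder described in the abstract, several such layers would presumably be stacked and pretrained layer by layer, then unrolled into a bidirectional encoder-decoder network and fine-tuned to minimise reconstruction error; those steps are beyond this sketch.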

