
A Dynamically Incremental Manifold Learning Algorithm (cited by 13)
Abstract: The main goal of manifold learning is to find a smooth low-dimensional manifold embedded in a high-dimensional data space. Manifold learning has become a research hotspot in machine learning and data mining. To extract valuable information from high-dimensional data streams and large-scale data sets, it is urgently necessary to discover the intrinsic low-dimensional manifold structure incrementally. However, existing manifold learning algorithms have no incremental ability and cannot process massive data sets effectively. To address these problems, this paper first gives a systematic definition of incremental manifold learning, which helps to explain the dynamic process by which a stable perception manifold forms in the human brain, and to guide the design of manifold learning algorithms that fit the brain's incremental learning mechanism. Following these guiding principles, a dynamically incremental manifold learning algorithm is then proposed, which can effectively process both growing data sets and massive data sets sampled from the same manifold. The method finds the global low-dimensional manifold by integrating the low-dimensional coordinates of different neighborhood observation data sets. Experimental results on both the synthetic Swiss-roll data set and a real face data set show that the algorithm is feasible.
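The proposed algorithm builds on locally linear embedding (LLE), one of the keyword techniques above. The paper's own incremental implementation is not reproduced here; the following is only a minimal sketch of batch LLE applied to a synthetic Swiss-roll sample, the abstract's benchmark. The `lle` function, its parameter values, and the sampling constants are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lle(X, n_neighbors=12, n_components=2, reg=1e-3):
    """Batch locally linear embedding (Roweis & Saul, 2000) -- a sketch."""
    n = X.shape[0]
    # pairwise squared distances; exclude self before picking neighbors
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    # step 1: reconstruction weights with a sum-to-one constraint
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                         # center neighbors on x_i
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize local Gram matrix
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    # step 2: bottom eigenvectors of (I-W)^T (I-W); skip the trivial constant one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]

# toy Swiss-roll sample (300 points)
rng = np.random.default_rng(0)
t = 1.5 * np.pi * (1 + 2 * rng.random(300))
X = np.column_stack([t * np.cos(t), 21 * rng.random(300), t * np.sin(t)])
Y = lle(X)
print(Y.shape)  # (300, 2): each 3-D point mapped to 2-D coordinates
```

An incremental variant, as the abstract describes, would avoid recomputing this eigendecomposition from scratch as new samples arrive.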
Source: Journal of Computer Research and Development (《计算机研究与发展》), indexed in EI / CSCD / Peking University Core, 2007, No. 9, pp. 1462-1468 (7 pages).
Funding: National Natural Science Foundation of China (60373029); Doctoral Program Foundation of the Ministry of Education of China (20050004001).
Keywords: manifold learning; perception manifold; low-dimensional manifold; locally linear embedding (LLE); incremental manifold learning; visualization
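The abstract states that the global manifold is recovered by integrating the low-dimensional coordinates of different neighborhood observation data sets. One standard way to merge two overlapping coordinate patches is an orthogonal Procrustes alignment on their shared points; the sketch below is an assumption for illustration, and the paper's actual integration step may differ.

```python
import numpy as np

def align(Y_ref, Y_new, overlap_ref, overlap_new):
    """Map patch Y_new into Y_ref's frame using their shared (overlap) points,
    via an orthogonal Procrustes fit plus a translation. Illustrative only."""
    A = Y_new[overlap_new]
    B = Y_ref[overlap_ref]
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    # best orthogonal map R minimizing ||(A - ca) R - (B - cb)||_F
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = U @ Vt
    return (Y_new - ca) @ R + cb

# sanity check: a rotated + shifted copy of a patch aligns back exactly
rng = np.random.default_rng(1)
Y_ref = rng.standard_normal((10, 2))
th = 0.7
Q = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
Y_new = Y_ref @ Q + np.array([3.0, -2.0])
idx = np.arange(10)
aligned = align(Y_ref, Y_new, idx, idx)
```

Chaining such alignments over successive neighborhood patches yields one consistent global coordinate system, which is the flavor of integration the abstract describes.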