
Evaluation model based on neighborhood preservation for manifold learning algorithms (cited by 2)
Abstract: The stress function and the residual variance are suited only to evaluating manifold learning algorithms that strictly preserve distances, and the dy-dx representation is merely a qualitative model. Although the variance of distance ratios can compare and evaluate most manifold learning algorithms, it requires computing geodesic distances and therefore has high computational complexity. To address this, a quantitative evaluation model for manifold learning algorithms based on neighborhood preservation was proposed. The model only needs to determine the k nearest neighbors of each object in the two spaces and to compute how well each point's neighborhood is preserved in the low-dimensional space; no geodesic distances need to be computed. Theoretical analysis shows that the computational complexity of the neighborhood-preservation model is far lower than that of the variance of distance ratios. The performance of the two models was compared on three data sets. The experimental results demonstrate that the neighborhood-preservation model can not only evaluate the embeddings produced by the same algorithm under different neighborhood parameters, but also compare different manifold learning algorithms, and its performance in evaluating manifold learning algorithms is superior to that of the variance of distance ratios.
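The measure described in the abstract can be sketched as follows: for each point, find its k nearest neighbors in the original high-dimensional space and in the low-dimensional embedding, then average the fraction of neighbors that survive the embedding. This is a minimal illustrative sketch based only on the abstract; the function names and the exact aggregation are assumptions, not the paper's implementation.

```python
import numpy as np

def knn_indices(data, k):
    """Brute-force k-nearest-neighbor indices for each row (excluding the point itself)."""
    dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)          # a point is never its own neighbor
    return np.argsort(dist, axis=1)[:, :k]  # indices of the k closest points

def neighborhood_preservation(X, Y, k=5):
    """Mean fraction of each point's k-NN set in X that is preserved in the embedding Y.

    Returns a score in [0, 1]; 1.0 means every neighborhood is perfectly preserved.
    """
    nn_high = knn_indices(X, k)  # neighborhoods in the original space
    nn_low = knn_indices(Y, k)   # neighborhoods in the embedding
    overlaps = [len(set(h) & set(l)) / k for h, l in zip(nn_high, nn_low)]
    return float(np.mean(overlaps))
```

Note that, as the abstract argues, this needs only k-nearest-neighbor searches in the two spaces, avoiding the geodesic-distance computation that dominates the cost of the variance of distance ratios.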
Source: Journal of Computer Applications (《计算机应用》, CSCD, Peking University core journal), 2012, No. 9: 2516-2519 (4 pages)
Funding: Tianjin Municipal Applied Basic and Frontier Technology Research Program (10JCZDJC16000)
Keywords: manifold learning; stress function; residual variance; variance of distance ratios; dy-dx representation
