

Label Distribution Learning Method Based on Low-Rank Representation
Abstract: Existing label distribution learning algorithms ignore label correlations as well as noise and corruption in the data. To address this problem, a label distribution learning method based on low-rank representation (LDL-LRR) is proposed. The basis of the feature space is used to represent the sample information, thereby achieving dimensionality reduction of the data in the original feature space. To capture the global structure of the data, low-rank representation (LRR) is transferred to the label space, imposing a low-rank constraint on the model. The augmented Lagrange multiplier method and the quasi-Newton method are employed to solve the LRR problem and the objective function, respectively. Finally, the label distribution is predicted by a maximum entropy model. Comparative experiments on 10 datasets show that LDL-LRR achieves good performance and stable results.
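The abstract names two computational building blocks: a low-rank representation problem solved with the augmented Lagrange multiplier method, and a maximum entropy predictor fitted with a quasi-Newton method. The sketch below illustrates both steps, assuming the standard inexact-ALM formulation of LRR, min ||Z||_* + λ||E||_{2,1} s.t. X = XZ + E, and an L-BFGS-fitted softmax model; all function names, hyper-parameters, and the NumPy/SciPy library choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two pieces described in the abstract (assumed details,
# not the paper's released code): inexact-ALM LRR + max-entropy LDL predictor.
import numpy as np
from scipy.optimize import minimize


def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt


def col_shrink(Q, tau):
    """Column-wise shrinkage: proximal operator of the l2,1 norm."""
    norms = np.linalg.norm(Q, axis=0, keepdims=True)
    return Q * (np.maximum(norms - tau, 0.0) / (norms + 1e-12))


def lrr_inexact_alm(X, lam=0.1, mu=1e-2, rho=1.5, mu_max=1e6, n_iter=300):
    """Solve min ||Z||_* + lam*||E||_{2,1}  s.t.  X = X Z + E  (inexact ALM)."""
    d, n = X.shape
    Z = np.zeros((n, n)); J = np.zeros((n, n))
    E = np.zeros((d, n)); Y1 = np.zeros((d, n)); Y2 = np.zeros((n, n))
    XtX = X.T @ X
    for _ in range(n_iter):
        J = svt(Z + Y2 / mu, 1.0 / mu)                 # nuclear-norm step
        Z = np.linalg.solve(np.eye(n) + XtX,           # closed-form Z update
                            XtX - X.T @ E + J + (X.T @ Y1 - Y2) / mu)
        E = col_shrink(X - X @ Z + Y1 / mu, lam / mu)  # sample-wise corruption
        Y1 += mu * (X - X @ Z - E)                     # dual (multiplier) updates
        Y2 += mu * (Z - J)
        mu = min(rho * mu, mu_max)
    return Z, E


def fit_max_entropy(F, D, reg=1e-3):
    """Fit p(y_j|f) ∝ exp(theta_j^T f) to label distributions D via KL divergence."""
    n, q = F.shape[0], D.shape[1]

    def loss(theta_flat):
        Theta = theta_flat.reshape(F.shape[1], q)
        logits = F @ Theta
        logits = logits - logits.max(axis=1, keepdims=True)   # stability
        logZ = np.log(np.exp(logits).sum(axis=1, keepdims=True))
        kl = -(D * (logits - logZ)).sum() / n                  # cross-entropy term
        return kl + 0.5 * reg * (Theta ** 2).sum()

    # Quasi-Newton (L-BFGS) fit; gradient left to finite differences for brevity.
    res = minimize(loss, np.zeros(F.shape[1] * q), method="L-BFGS-B")
    return res.x.reshape(F.shape[1], q)
```

As a usage sketch, the coefficient matrix Z returned by lrr_inexact_alm can serve as the low-dimensional representation fed to fit_max_entropy together with the training label distributions D. This is only one plausible wiring of the two pieces; the paper's actual objective couples the low-rank term and the prediction loss in a single optimization.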
Authors: LIU Ruixin (刘睿馨), LIU Xinyuan (刘新媛), LI Chen (李晨) (School of Software Engineering, Xi'an Jiaotong University, Xi'an 710049)
Source: Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》), 2021, No. 2, pp. 146-156 (11 pages). Indexed in EI, CSCD, and Peking University Core (北大核心).
Funding: Supported by the National Natural Science Foundation of China (No. 61573273).
Keywords: Label Ambiguity, Single-Label Learning (SLL), Multi-Label Learning (MLL), Label Distribution Learning (LDL), Low-Rank Representation (LRR)
