
Huber Loss Based Nonnegative Matrix Factorization Algorithm (cited by: 4)
Abstract: The nonnegative matrix factorization (NMF) algorithm finds a nonnegative, linear matrix representation of the original data while preserving its essential characteristics, and has been successfully applied in many fields. The classical NMF algorithm and most of its variants measure the reconstruction error with the mean squared error (MSE) function. This has proven effective in many tasks, but it still faces difficulties when the data contain noise. The Huber loss function applies the same penalty as MSE to small residuals, while its penalty for large residuals grows only linearly, so it is more robust than the MSE loss. Prior work has also shown that an L_(2,1)-norm sparse regularization term performs feature selection in machine-learning classification and clustering models. Combining the advantages of the two, this paper proposes an NMF clustering model based on the Huber loss function with an L_(2,1)-norm regularization term, and gives an optimization procedure based on projected-gradient update rules. Experiments comparing the proposed algorithm with several classical clustering algorithms on multiple datasets verify its effectiveness.
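The robustness property the abstract describes (an MSE-like quadratic penalty on small residuals, a linearly growing penalty on large ones) can be sketched in a few lines of NumPy. This is a minimal illustration of the Huber loss itself, not the paper's factorization model: the threshold `delta` and the example residuals are illustrative assumptions.

```python
import numpy as np

def huber(r, delta=1.0):
    """Element-wise Huber loss.

    Quadratic (0.5 * r^2) for |r| <= delta, matching the MSE penalty on
    small residuals; linear (delta * (|r| - 0.5 * delta)) beyond delta,
    so large residuals (e.g. noise or outliers) are penalized far less
    than under squared error.
    """
    small = np.abs(r) <= delta
    return np.where(small,
                    0.5 * r ** 2,
                    delta * (np.abs(r) - 0.5 * delta))

# A residual vector with one outlier: Huber grows linearly on the
# outlier, while squared error grows quadratically.
r = np.array([0.1, -0.3, 5.0])
print(huber(r))       # [0.005, 0.045, 4.5]
print(0.5 * r ** 2)   # [0.005, 0.045, 12.5]
```

The same element-wise function can serve as the reconstruction-error term of an NMF objective, with the residual matrix R = X - WH in place of the vector above.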
Authors: WANG Li-xing (王丽星); CAO Fu-yuan (曹付元) (School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China; Key Laboratory of Computational Intelligence and Chinese Information Processing (Shanxi University), Ministry of Education, Taiyuan 030006, China)
Source: Computer Science (《计算机科学》, CSCD, Peking University Core), 2020, No. 11, pp. 80-87
Funding: National Natural Science Foundation of China (61573229, 61976128); Shanxi Province Key R&D Program (201803D31022); Shanxi Scholarship Council of China project (2016-003); Shanxi Scholarship Council of China merit-based project (2016-001).
Keywords: nonnegative matrix factorization; Huber loss function; L2,1 norm; projected gradient method

