Journal Article

Orthogonal nonnegative learning for sparse feature extraction and approximate combinatorial optimization

Abstract: Nonnegativity has been shown to be a powerful principle in linear matrix decompositions, leading to sparse component matrices in feature analysis and data compression. The classical method is Lee and Seung's Nonnegative Matrix Factorization. A standard way to form learning rules is by multiplicative updates, which maintain nonnegativity. Here, a generic principle is presented for forming multiplicative update rules that integrate an orthonormality constraint into nonnegative learning. The principle, called Orthogonal Nonnegative Learning (ONL), is rigorously derived from the Lagrangian technique. As examples, the proposed method is applied to transform Nonnegative Matrix Factorization (NMF) and its variant, Projective Nonnegative Matrix Factorization (PNMF), into their orthogonal versions. In general, it is well known that orthogonal nonnegative learning can give very useful approximate solutions for problems involving non-vectorial data, for example, binary solutions. Combinatorial optimization is replaced by continuous-space gradient optimization, which is often computationally lighter. It is shown how the multiplicative update rules obtained with the proposed ONL principle can find a nonnegative and highly orthogonal matrix for an approximated graph partitioning problem. The empirical results on various graphs indicate that the proposed nonnegative learning algorithms not only outperform those without the orthogonality condition, but also surpass other existing partitioning approaches.
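As a concrete illustration of the multiplicative-update machinery the abstract refers to, the sketch below implements Lee and Seung's classical Euclidean-loss NMF updates in NumPy, with an optional orthogonality-constrained update for W in the style of Ding et al.'s orthogonal NMF. The function name, parameters, and the specific orthogonal rule are illustrative assumptions for this sketch, not the paper's exact ONL derivation.

```python
import numpy as np

def nmf_multiplicative(V, k, n_iter=500, orthogonal=False, eps=1e-9, seed=0):
    """Factor a nonnegative matrix V (m x n) as V ~ W @ H with W, H >= 0.

    orthogonal=False: standard Lee-Seung multiplicative updates.
    orthogonal=True:  the W update is replaced by a Ding-style rule that
                      keeps W nonnegative while pushing W.T @ W toward
                      the identity (a stand-in for the ONL rule, which
                      the paper derives via Lagrangian multipliers).
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Standard multiplicative update for H (Euclidean loss);
        # the ratio of nonnegative terms preserves nonnegativity.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        if orthogonal:
            # Orthogonality-constrained multiplicative update for W.
            W *= (V @ H.T) / (W @ H @ V.T @ W + eps)
        else:
            # Standard multiplicative update for W.
            W *= (V @ H.T) / (W @ (H @ H.T) + eps)
    return W, H

# Usage: factor a random nonnegative matrix and check orthogonality.
V = np.abs(np.random.default_rng(1).normal(size=(100, 60)))
W, H = nmf_multiplicative(V, k=5, orthogonal=True)
print(np.linalg.norm(W.T @ W - np.eye(5)))  # orthogonality residual
```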
Source: Frontiers of Electrical and Electronic Engineering in China (CSCD), 2010, No. 3, pp. 261-273 (13 pages).
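On the combinatorial side, once a nonnegative and highly orthogonal matrix has been found for the relaxed graph partitioning problem, a hard partition can be read off by assigning each node to the column with the largest entry. The helpers below are a hypothetical sketch of that rounding step and of one common way to measure how close the relaxed solution is to a true indicator matrix; neither function comes from the paper.

```python
import numpy as np

def round_to_partition(W):
    """Round a nonnegative, nearly column-orthogonal matrix
    W (n_nodes x n_parts) to a hard partition: node i is assigned
    to the part whose column has the largest entry in row i."""
    return np.argmax(W, axis=1)

def orthogonality_residual(W):
    """Frobenius norm ||Wn.T @ Wn - I|| after normalizing columns.
    Small values indicate the relaxed solution is close to a
    (scaled) partition indicator matrix."""
    Wn = W / (np.linalg.norm(W, axis=0, keepdims=True) + 1e-12)
    return np.linalg.norm(Wn.T @ Wn - np.eye(W.shape[1]))
```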