Abstract
To address the redundancy and computational complexity caused by an overly large dictionary in sparse coding, an M-nearest-neighbor discriminative low-rank dictionary learning (MLR) algorithm is proposed. First, low-rank representation is introduced to remove noise from the dictionary, so that atoms of the same class become more strongly linearly correlated; this makes the dictionary more compact and pure and improves its quality. Then the K-Singular Value Decomposition (K-SVD) algorithm is used to update the dictionary, preserving its representation performance and yielding the optimal sparse solution. In classification, the M-nearest-neighbor idea is incorporated to select dictionary atoms whose energy is close to that of the test sample, which strengthens clustering ability and improves classification accuracy. Experimental results on the Extended Yale B and AR face databases show that the proposed method achieves better classification performance with a smaller dictionary and outperforms the compared algorithms.
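The abstract outlines a three-step pipeline: a low-rank step that cleans the dictionary, a K-SVD update that preserves its representation power, and an M-nearest-neighbor rule applied to the sparse codes at classification time. Below is a minimal, simplified sketch of such a pipeline under stated assumptions, not the authors' implementation: the helper names (`svt`, `ksvd_step`, `classify_m_nn`), the plain singular-value thresholding used as a stand-in for the low-rank representation step, the single K-SVD pass, and the energy-based atom voting rule are all illustrative.

```python
# Minimal sketch of a low-rank + K-SVD + M-nearest-neighbor pipeline.
# All names and parameter choices are illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.linear_model import orthogonal_mp


def svt(D, tau):
    """Singular value thresholding: a crude low-rank 'cleaning' of the dictionary D."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt


def ksvd_step(Y, D, n_nonzero=5):
    """One simplified K-SVD pass: sparse-code Y over D, then update each atom in turn."""
    X = orthogonal_mp(D, Y, n_nonzero_coefs=n_nonzero)
    for k in range(D.shape[1]):
        used = np.flatnonzero(X[k, :])            # signals that actually use atom k
        if used.size == 0:
            continue
        # Residual with atom k's contribution added back, then rank-1 refit.
        E = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], X[k, used])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]
        X[k, used] = s[0] * Vt[0, :]
    return D, X


def classify_m_nn(y, D, atom_labels, M=5, n_nonzero=5):
    """Sparse-code y, keep the M atoms with the largest contribution energy, and vote."""
    x = orthogonal_mp(D, y, n_nonzero_coefs=n_nonzero)
    energy = np.abs(x) * np.linalg.norm(D, axis=0)    # per-atom contribution to y
    top = np.argsort(energy)[::-1][:M]
    votes = atom_labels[top]
    return np.bincount(votes).argmax()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.standard_normal((64, 200))                # toy training signals
    D = rng.standard_normal((64, 40))
    D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
    atom_labels = np.repeat(np.arange(4), 10)         # 4 classes, 10 atoms each
    D = svt(D, tau=0.1)                               # low-rank "cleaning" step
    D /= np.linalg.norm(D, axis=0)
    D, _ = ksvd_step(Y, D)                            # dictionary update
    print(classify_m_nn(Y[:, 0], D, atom_labels))     # predicted class index
```

In practice the low-rank step in the paper is formulated as a low-rank representation problem rather than direct singular-value thresholding, and the K-SVD update would be iterated to convergence; this sketch only illustrates how the three stages fit together.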
Source
Journal of Computer Applications (《计算机应用》)
CSCD
Peking University Core Journal (北大核心)
2015, Issue A01, pp. 93-97 (5 pages)
Funding
Guangdong Provincial Science and Technology Plan Project (2011B010200045)
Shenzhen Key Laboratory Upgrading Project (CXB201105060068A)
Keywords
sparse coding
discriminative low-rank dictionary learning
low-rank representation
M nearest neighbor
dictionary quality