Funding: HKRGC GRF 12306616, 12200317, 12300218 and 12300519, and HKU Grant 104005583.
Abstract: Tensor robust principal component analysis has received substantial attention in various fields. Most existing methods rely on tensor nuclear norm minimization and incur a high computational cost, since they require multiple singular value decompositions at each iteration. To overcome this drawback, we propose a scalable and efficient method, named parallel active subspace decomposition, which factors the unfolding along each mode of the tensor, in parallel, into a column-wise orthonormal matrix (the active subspace) and a second, small matrix. This transformation leads to a nonconvex optimization problem in which the scale of the nuclear norm minimization is generally much smaller than in the original problem. We solve the optimization problem by an alternating direction method of multipliers and show that the iterates converge within the given stopping criterion and that the convergent solution is close to the globally optimal solution within a prescribed bound. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods.
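The computational saving described above rests on a simple identity: if Q has orthonormal columns, then ||QA||_* = ||A||_*, so singular value thresholding only ever needs to touch the small factor A. The following is a minimal NumPy sketch of that idea for a single mode, not the paper's ADMM algorithm; the helper names (unfold, fold, svt, active_subspace_svt) are ours, and a randomized range finder stands in for the paper's iterative active-subspace update.

```python
import numpy as np

rng = np.random.default_rng(0)

def unfold(T, mode):
    """Mode-k unfolding: mode-k fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(X, mode, shape):
    """Inverse of unfold for a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(X.reshape([shape[mode]] + rest), 0, mode)

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def active_subspace_svt(T, mode, rank, tau):
    """SVT on the mode-k unfolding, routed through a small active subspace.

    Factor X ~= Q @ A with Q (m x r) column-orthonormal and A (r x n) small.
    Since ||Q @ A||_* = ||A||_*, thresholding A is equivalent to thresholding
    X within the subspace, and the SVD inside svt() costs O(r^2 n) instead of
    O(m^2 n) -- the source of the speed-up when r << m.
    """
    X = unfold(T, mode)
    G = rng.standard_normal((X.shape[1], rank))
    Q, _ = np.linalg.qr(X @ G)   # cheap column-orthonormal range estimate
    A = Q.T @ X                  # small r x n coefficient matrix
    return fold(Q @ svt(A, tau), mode, T.shape)

# Toy check: a tensor of mode-0 rank 3 is recovered up to shrinkage.
U0 = rng.standard_normal((30, 3))
V0 = rng.standard_normal((3, 40 * 50))
T = fold(U0 @ V0, 0, (30, 40, 50))
L = active_subspace_svt(T, mode=0, rank=3, tau=0.1)
print(np.linalg.norm(L - T) / np.linalg.norm(T))   # small relative residual
```

In the paper's method the analogous factorization is formed for every mode unfolding in parallel and ADMM ties the mode-wise estimates together; the sketch above only shows why the per-iteration SVDs become cheap.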
Abstract: Dictionary learning is an active research topic in sparse signal decomposition. In dictionary learning for sparse decomposition, the choice of the initial dictionary affects the quality of the learned dictionary. To reduce the influence of the initial dictionary on the learned one, the concept of a forgetting factor is introduced into the recursive least squares (RLS) dictionary learning method. The dictionary-learning performance of three methods, the method of optimal directions (MOD), K-SVD, and RLS, is compared; the effect of different forgetting factors in RLS dictionary learning is analyzed, as is the learning performance when the forgetting factor is taken to be different functions. Simulation results show that RLS dictionary learning reduces the influence of the initial dictionary on the learned result and therefore learns better dictionaries, and that the choice of forgetting factor in RLS dictionary learning affects the learning performance.
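For concreteness, below is a minimal NumPy sketch of one RLS dictionary-learning update with a forgetting factor, following the standard RLS-DLA recursions; the simple thresholding-based sparse coder and the names sparse_code and rls_dla_step are illustrative stand-ins, not the exact implementation compared in the paper. A forgetting factor lam < 1 geometrically down-weights old training vectors, which is what loosens the grip of the initial dictionary.

```python
import numpy as np

def sparse_code(D, x, k):
    """Toy sparse coder: keep the k atoms most correlated with x and
    solve least squares on that support (a crude stand-in for OMP)."""
    idx = np.argsort(-np.abs(D.T @ x))[:k]
    w = np.zeros(D.shape[1])
    w[idx] = np.linalg.lstsq(D[:, idx], x, rcond=None)[0]
    return w

def rls_dla_step(D, C, x, k, lam):
    """One RLS-DLA update; lam in (0, 1] is the forgetting factor.

    C tracks a lam-weighted inverse correlation matrix of the sparse
    codes, so each new signal costs only a rank-one correction.
    """
    w = sparse_code(D, x, k)
    r = x - D @ w                           # representation error
    u = C @ w
    alpha = 1.0 / (lam + w @ u)
    D = D + alpha * np.outer(r, u)          # rank-one dictionary update
    C = (C - alpha * np.outer(u, u)) / lam  # forgetting rescales history
    return D, C

# Toy run on synthetic signals (real training vectors would replace x).
rng = np.random.default_rng(0)
m, n, k = 16, 32, 3
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)   # normalized initial dictionary
C = 100.0 * np.eye(n)            # large initial C: weak trust in the start
lam = 0.99                       # fixed forgetting factor
for _ in range(500):
    x = rng.standard_normal(m)
    D, C = rls_dla_step(D, C, x, k, lam)
```

In practice the atoms are usually re-normalized after each update, and the abstract's point about the forgetting factor as a function corresponds to replacing the fixed lam with a schedule that grows toward 1, so early updates forget the initialization aggressively while later ones settle down.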