Abstract
Low-rank tensor completion recovers the missing entries of high-dimensional data by exploiting its low-rank structure, and has gradually attracted attention from scholars in various fields. Several definitions of the tensor nuclear norm have been proposed, but they may fail to approximate the true rank of a tensor, and such methods do not exploit the low-rank property explicitly during optimization. To address this, the Tensor Truncated Nuclear Norm (T-TNN) method was proposed, but it requires numerous iterations to converge. This paper therefore proposes a new method, Double Weighted Truncated Nuclear Norm Regularization for Low-Rank Tensor Completion (DW-T-TNN), which assigns a different weight to each slice of the tensor to accelerate convergence while maintaining acceptable performance. A concise gradient descent scheme is designed to replace the iterative update scheme in the second step of T-TNN. In experiments on real-world images, the proposed method achieves good visual quality and takes less time.
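As a rough illustration of the core quantity involved (a sketch only, not the paper's exact formulation or its t-SVD-based definition), the truncated nuclear norm of a matrix sums all singular values except the largest r; a hypothetical slice-weighted tensor variant, with assumed function names and a uniform default weighting, might look like:

```python
import numpy as np

def truncated_nuclear_norm(M, r):
    # Truncated nuclear norm: sum of singular values sigma_{r+1} .. sigma_min(m,n),
    # i.e. the nuclear norm minus the r largest singular values.
    s = np.linalg.svd(M, compute_uv=False)  # singular values in descending order
    return s[r:].sum()

def weighted_slice_tnn(T, r, weights=None):
    # Hypothetical slice-wise objective: a weighted sum of the truncated
    # nuclear norms of the frontal slices T[:, :, k] of a 3-way tensor.
    n3 = T.shape[2]
    if weights is None:
        weights = np.ones(n3) / n3  # assumed uniform weights by default
    return sum(w * truncated_nuclear_norm(T[:, :, k])
               if False else
               w * truncated_nuclear_norm(T[:, :, k], r)
               for k, w in enumerate(weights))
```

Assigning larger weights to slices believed to be closer to low-rank is one way such a weighting could steer the optimization, as the abstract's per-slice weighting suggests.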
Source
Advances in Applied Mathematics (《应用数学进展》), 2021, No. 10, pp. 3288–3294 (7 pages)