Funding: Supported in part by the National Natural Science Foundation of China (61702475, 61772493, 61902370, 62002337), in part by the Natural Science Foundation of Chongqing, China (cstc2019jcyj-msxmX0578, cstc2019jcyjjqX0013), in part by the Chinese Academy of Sciences "Light of West China" Program, in part by the Pioneer Hundred Talents Program of the Chinese Academy of Sciences, and in part by the Technology Innovation and Application Development Project of Chongqing, China (cstc2019jscx-fxydX0027).
Abstract: High-dimensional and sparse (HiDS) matrices commonly arise in various industrial applications, e.g., recommender systems (RSs), social networks, and wireless sensor networks. Since they contain rich information, accurately representing them is of great significance. A latent factor (LF) model is one of the most popular and successful ways to address this issue. Current LF models mostly adopt an L2-norm-oriented loss to represent an HiDS matrix, i.e., they sum the errors between observed data and predicted ones under the L2-norm. However, the L2-norm is sensitive to outliers, and outlier data commonly exist in such matrices; for example, an HiDS matrix from an RS often contains many outlier ratings produced by heedless or malicious users. To address this issue, this work proposes a smooth-L1-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt the smooth L1-norm rather than the L2-norm to form its loss, giving it both strong robustness and high accuracy in predicting the missing data of an HiDS matrix. Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model is not only robust to outlier data but also achieves significantly higher prediction accuracy than state-of-the-art models when predicting the missing data of HiDS matrices.
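The abstract does not give the exact smoothing used by SL-LF, so the sketch below uses the common Huber-style definition of a smooth L1 loss inside a plain SGD latent factor update on the observed entries of an HiDS matrix. The function names, the L2 regularization term, and all hyperparameters (`delta`, `lr`, `reg`, `rank`) are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def smooth_l1(err, delta=1.0):
    """Smooth L1 (Huber-style) loss: quadratic near zero, linear for large errors."""
    a = np.abs(err)
    return np.where(a <= delta, 0.5 * err ** 2, delta * (a - 0.5 * delta))

def smooth_l1_grad(err, delta=1.0):
    """Gradient of the smooth L1 loss; bounded by delta, so outliers cannot dominate an update."""
    return np.where(np.abs(err) <= delta, err, delta * np.sign(err))

def lf_sgd(observed, shape, rank=10, lr=0.01, reg=0.05, delta=1.0, epochs=50, seed=0):
    """Fit row/column latent factors to the observed (row, col, value) triples of an HiDS matrix."""
    rng = np.random.default_rng(seed)
    m, n = shape
    P = 0.1 * rng.standard_normal((m, rank))   # row (e.g., user) latent factors
    Q = 0.1 * rng.standard_normal((n, rank))   # column (e.g., item) latent factors
    for _ in range(epochs):
        for u, i, r in observed:
            pu, qi = P[u].copy(), Q[i].copy()
            err = pu @ qi - r                  # prediction error on one observed entry
            g = smooth_l1_grad(err, delta)     # robust surrogate for the L2 gradient `err`
            P[u] -= lr * (g * qi + reg * pu)
            Q[i] -= lr * (g * pu + reg * qi)
    return P, Q

# A missing entry (u, i) is then predicted as P[u] @ Q[i].
```

Replacing `smooth_l1_grad(err, delta)` with `err` recovers the usual L2-norm-oriented update; the only change the smooth L1 loss introduces is to cap the influence of large, potentially outlier, residuals.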
Funding: Supported by the National Natural Science Foundation of China under grants U21A20455, 61972265, 11871348, and 11701388, by the Natural Science Foundation of Guangdong Province of China under grant 2020B1515310008, and by the Educational Commission of Guangdong Province of China under grant 2019KZDZX1007.
Abstract: In recent years, nuclear norm minimization (NNM), as a convex relaxation of rank minimization, has attracted great research interest. By assigning different weights to singular values, weighted nuclear norm minimization (WNNM) has been utilized in many applications. However, most work on WNNM is combined with the l2-data-fidelity term, which assumes additive Gaussian noise. In this paper, we introduce the L1-WNNM model, which incorporates the l1-data-fidelity term and the regularization from WNNM. We apply the alternating direction method of multipliers (ADMM) to solve the non-convex minimization problem in this model. We exploit the low-rank prior on patch matrices extracted based on image non-local self-similarity and apply the L1-WNNM model to these patch matrices to restore images corrupted by impulse noise. Numerical results show that our method can effectively remove impulse noise.
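As a rough illustration of how the two ingredients interact, the sketch below alternates a weighted singular value thresholding step (the proximal operator of the weighted nuclear norm, valid for non-descending weights) with element-wise soft-thresholding for the l1-data-fidelity term, using a simple ADMM splitting X + Z = Y on a single noisy patch matrix. The splitting, the weight rule w_i = C/(sigma_i + eps), and the parameter names are assumptions made for illustration; the paper's actual algorithm and its patch-grouping scheme are not reproduced here.

```python
import numpy as np

def weighted_svt(B, w):
    """Proximal step of the weighted nuclear norm (for non-descending weights):
    soft-threshold each singular value of B by its weight."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U * np.maximum(s - w, 0.0)) @ Vt

def soft_threshold(X, tau):
    """Element-wise soft-thresholding: proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def l1_wnnm_admm(Y, C=1.0, rho=1.0, eps=1e-6, iters=100):
    """ADMM sketch for  min_X ||Y - X||_1 + ||X||_{w,*}  on one noisy patch matrix Y,
    with the splitting X + Z = Y (Z models the sparse impulse-noise residual)."""
    X = Y.copy()
    Z = np.zeros_like(Y)        # residual variable carrying the impulse noise
    U = np.zeros_like(Y)        # scaled dual variable for the constraint X + Z = Y
    for _ in range(iters):
        # X-update: weighted nuclear norm prox; weights inversely proportional to singular values
        B = Y - Z - U
        s = np.linalg.svd(B, compute_uv=False)
        w = C / (s + eps)
        X = weighted_svt(B, w / rho)
        # Z-update: l1 prox keeps Z sparse, matching the l1-data-fidelity term
        Z = soft_threshold(Y - X - U, 1.0 / rho)
        # dual ascent on the splitting constraint
        U = U + (X + Z - Y)
    return X
```

In a full image-restoration pipeline, this patch-level step would be applied to groups of similar patches found by non-local search, and the restored patches aggregated back into the image; that outer loop is omitted here.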