Funding: Supported by the National Natural Science Foundation of China (No. 91546111, 91646201), the Key Project of Beijing Municipal Education Commission (No. KZ201610005009), and the General Project of Beijing Municipal Education Commission (No. KM201710005023).
Abstract: Collaborative filtering is the most popular and successful information recommendation technique. However, it can suffer from the data sparsity issue when a system lacks sufficient domain information. Transfer learning, which enables information to be transferred from source domains to a target domain, presents an unprecedented opportunity to alleviate this issue. A few recent works focus on transferring user-item rating information from a dense source domain to a sparse target domain, but almost all of these methods require each source-domain rating matrix from which knowledge is extracted to be complete. To address this issue, in this paper we propose a novel transfer learning model for cross-domain collaborative filtering that works with multiple incomplete source domains. The transfer learning process consists of two steps. First, the user-item rating information in the incomplete source domains is compressed into multiple informative, compact cluster-level matrices, referred to as codebooks. Second, we reconstruct the target matrix based on these codebooks. Specifically, to maximize knowledge transfer, we design a new algorithm that efficiently learns rating knowledge from multiple incomplete domains. Extensive experiments on real datasets demonstrate that the proposed approach significantly outperforms existing methods.
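The following is a minimal, illustrative sketch of the general codebook-transfer idea described in the abstract above, not the authors' exact multi-domain algorithm. It assumes numpy and scikit-learn; the cluster counts, the naive mean-fill used for clustering incomplete source matrices, and the alternating assignment loop are all hypothetical choices for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(ratings, mask, n_user_clusters=5, n_item_clusters=5):
    # Compress one (possibly incomplete) source rating matrix into a
    # cluster-level codebook: rows = user clusters, columns = item clusters.
    mean_rating = ratings[mask].mean()
    filled = np.where(mask, ratings, mean_rating)  # naive fill, used only for clustering
    u_labels = KMeans(n_user_clusters, n_init=10, random_state=0).fit_predict(filled)
    i_labels = KMeans(n_item_clusters, n_init=10, random_state=0).fit_predict(filled.T)
    codebook = np.zeros((n_user_clusters, n_item_clusters))
    for p in range(n_user_clusters):
        for q in range(n_item_clusters):
            block = ratings[np.ix_(u_labels == p, i_labels == q)]
            bmask = mask[np.ix_(u_labels == p, i_labels == q)]
            codebook[p, q] = block[bmask].mean() if bmask.any() else mean_rating
    return codebook

def reconstruct_target(target, mask, codebook, n_iter=20):
    # Fill missing target ratings by alternately assigning each target user and
    # item to the codebook cluster that best explains its observed ratings.
    n_users, n_items = target.shape
    ku, ki = codebook.shape
    rng = np.random.default_rng(0)
    u_assign = rng.integers(0, ku, n_users)
    i_assign = rng.integers(0, ki, n_items)
    for _ in range(n_iter):
        for u in range(n_users):
            errs = [np.sum(mask[u] * (target[u] - codebook[p, i_assign]) ** 2)
                    for p in range(ku)]
            u_assign[u] = int(np.argmin(errs))
        for i in range(n_items):
            errs = [np.sum(mask[:, i] * (target[:, i] - codebook[u_assign, q]) ** 2)
                    for q in range(ki)]
            i_assign[i] = int(np.argmin(errs))
    prediction = codebook[np.ix_(u_assign, i_assign)]  # expand cluster-level ratings back
    return np.where(mask, target, prediction)
```

In the multi-domain setting of the paper, one codebook would be built per incomplete source domain and their reconstructions combined for the target matrix; the specific combination scheme is where the proposed algorithm differs from this sketch.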
Funding: This work was supported by the Kyonggi University Research Grant 2022.
Abstract: Recommendation Information Systems (RIS) are pivotal in helping users swiftly locate desired content from the vast amount of information available on the Internet. Graph Convolution Network (GCN) algorithms have been employed to implement RIS efficiently. However, the GCN algorithm faces limitations in performance enhancement owing to the embedding value-vanishing problem that occurs during the learning process. To address this issue, we propose a Weighted Forwarding method using the GCN (WF-GCN) algorithm. The proposed method multiplies the embedding results by different weights for each hop layer during graph learning. By applying the WF-GCN algorithm, which adjusts the weights for each hop layer before forwarding to the next, nodes with many neighbors achieve higher embedding values. This approach facilitates the learning of more hop layers within the GCN framework. The efficacy of WF-GCN was demonstrated through its application to various datasets. On the MovieLens dataset, implementing WF-GCN in LightGCN resulted in significant performance improvements, with recall and NDCG increasing by up to +163.64% and +132.04%, respectively. Similarly, on the Last.FM dataset, LightGCN enhanced with WF-GCN showed substantial improvements, with the recall and NDCG metrics rising by up to +174.40% and +169.95%, respectively. Furthermore, applying WF-GCN to Self-supervised Graph Learning (SGL) and Simple Graph Contrastive Learning (SimGCL) also yielded notable enhancements in both recall and NDCG across these datasets.
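As a rough illustration of the weighted-forwarding idea described above, the snippet below sketches a LightGCN-style propagation loop in PyTorch in which each hop's output is scaled by a per-layer weight before being forwarded to the next hop. The weight values, the adjacency construction, and the layer-combination rule are assumptions for the example, not the authors' released implementation.

```python
import torch

def weighted_forward_propagation(A_hat, embeddings, hop_weights):
    # LightGCN-style propagation in which each hop's output is scaled by a
    # per-layer weight before being forwarded to the next hop.
    #   A_hat       : normalized adjacency matrix (sparse, shape N x N)
    #   embeddings  : initial node embeddings E^(0), shape N x d
    #   hop_weights : one scalar weight per hop layer (hypothetical values)
    layer_outputs = [embeddings]
    e = embeddings
    for w in hop_weights:
        e = torch.sparse.mm(A_hat, e)  # standard GCN neighborhood aggregation
        e = w * e                      # weighted forwarding before the next hop
        layer_outputs.append(e)
    # Combine per-hop outputs as in LightGCN (simple mean over layers).
    return torch.stack(layer_outputs, dim=0).mean(dim=0)

# Example with random data: 100 nodes, 16-dim embeddings, 3 hop layers.
idx = torch.randint(0, 100, (2, 500))
vals = torch.ones(500) / 10.0
A_hat = torch.sparse_coo_tensor(idx, vals, (100, 100)).coalesce()
E0 = torch.randn(100, 16)
final_emb = weighted_forward_propagation(A_hat, E0, hop_weights=[1.0, 0.8, 0.6])
```

Choosing larger weights for earlier hops (as in the hypothetical values above) is one way to keep deeper-layer embeddings from vanishing, which is the effect the abstract attributes to weighted forwarding.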