Abstract: Session-based recommendation aims to predict the next item based on a user's limited interactions within a short period. Existing approaches mainly use recurrent neural networks (RNNs) or graph neural networks (GNNs) to model the sequential patterns or the transition relationships between items. However, such models either ignore the over-smoothing issue of GNNs or directly use a cross-entropy loss with a softmax layer for model optimization, which easily leads to over-fitting. To tackle these issues, we propose a self-supervised graph learning with target-adaptive masking (SGL-TM) method. Specifically, we first construct a global graph based on all involved sessions and then capture self-supervised signals from the global connections between items, which helps supervise the model in generating accurate representations of items in the ongoing session. After that, we compute the main supervised loss by comparing the ground truth with the predicted item scores adjusted by our designed target-adaptive masking module. Finally, we combine the main supervised component with the auxiliary self-supervision module to obtain the final loss for optimizing the model parameters. Extensive experimental results on two benchmark datasets, Gowalla and Diginetica, indicate that SGL-TM outperforms state-of-the-art baselines in terms of Recall@20 and MRR@20, especially on short sessions.
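The final objective described above combines the masked supervised term with the auxiliary self-supervised term. The following is a minimal sketch of that combination in a PyTorch setting; the function name sgl_tm_loss, the boolean keep_mask standing in for the output of the target-adaptive masking step, and the weighting hyper-parameter beta are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sgl_tm_loss(scores, targets, ssl_loss, keep_mask, beta=0.1):
    # scores:    (batch, n_items) raw item scores from the session encoder
    # targets:   (batch,) indices of the ground-truth next items
    # ssl_loss:  scalar auxiliary self-supervised loss from the global graph
    # keep_mask: (batch, n_items) boolean mask assumed to come from the
    #            target-adaptive masking module; True means the item is kept
    # beta:      assumed hyper-parameter balancing the two loss terms
    masked_scores = scores.masked_fill(~keep_mask, float("-inf"))  # suppress masked items
    main_loss = F.cross_entropy(masked_scores, targets)            # softmax + cross-entropy
    return main_loss + beta * ssl_loss                             # combined training objective
```

In this sketch the mask is assumed to always keep the ground-truth item so that the cross-entropy term stays finite; how the mask is actually constructed is determined by the target-adaptive masking module itself, which the abstract does not specify.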