Abstract
Sequential recommendation predicts the item a user will interact with next from the user's historical behavior sequence. Many studies have shown that the predicted item depends on the historical sequence at multiple scales, i.e., on blocks of different sizes. Existing multi-scale methods are heuristic designs over an implicit representation space, from which no explicit hierarchy can be inferred. To address this, we propose a Dynamic Hierarchical Transformer that learns multi-scale implicit representations and an explicit hierarchical tree simultaneously. The model adopts a multi-layer structure: in a bottom-up manner, each layer uses a neighbor-block attention mechanism to decide which blocks to fuse and dynamically generates a block mask. In the resulting multi-scale hierarchy, the composition structure of each layer is inferred from its block mask matrix, and the implicit representation at each scale is obtained by combining the dynamic block mask with self-attention. On two public benchmark datasets, MovieLens-100k and Amazon Movies and TV, the proposed model improves prediction accuracy over the state-of-the-art baselines by 2.09% and 5.43%, respectively. A qualitative case study further shows that the learned multi-scale hierarchy agrees with intuition.
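The sketch below is a minimal illustration, not the authors' implementation, of the bottom-up layer described in the abstract: score adjacent blocks with a neighbor attention, fuse the highest-scoring neighbor pairs into coarser blocks (yielding a block mask), and run self-attention restricted by that mask. All names (`DynamicHierLayer`, `merge_ratio`, the scorer) are hypothetical placeholders.

```python
# Hypothetical sketch of one Dynamic Hierarchical Transformer layer (PyTorch).
import torch
import torch.nn as nn


class DynamicHierLayer(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 2, merge_ratio: float = 0.5):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(2 * d_model, 1)  # neighbor-block scorer
        self.merge_ratio = merge_ratio          # fraction of neighbor pairs to fuse

    def forward(self, x: torch.Tensor, block_id: torch.Tensor):
        # x:        (B, L, d) block representations at the current scale
        # block_id: (B, L) integer id of the block each position belongs to
        B, L, _ = x.shape

        # 1) Neighbor attention: score each pair of adjacent positions.
        pair = torch.cat([x[:, :-1, :], x[:, 1:, :]], dim=-1)   # (B, L-1, 2d)
        merge_logit = self.score(pair).squeeze(-1)              # (B, L-1)

        # 2) Fuse the top-k scoring neighbor pairs: the block containing
        #    position i+1 is merged into the block containing position i.
        k = max(1, int(self.merge_ratio * (L - 1)))
        topk = merge_logit.topk(k, dim=-1).indices              # (B, k)
        new_id = block_id.clone()
        for b in range(B):
            for i in topk[b].tolist():
                new_id[b, i + 1:] = torch.where(
                    new_id[b, i + 1:] == new_id[b, i + 1],
                    new_id[b, i],
                    new_id[b, i + 1:],
                )

        # 3) Block mask: attention is allowed only inside the same block.
        same_block = new_id.unsqueeze(2) == new_id.unsqueeze(1)  # (B, L, L)
        attn_mask = ~same_block                                  # True = masked
        attn_mask = attn_mask.repeat_interleave(self.attn.num_heads, dim=0)

        out, _ = self.attn(x, x, x, attn_mask=attn_mask)
        return out, new_id


if __name__ == "__main__":
    layer = DynamicHierLayer(d_model=16)
    x = torch.randn(1, 8, 16)           # one user sequence of 8 items
    ids = torch.arange(8).unsqueeze(0)  # each item starts as its own block
    out, new_ids = layer(x, ids)
    print(out.shape, new_ids)           # coarser block ids after one layer
```

Stacking several such layers yields progressively coarser block ids, from which an explicit hierarchy over the sequence can be read off, while the masked self-attention outputs serve as the multi-scale implicit representations.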
Authors
袁涛
牛树梓
李会元
YUAN Tao; NIU Shuzi; LI Huiyuan (University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Software, Chinese Academy of Sciences, Beijing 100190, China)
Source
《中文信息学报》
CSCD
PKU Core (北大核心)
2022, No. 1, pp. 117-126 (10 pages)
Journal of Chinese Information Processing
Funding
National Natural Science Foundation of China (62072447, 11871145).