
Knowledge graph embedding with adaptive sampling

Cited by: 2
Abstract  Because knowledge graph data are imbalanced across relation categories and training instances differ in difficulty, random sampling of training data can prevent the embedding model from converging quickly. To address this, an adaptive method for sampling training data is proposed. The training data are grouped by relation category; during sampling, a relation category is first chosen according to a probability distribution, and then an instance is drawn uniformly at random from the selected group for training. The selection probability of each group is adjusted adaptively according to the training effect. Experimental results show that adaptive group sampling achieves better results on the link prediction task and enables the embedding model to converge faster and better.
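The sampling procedure described in the abstract can be sketched as follows. This is a minimal illustration of the idea, not the authors' code: the class name, the weight-update rule, and the `alpha` smoothing parameter are my own assumptions; the paper only states that group probabilities are adapted according to the training effect.

```python
import random
from collections import defaultdict

class AdaptiveGroupSampler:
    """Sketch of adaptive group sampling for KG training triples.

    Triples (head, relation, tail) are grouped by relation; a group is
    chosen with probability proportional to a per-group weight, then a
    triple is drawn uniformly from that group. Weights are updated from
    the observed training loss, so harder groups are sampled more often.
    """

    def __init__(self, triples, alpha=0.1):
        # group triples (h, r, t) by relation r
        self.groups = defaultdict(list)
        for h, r, t in triples:
            self.groups[r].append((h, r, t))
        self.relations = list(self.groups)
        # start from uniform group weights
        self.weights = {r: 1.0 for r in self.relations}
        self.alpha = alpha  # smoothing factor for weight updates (assumed)

    def sample(self):
        # pick a relation group by weight, then a triple uniformly within it
        total = sum(self.weights.values())
        probs = [self.weights[r] / total for r in self.relations]
        rel = random.choices(self.relations, weights=probs, k=1)[0]
        return random.choice(self.groups[rel])

    def update(self, relation, loss):
        # exponential moving average: higher-loss groups gain weight
        w = self.weights[relation]
        self.weights[relation] = (1 - self.alpha) * w + self.alpha * loss
```

A training loop would call `sample()` to draw each triple and, after computing that triple's loss, feed it back via `update()` so the distribution tracks which relation groups are still hard.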
Authors  OUYANG Dan-tong; MA Cong; LEI Jing-pei; FENG Sha-sha (College of Computer Science and Technology, Jilin University, Changchun 130012, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China)
Source  Journal of Jilin University (Engineering and Technology Edition), 2020, No. 2, pp. 685-691 (7 pages). Indexed in EI, CAS, CSCD, and the Peking University Core list.
Funding  National Natural Science Foundation of China (61872159, 61672261, 61502199).
Keywords  artificial intelligence; knowledge graph embedding; translation-based embedding models; adaptive sampling; link prediction
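The keyword "translation-based embedding models" refers to the TransE family, in which a relation is modeled as a translation vector between entity embeddings. A minimal sketch of the standard TransE score and margin ranking loss (my own illustration of the general technique, not code from this paper):

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE energy ||h + r - t||_p: lower means the triple
    (head, relation, tail) is judged more plausible."""
    return np.linalg.norm(h + r - t, ord=norm)

def margin_loss(pos_score, neg_score, margin=1.0):
    """Margin ranking loss: push a true triple's score below a
    corrupted triple's score by at least `margin`."""
    return max(0.0, margin + pos_score - neg_score)
```

In training, each positive triple is paired with a corrupted one (head or tail replaced by a random entity), and the margin loss from such pairs is what a sampler like the one in the abstract would feed back when adapting group probabilities.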
