
Concept Prerequisite Learning with Multi-View Concept Feature Fusion Based on Hypergraph Neural Network
Abstract: Identifying prerequisite relations among concepts is a foundational task for personalized learning in smart education. Among existing approaches, feature-based methods depend on hand-crafted feature extraction and are constrained by text structure, while methods built on binary (pairwise) graphs ignore the complex higher-order relations among concepts and documents. To address these problems, this paper proposes HyperCPRL, which exploits the ability of hypergraphs to encode higher-order topology. The model constructs three hypergraphs from three perspectives: concept structure, concept semantic distance, and document-concept membership, so as to capture the complex relational features among the modeled objects. It then fuses the node representations of the three hypergraphs, applies a self-attention mechanism to further mine prerequisite relations over the whole concept domain, and uses a siamese network to predict prerequisite relations. Extensive comparative experiments on four real-world datasets show that HyperCPRL achieves strong results and is notably better at recognizing samples involving low-degree concepts.
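The following is a minimal, hypothetical PyTorch sketch of the pipeline the abstract describes: three view-specific hypergraph convolutions, concatenation-based fusion, self-attention over the whole concept set, and a siamese pair classifier. All layer sizes, the uniform hyperedge weights, the fusion scheme, and every name (hypergraph_conv, HyperCPRLSketch) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


def hypergraph_conv(X, H, theta):
    """One spectral hypergraph convolution in the style of HGNN:
    X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta (uniform edge weights assumed)."""
    We = torch.ones(H.shape[1])                              # hyperedge weights (assumed uniform)
    Dv = torch.diag((H * We).sum(1).clamp(min=1).pow(-0.5))  # node degree^(-1/2)
    De = torch.diag(H.sum(0).clamp(min=1).pow(-1.0))         # edge degree^(-1)
    return Dv @ H @ torch.diag(We) @ De @ H.T @ Dv @ X @ theta


class HyperCPRLSketch(nn.Module):
    """Three view encoders -> fusion -> global self-attention -> siamese classifier."""

    def __init__(self, dim, n_heads=4):
        super().__init__()
        # one learnable Theta per view: structure / semantic distance / membership
        self.thetas = nn.ParameterList(
            [nn.Parameter(torch.randn(dim, dim) * 0.1) for _ in range(3)]
        )
        self.fuse = nn.Linear(3 * dim, dim)       # concatenation fusion (assumed)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.clf = nn.Sequential(                 # siamese head on concept pairs
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, X, hypergraphs, pairs):
        # encode each view with one hypergraph convolution, then fuse
        views = [torch.relu(hypergraph_conv(X, H, th))
                 for H, th in zip(hypergraphs, self.thetas)]
        Z = self.fuse(torch.cat(views, dim=-1))
        # self-attention over the full concept set mines global relations
        Z = self.attn(Z[None], Z[None], Z[None])[0][0]
        # siamese prediction: probability that pair (a, b) is a prerequisite relation
        a, b = pairs[:, 0], pairs[:, 1]
        return torch.sigmoid(self.clf(torch.cat([Z[a], Z[b]], dim=-1)))


# toy run: 50 concepts, 64-d features, three incidence matrices of 20 hyperedges each
X = torch.randn(50, 64)
Hs = [(torch.rand(50, 20) > 0.8).float() for _ in range(3)]
pairs = torch.randint(0, 50, (8, 2))
print(HyperCPRLSketch(64)(X, Hs, pairs).shape)    # torch.Size([8, 1])
```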
Authors: ZHANG Peng (张鹏), DU Hongxia (杜洪霞), DAI Jin (代劲) (Department of Software Engineering, Intelligent Information Technology and Service Innovation Laboratory, Chongqing University of Posts and Telecommunications, Chongqing 400065, China)
Source: Journal of Chinese Information Processing (《中文信息学报》, CSCD-indexed, Peking University core journal), 2023, No. 12, pp. 155-166 (12 pages)
Funding: National Natural Science Foundation of China (61936001); Natural Science Foundation of Chongqing (cstc2021jcyj-msxmX0849)
Keywords: prerequisite relation; concept dependency; hypergraph; relationship recognition
