Abstract
Education is an important application field of artificial intelligence, and exploring the interpretability of artificial intelligence in education is a key issue in making AI more "responsible" in this field. Starting from practical problems in the application of artificial intelligence in education, this article focuses on three questions: what the interpretability of artificial intelligence in education is, what has been done, and where it is headed. First, taking the four key elements of data, tasks, models, and people as the starting point, it analyzes and elaborates the connotation of the interpretability of artificial intelligence in education. Next, surveying the evolution of interpretability work in educational artificial intelligence, it identifies the limitations of existing work in terms of injecting educational meaning, the increasing complexity of models, and the one-way transmission of explanatory information. Finally, it elaborates the future directions of the interpretability of artificial intelligence in education from three perspectives: knowledge federation, model integration, and human-in-the-loop.
Authors
Liu Tong; Gu Xiaoqing (Department of Education Information and Technology, East China Normal University, Shanghai 200062)
Source
China Educational Technology (《中国电化教育》), 2022, Issue 5, pp. 82-90 (9 pages)
Indexed in CSSCI and the Peking University Core Journals list (北大核心)
Funding
Research output of the 2020 Shanghai Municipal Science and Technology Commission research program project "Research on Key Technologies and Typical Applications of Educational Data Governance and an Intelligent Education Brain" (Project No. 20511101600).
Keywords
educational artificial intelligence
interpretability
"black box" model
human-in-the-loop