A Comparison of Methods to Correct Errors in Peer Assessment Ratings in Massive Open Online Courses
Abstract: Peer assessment is one of the most important assessment methods in Massive Open Online Courses (MOOCs), especially for open-ended assignments or projects. However, for the purpose of summative evaluation, peer assessment results are generally not trusted. This is because peer raters, who are novices, would produce more random errors and systematic biases in ratings than would expert raters, due to peer raters' lack of content expertise and rating experience. In this paper, two major approaches that are designed to improve the accuracy of peer assessment results are reviewed and compared. The first approach is designed to calibrate the accuracy of individual peer raters before actual peer assessments so that differential weights can be assigned to raters based on accuracy. The second approach is designed to remedy peer rating errors post hoc. Differences in assumptions, parameterization and estimation methods, and implementation issues are discussed. The development of methods to improve MOOC peer assessment results is still in its infancy. Most of the methods reviewed in this paper have yet to be implemented and evaluated in real-life applications. We hope the discussion and comparison of different methods in this paper will provide some theoretical and methodological background for further research into MOOC peer assessment.
Authors: 熊瑶, 孙开键
Source: 《中国考试》 (Journal of China Examinations), 2016, No. 1, pp. 7-15 (9 pages)
Keywords: MOOCs, Peer Assessment, Error Correction
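
To make the two approaches in the abstract concrete, here is a minimal sketch in Python. It is our illustration, not code from the paper; all function names and data are hypothetical. It uses deliberately simple stand-ins for the models the paper reviews: inverse-mean-squared-error weighting for the first approach (pre-assessment calibration of raters against essays with known expert scores) and an alternating additive bias estimate for the second (post hoc correction of the rating matrix), rather than the many-facet Rasch, hierarchical rater, or Bayesian models themselves.

```python
import numpy as np

def calibration_weights(calib_ratings, expert_scores):
    """Approach 1 (pre-correction): weight each rater by accuracy on
    calibration essays whose expert scores are known. Weights are
    proportional to inverse mean squared error."""
    mse = np.mean((calib_ratings - expert_scores) ** 2, axis=1)
    inv = 1.0 / (mse + 1e-6)          # guard against a perfect rater (MSE = 0)
    return inv / inv.sum()            # normalize so weights sum to 1

def remove_rater_bias(ratings, n_iter=25):
    """Approach 2 (post hoc correction): alternately estimate item scores
    and each rater's systematic bias from the observed rating matrix.
    `ratings` is (n_raters, n_items) with np.nan where a rater did not rate."""
    bias = np.zeros(ratings.shape[0])
    for _ in range(n_iter):
        item_scores = np.nanmean(ratings - bias[:, None], axis=0)
        bias = np.nanmean(ratings - item_scores[None, :], axis=1)
    return item_scores, bias

# --- Approach 1: three raters score four calibration essays (0-10 scale) ---
expert_scores = np.array([6.0, 8.0, 4.0, 9.0])
calib_ratings = np.array([
    [6.0, 8.0, 4.5, 9.0],    # accurate rater
    [7.0, 9.0, 5.0, 10.0],   # lenient rater (+1 systematic bias)
    [3.0, 9.0, 8.0, 5.0],    # noisy rater (large random errors)
])
w = calibration_weights(calib_ratings, expert_scores)
print(w)                                      # accurate rater dominates the weights
print(float(w @ np.array([7.0, 8.0, 5.0])))   # weighted score for a new essay

# --- Approach 2: correct systematic bias in actual peer ratings ---
peer_ratings = np.array([
    [7.0, 5.0, np.nan],      # one row per rater; np.nan = essay not assigned
    [9.0, 7.0, 8.0],         # consistently lenient rater
    [6.0, 4.0, 5.0],
])
scores, bias = remove_rater_bias(peer_ratings)
print(scores)                                 # bias-corrected score estimates
print(bias)                                   # estimated leniency/severity per rater
```

The measurement models compared in the paper replace the simple additive bias term above with full rater models, but the division of labor is the same: either calibrate raters before assessment and weight them accordingly, or adjust the ratings after the fact.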

