
Generated versus Selected Explanations in Assessment: Findings on Students in California, USA and Hong Kong, China (cited by: 1)

Generated versus Selected Explanations: Results from California and Hong Kong
Abstract: Explanations can be used to examine the extent to which students' reasoning is based on scientific knowledge and evidence. This study investigates whether and how effectively explanations added to typical multiple-choice (MC) items can measure students' scientific reasoning. Two types of explanation items were used and compared: generated-explanation (GE) and selected-explanation (SE) items. Both explanation item types were appended to traditional MC items to elicit students' justifications of their choices on those items. The assessment was administered to 794 Californian students and 916 Hong Kong students. We examined the psychometric properties of the three item types (MC, GE, and SE) and compared the performance of Californian and Hong Kong students on them. A differential item functioning analysis was conducted to detect items with potential bias. Results show that Californian students scored higher overall, that GE items were more difficult than the other two item types for both groups, and that GE items tended to favor Californian students. We discuss the performance differences from assessment and curriculum perspectives.
Source: Peking University Education Review (《北京大学教育评论》), 2013, No. 1, pp. 11-28, 189-190 (18 pages); indexed in CSSCI and the Peking University core-journal list.
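The abstract mentions a differential item functioning (DIF) analysis used to flag items that may be biased toward one student group. The record does not specify which DIF procedure the authors used, so the following is only an illustrative sketch of one common approach, the Mantel-Haenszel odds ratio for a dichotomous item, written in Python with simulated (hypothetical) data rather than the study's own.

```python
# Illustrative sketch only: a Mantel-Haenszel DIF check for one dichotomous item.
# This is NOT the authors' actual analysis; groups, scores, and data are hypothetical.
import numpy as np

def mantel_haenszel_dif(item, group, total_score):
    """Return the Mantel-Haenszel common odds ratio for one 0/1-scored item.

    item        : 0/1 responses to the studied item
    group       : 0 = reference group (e.g. California), 1 = focal group (e.g. Hong Kong)
    total_score : matching variable, usually the total test score
    """
    item = np.asarray(item)
    group = np.asarray(group)
    score = np.asarray(total_score)
    num, den = 0.0, 0.0
    for s in np.unique(score):                        # stratify by total score
        m = score == s
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference group, correct
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference group, incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal group, correct
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal group, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else np.nan

# Hypothetical usage with simulated responses
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=200)
total = rng.integers(0, 21, size=200)
item = (rng.random(200) < 0.4 + 0.02 * total).astype(int)
print("MH odds ratio:", mantel_haenszel_dif(item, group, total))
```

After matching students on total score, an odds ratio far from 1 suggests the item functions differently for the two groups, which is the kind of evidence a DIF screen like the one reported in the abstract is meant to surface.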


