Abstract
The purpose of this study is to describe a Many-Faceted Rasch (FACETS) model for the measurement of writing ability. The FACETS model is a multivariate extension of Rasch measurement models that provides a framework for calibrating both raters and writing tasks within the context of writing assessment. The use of the FACETS model for solving measurement problems encountered in the large-scale assessment of writing ability is presented here. A random sample of 1,000 students from a statewide assessment of writing ability is used to illustrate the FACETS model. The data suggest that there are significant differences in rater severity, even after extensive training. Small, but statistically significant, differences in writing task difficulty were also found. The FACETS model offers a promising approach for addressing measurement problems encountered in the large-scale assessment of writing ability through written compositions.
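The abstract does not reproduce the model equation. As a minimal sketch, assuming the standard many-facet Rasch rating-scale formulation (following Linacre's FACETS model; the symbols below are the conventional ones, not necessarily the paper's own notation), the log-odds of student n receiving category k rather than category k-1 from rater j on writing task i is

\ln\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k

where B_n is the writing ability of student n, D_i is the difficulty of writing task i, C_j is the severity of rater j, and F_k is the difficulty of rating-scale step k. Placing all facets on this common logit scale is what allows differences in rater severity and task difficulty, such as those reported above, to be estimated and adjusted for.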
Source
《教育与考试》 (Education and Examinations)
2007, No. 4, pp. 72-79 (8 pages)