Research Progress in Computerized Multistage Adaptive Testing

Abstract Computerized multistage adaptive testing is a computer-based test format that treats a set of items as the unit of testing and administers and scores examinees through a multistage adaptive procedure. Recent research on a range of test formats shows that it offers clear advantages over both item-level computerized adaptive testing and paper-and-pencil testing: compared with paper-and-pencil testing, it provides parameter invariance and more accurate ability estimation; compared with computerized adaptive testing, it allows control over item properties and lets examinees review items. Reducing measurement error so that the format can be applied more conveniently and effectively is the direction for future research.

Computerized multistage adaptive testing (MST) is a test format based on computer technology in which sets of items are administered and scored as a unit. These item sets, called modules or testlets, are short linear tests, each contributing a share of the test information to reduce measurement error. Items in a module may centre on one or several common stems, such as a paragraph or a diagram, or they may be unrelated to one another. In MST, adaptation occurs at the level of item sets: the next module is selected on the basis of cumulative performance on the preceding items. MST therefore adapts less often than item-level computerized adaptive testing (CAT) but more often than conventional paper-and-pencil (P&P) testing. It combines the components of conventional P&P testing with the adaptive character of CAT, and combining the strengths of the two formats can offset their individual weaknesses, so MST is best understood as a compromise between them.

How to build an MST is the first question test developers must consider. The number of stages, the modules in each stage, and the items in each module must be decided before the test is assembled, as must the target statistics and the qualitative specifications. The methods of scoring, routing, and assembling the test are just as vital as the components listed above. After the test has been assembled but before it is administered, developers can check the items for non-statistical properties, including content balance, ordering and potential context effects, cognitive level, item format, answer-key position, word count, and any other characteristics of interest or concern in developing the modules. MST can also help secure the item response theory (IRT) assumptions of local independence and unidimensionality across modules; items that share a common stem and therefore violate local independence are scored as a single polytomous item. All modules should therefore be assembled and allocated optimally. When examinees take the test, they can preview and review the items within a module and revise incorrect answers, so both test developers and examinees can work with the modules to best effect.

MST thus offers an opportunity to improve the quality of examinations, and it has already been used in several large-scale assessments, such as the Uniform CPA Examination and the Graduate Record Examination (GRE). Studies of various test formats show that MST compares favourably with both conventional P&P testing and CAT. Relative to P&P testing, its advantages include parameter invariance, time savings, timely feedback, and accurate ability estimation; relative to CAT, they include control of non-statistical item properties and item exposure, and the opportunity for examinees to review items. Future research should focus on minimizing measurement error so that MST can be applied more conveniently and effectively.
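The module-level adaptation described in the abstract, in which a whole module is administered and scored before the examinee is routed to an easier or harder module at the next stage, can be made concrete with a small simulation. The sketch below is illustrative only: the 1-3-3 panel layout, the Rasch response model, and the number-correct routing cutoffs are assumptions made for the example, not designs taken from the paper; operational MSTs use calibrated item pools, target information functions, and formally derived routing rules.

```python
# Minimal sketch of module-level adaptation in a hypothetical 1-3-3 MST panel.
# The Rasch model, module difficulty targets, and number-correct routing cutoffs
# below are illustrative assumptions, not the operational designs in the paper.
import math
import random

random.seed(42)

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def make_module(mean_b, n_items=10, spread=0.4):
    """A module is a fixed set of item difficulties centred on a target value."""
    return [random.gauss(mean_b, spread) for _ in range(n_items)]

# Stage 1: one routing module; stages 2 and 3: easy / medium / hard modules.
panel = {
    1: {"R": make_module(0.0)},
    2: {"E": make_module(-1.0), "M": make_module(0.0), "H": make_module(1.0)},
    3: {"E": make_module(-1.0), "M": make_module(0.0), "H": make_module(1.0)},
}

def administer(theta, module):
    """Simulate responses to every item in a module (the module is scored as a unit)."""
    return sum(random.random() < rasch_p(theta, b) for b in module)

def route(cum_correct, cum_items):
    """Number-correct routing with hypothetical cutoffs on the running proportion correct."""
    prop = cum_correct / cum_items
    if prop < 0.4:
        return "E"
    if prop < 0.7:
        return "M"
    return "H"

def run_mst(theta):
    correct = administer(theta, panel[1]["R"])
    items = len(panel[1]["R"])
    path = ["R"]
    for stage in (2, 3):
        nxt = route(correct, items)   # adaptation happens between modules,
        path.append(nxt)              # not after every item as in CAT
        correct += administer(theta, panel[stage][nxt])
        items += len(panel[stage][nxt])
    return path, correct, items

for true_theta in (-1.5, 0.0, 1.5):
    path, score, n = run_mst(true_theta)
    print(f"theta={true_theta:+.1f}  route={'-'.join(path)}  score={score}/{n}")
```

Running the sketch shows low-ability simulees drifting toward the easy modules and high-ability simulees toward the hard ones, which is the module-level counterpart of the item-level adaptation in CAT.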
Source: Journal of Psychological Science (《心理科学》; CSSCI, CSCD, Peking University Core Journal), 2015, No. 2, pp. 452-456 (5 pages).
Keywords: computerized multistage adaptive testing (MST), paper-and-pencil test (P&P), computerized adaptive test (CAT), stage, module