The main objective of this paper was to investigate the test-method effect of two writing tests on the performances of sixty sophomore English majors from China University of Mining and Technology, who were randomly divided into two experimental groups: one took a reading-integrated writing test and the other a timed impromptu essay test. Their essays were rated anonymously by two independent raters using the same rating scale, with categories measuring content, organization, accuracy, and vocabulary. In addition to an examination of reliability, Many-Facet Rasch analysis was applied to probe the influence of domain difficulty and rater severity. The comparison revealed four important findings. First, both tests were reliable enough to serve as fair measures of writing ability. Second, significant differences were found between the two groups in content, organization, and vocabulary, but no difference was observed in accuracy. Third, the reading tasks helped participants generate ideas, organize their essays, and use vocabulary. Finally, compared with participants in the reading-integrated writing test, those in the timed impromptu essay test had difficulty deploying their vocabulary during the writing process. The findings suggest that 1) a reading-integrated writing test could serve as an alternative to a timed impromptu essay test in academic contexts, and 2) further investigation is needed into the writing process and the reading-to-write process.