Randomized Controlled Trials: A state-of-the-art impact evaluation method

Abstract: Internationally, impact evaluation has gradually become an integral part of program design. Its central challenge is attribution: determining whether observed changes in outcomes were truly caused by the program intervention or policy. Impact evaluation methods fall into two broad categories, non-experimental and experimental. The main non-experimental methods include simple before-and-after comparison, difference-in-differences (DID), matching (e.g., propensity score matching, PSM), instrumental variables (IV), and regression discontinuity design (RDD). The experimental method, the randomized controlled trial (RCT), has as its key advantage that it avoids the selection bias that non-experimental methods cannot easily resolve. After briefly comparing the characteristics of the two types of methods, this paper discusses in detail the design and implementation of RCTs. Designing an RCT involves, first, a theory-based causal chain analysis to determine what kind of intervention can achieve the targeted impact; next, choosing the unit of intervention and the method and level of randomization; then identifying other factors that may affect the outcome variables and controlling for them in the design; and finally determining the required sample size through power calculation. Implementation proceeds in three steps: a baseline survey, random assignment of the sample into intervention and control groups followed by delivery of the intervention, and an evaluation survey; only with both baseline and evaluation data can the true impact of the intervention be measured. Like other methods, RCTs face implementation challenges such as spillover effects, cross-contamination, non-compliance, and attrition. The paper uses an ongoing evaluation of an infant and toddler nutrition, health, and education intervention carried out by the Rural Education Action Program (REAP) as a case study of how RCTs are being applied in China. As a cutting-edge impact evaluation method, the RCT has great potential for wide use in rigorous impact evaluation.
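
To make the power-calculation step concrete, the sketch below shows one standard way to size a two-arm trial. It is only an illustration and not the paper's own procedure: it assumes individual-level randomization, a continuous outcome, equal group sizes, and a normal approximation, and the function name `sample_size_per_arm` and the 0.3 standard-deviation effect size are hypothetical.

```python
# Minimal power-calculation sketch for a two-arm RCT (illustrative assumptions:
# individual-level randomization, continuous outcome, equal arms, normal approximation).
import math
from scipy.stats import norm

def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
    """Individuals needed per arm to detect a standardized effect size
    (difference in means / outcome SD) with a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# Example: detecting a 0.3 SD improvement with 80% power at the 5% level.
print(sample_size_per_arm(0.30))   # -> 175 individuals per arm
```

For cluster-randomized designs (randomizing villages, schools, or clinics rather than individuals), the result would further need to be inflated by a design effect that reflects intra-cluster correlation.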
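
The three implementation steps and the resulting impact estimate can likewise be sketched. The code below is a minimal illustration under simplifying assumptions (individual-level randomization, a continuous outcome, simulated rather than surveyed data); all variable names and numbers are hypothetical, and the impact is estimated with a simple difference-in-differences-style comparison of changes between the two groups.

```python
# Illustrative workflow: baseline survey -> random assignment -> evaluation survey
# -> impact estimate. Data are simulated; this is not the paper's dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 400                                    # total sample drawn in the baseline survey

# Step 1: baseline survey (here, simulated baseline outcomes).
baseline = rng.normal(50, 10, n)

# Step 2: randomly assign half the sample to the intervention group,
# deliver the intervention, then (Step 3) run the evaluation survey.
treated = rng.permutation(np.repeat([True, False], n // 2))

# Simulated evaluation-survey outcomes: everyone drifts upward slightly,
# and the intervention adds an assumed 3-point average effect.
endline = baseline + rng.normal(2, 5, n) + 3 * treated

# Impact estimate: change in the treatment group minus change in the controls.
impact = (endline[treated] - baseline[treated]).mean() - \
         (endline[~treated] - baseline[~treated]).mean()
print(f"Estimated impact: {impact:.2f}")
```

Because assignment is random, a straightforward comparison of endline means would also be unbiased; differencing against the baseline mainly improves precision and guards against chance baseline imbalance.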
Author: 张林秀
Source: Progress in Geography (《地理科学进展》), CSCD / PKU Core journal, 2013, No. 6: 843-851 (9 pages)
Funding: Major International Cooperation Project of the National Natural Science Foundation of China (71110107028)
Keywords: randomized controlled trials; impact evaluation; Rural Education Action Program (REAP)