
Calibration of a Confidence Interval for a Classification Accuracy

Abstract: Coverage of nominal 95% confidence intervals of a proportion estimated from a sample obtained under a complex survey design, or a proportion estimated from a ratio of two random variables, can depart significantly from its target. Effective calibration methods exist for intervals for a proportion derived from a single binary study variable, but not for estimates of thematic classification accuracy. To promote a calibration of confidence intervals within the context of land-cover mapping, this study first illustrates a common problem of under- and over-coverage with standard confidence intervals, and then proposes a simple and fast calibration that more often than not will improve coverage. The demonstration is with simulated sampling from a classified map with four classes, and a reference class known for every unit in a population of 160,000 units arranged in a square array. The simulations include four common probability sampling designs for accuracy assessment, and three sample sizes. Statistically significant over- and under-coverage was present in estimates of user's accuracy (UA) and producer's accuracy (PA) as well as in estimates of class area proportion. A calibration with Bayes intervals for UA and PA was most efficient with smaller sample sizes and two cluster sampling designs.
Author: Steen Magnussen (Natural Resources Canada, Canadian Forest Service, Victoria, British Columbia, Canada)
Source: Open Journal of Forestry (an English-language forestry journal), 2021, Issue 1, pp. 14-36 (23 pages)
Keywords: Overall Accuracy; Producer's Accuracy; User's Accuracy; Area Proportions; Semi-Systematic Sampling; Post-Stratification; Stratified Random Sampling; One-Stage Cluster Sampling; Two-Stage Cluster Sampling
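
The abstract's central idea is that the realized coverage of a nominal 95% confidence interval for a proportion (such as UA or PA) can differ from 95%, and that a Bayes-type interval can serve as a calibration. The sketch below is not the paper's simulation (which uses UA/PA estimators under complex survey designs on a 160,000-unit map); it is a minimal Monte Carlo illustration, under assumed values, of how coverage is measured and how a Jeffreys (Bayes) interval for a single proportion compares with the standard Wald interval under simple random sampling. The chosen p_true, sample size, and number of replicates are illustrative only.

```python
# Minimal sketch (illustrative, not the study's method): Monte Carlo coverage of a
# nominal 95% Wald interval versus a Jeffreys (Bayes) interval for a proportion
# under simple random sampling. All numeric settings below are assumptions.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(42)
p_true, n, n_rep, z = 0.85, 50, 20000, 1.96  # assumed true accuracy, sample size, replicates

wald_hits = jeffreys_hits = 0
for _ in range(n_rep):
    x = rng.binomial(n, p_true)              # number of correctly classified sample units
    p_hat = x / n
    # Wald interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)
    half = z * np.sqrt(p_hat * (1.0 - p_hat) / n)
    wald_hits += (p_hat - half <= p_true <= p_hat + half)
    # Jeffreys (Bayes) interval: central 95% of the Beta(x + 0.5, n - x + 0.5) posterior
    lo = beta.ppf(0.025, x + 0.5, n - x + 0.5)
    hi = beta.ppf(0.975, x + 0.5, n - x + 0.5)
    jeffreys_hits += (lo <= p_true <= hi)

print(f"Wald coverage:     {wald_hits / n_rep:.3f}")
print(f"Jeffreys coverage: {jeffreys_hits / n_rep:.3f}")
```

For proportions near 0 or 1, which is typical of classification accuracies, the Wald interval tends to under-cover, while the Jeffreys interval usually stays closer to the nominal 95%; this is the kind of coverage gap, here shown only for the simplest sampling design, that the article's calibration addresses for UA, PA, and area proportions under complex designs.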