Abstract
The efficiency of autofocusing algorithm implementations based on various orthogonal transforms is examined. The algorithm uses the variance of an image acquired by a sensor as the focus function. To compute the variance estimate, we exploit the equivalence between that estimate and the orthogonal expansion of the image. The energy consumption of three implementations, each exploiting one of the following fast orthogonal transforms: the discrete cosine, the Walsh-Hadamard, and the Haar wavelet transform, is evaluated and compared. Furthermore, it is conjectured that the computation precision can be considerably reduced if the image is heavily corrupted by noise, and a simple problem of optimal word bit-length selection with respect to the signal variance is analyzed.
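As a minimal illustration of the equivalence the abstract refers to (not taken from the paper, and simplified to a 1-D signal with an orthonormal DCT rather than the three 2-D transforms the paper compares), Parseval's identity lets the sample variance be recovered from the squared AC coefficients of an orthogonal expansion:

```python
# Sketch only: 1-D signal and orthonormal DCT assumed for clarity.
# By Parseval's identity, the coefficient energy of an orthonormal transform
# equals the sample energy, so the variance (the focus measure) can be read
# off the AC coefficients alone.
import numpy as np
from scipy.fft import dct

def variance_direct(x):
    # Plain (biased) sample variance.
    return np.mean((x - x.mean()) ** 2)

def variance_from_dct(x):
    # Orthonormal DCT: the DC coefficient carries the mean, so the sum of the
    # squared remaining coefficients equals N times the sample variance.
    c = dct(x, norm="ortho")
    return np.sum(c[1:] ** 2) / x.size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(loc=100.0, scale=5.0, size=256)
    print(variance_direct(x), variance_from_dct(x))  # the two values agree
```

The same identity holds for any orthonormal transform, which is why the Walsh-Hadamard or Haar wavelet transform can be substituted for the DCT without changing the focus function itself, only the cost of computing it.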
Funding
Supported by the NCN grant UMO-2011/01/B/ST7/00666.