Abstract
Subspace appearance models are widely used in computer vision and image processing tasks to compactly represent the appearance variations of target objects. To ensure algorithm performance, they are typically stored in high-precision formats, which results in a large storage footprint and renders redistribution costly and difficult. Since pixel values in most image and vision applications are quantized to 8 bits by the acquisition apparatus, we show that it is possible to construct a fixed-width, effectively lossless representation of the basis vectors, in the sense that reconstructions from the original bases and from the quantized bases never deviate by more than half of the quantization step size. In addition to directly applying this result to losslessly compress individual models, we also propose an algorithm that compresses appearance models by exploiting prior information about the modeled objects in the form of prior appearance subspaces. Experiments on the compression of person-specific face appearance models demonstrate the effectiveness of the proposed algorithms.
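As an illustrative sketch only (not the paper's construction), the Python snippet below quantizes a PCA basis to a fixed-width integer grid and empirically measures how far reconstructions drift from those obtained with the full-precision basis. The bit width, random data, and subspace dimension are assumptions chosen purely for demonstration; the paper's contribution is a representation whose deviation is guaranteed to stay below half the 8-bit pixel quantization step (0.5), whereas here the bound is only checked empirically.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): quantize a PCA basis to a
# fixed-width integer grid and compare reconstructions against those from the
# full-precision basis. Bit width, data, and subspace dimension are assumed.

rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(200, 64)).astype(np.float64)  # synthetic 8-bit "images"

# Subspace appearance model: mean plus the top-k principal components.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 8
B = Vt[:k]                                   # k x 64 full-precision basis

# Fixed-width uniform quantization of the basis vectors.
bits = 12                                    # assumed storage width
scale = np.abs(B).max() / (2 ** (bits - 1) - 1)
B_q = np.round(B / scale).astype(np.int16) * scale

# Reconstruct with the original and the quantized basis using the same coefficients.
coeffs = (X - mean) @ B.T
recon = mean + coeffs @ B
recon_q = mean + coeffs @ B_q

max_dev = np.abs(recon - recon_q).max()
print(f"max reconstruction deviation: {max_dev:.4f} (pixel step = 1, half-step = 0.5)")
```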
Funding
Supported by the National Key Basic Research and Development (973) Program of China (No. 2013CB329006)
the National Natural Science Foundation of China (Nos. 61471220 and 61021001)
Tsinghua University Initiative Scientific Research Program, and Tsinghua-Qualcomm Joint Research Program