Abstract
In this paper, we suggest an adaptive watermarking method to improve both the transparency and robustness of the quantization index modulation (QIM) scheme. Instead of a fixed quantization step-size, we apply a step-size adapted to the image content in each 8 × 8 block to balance robust extraction against transparent embedding. The modified step-size is determined by the contrast masking thresholds of Watson's perceptual model. From the normalized cross-correlation value between the original watermark and the detected watermark, we observe that our method is more robust to additive white Gaussian noise (AWGN), salt-and-pepper noise, and Joint Photographic Experts Group (JPEG) compression attacks than the original QIM. By taking into account the contrast insensitivity and visibility thresholds of the human visual system, the suggested improvement achieves a maximum embedding strength and an appropriate quantization step-size that is consistent with the local values of the host signal.
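For concreteness, the sketch below illustrates (it is not the authors' implementation) standard QIM embedding with minimum-distance extraction using a per-block step-size, together with the normalized cross-correlation score mentioned above. The helper `adaptive_step` is a hypothetical stand-in that scales the step by local AC energy; the paper instead derives the step from Watson's contrast-masking thresholds.

```python
import numpy as np

def adaptive_step(block_dct, base_delta=8.0, alpha=1.0):
    """Hypothetical per-block step-size: grow a base step with a simple
    contrast-masking proxy (RMS of the AC coefficients). The paper uses
    Watson's perceptual model to set this value instead."""
    ac_rms = np.sqrt(np.mean(block_dct[1:, 1:] ** 2))
    return base_delta + alpha * ac_rms

def qim_embed(coeff, bit, delta):
    """Standard QIM: quantize the host coefficient onto the lattice
    associated with the bit (lattices offset by delta/2)."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return delta * np.round((coeff - offset) / delta) + offset

def qim_extract(coeff, delta):
    """Minimum-distance decoding: pick the bit whose lattice is closer."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1

def ncc(w_orig, w_det):
    """Normalized cross-correlation between the original and detected
    watermark bit sequences (bits mapped from {0,1} to {-1,+1})."""
    a = 2.0 * np.asarray(w_orig, float) - 1.0
    b = 2.0 * np.asarray(w_det, float) - 1.0
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In this sketch, one bit would be embedded per 8 × 8 DCT block by calling `qim_embed` on a selected mid-frequency coefficient with `delta = adaptive_step(block_dct)`; extraction recomputes the same step from the received block, so a larger step in textured (heavily masked) blocks buys robustness where the distortion is least visible.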
Funding
Supported by the China NNSF (Grant Nos. 60472063 and 60325310) and GDNSF/GDCNLF (04020074/CN200402).