SemID: Blind Image Inpainting with Semantic Inconsistency Detection

Abstract: Most existing image inpainting methods aim to fill in missing content within a known hole region of the target image. However, the areas to be restored in realistically degraded images are unspecified, and previous studies fail to recover such degradations because no explicit mask is provided. Meanwhile, inconsistent patterns are complexly blended with the image content. It is therefore necessary to estimate whether certain pixels are out of distribution and whether an object is consistent with its context. Motivated by these observations, a two-stage blind image inpainting network is proposed, which uses global semantic features of the image to locate semantically inconsistent regions and then generates reasonable content in those areas. Specifically, the representation differences between inconsistent and available content are first amplified, and the region to be restored is iteratively predicted from coarse to fine. A confidence-driven inpainting network based on the predicted masks is then used to estimate the content of the missing regions. Furthermore, a multiscale contextual aggregation module is introduced for spatial feature transfer to refine the generated content. Extensive experiments on multiple datasets demonstrate that the proposed method generates visually plausible and structurally complete results and is particularly effective at recovering diverse degraded images.
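
The abstract outlines a two-stage design: first predict where the image is semantically inconsistent, then inpaint only those regions. The following is a minimal PyTorch sketch of how such a generic two-stage blind inpainting pipeline could be wired together; the module names (MaskPredictor, Inpainter), channel widths, and the three-iteration coarse-to-fine refinement loop are illustrative assumptions and do not reproduce the SemID architecture or its confidence-driven generator and multiscale contextual aggregation module.

# Minimal, self-contained sketch of a generic two-stage blind inpainting
# pipeline. All names, channel widths, and the number of refinement
# iterations are hypothetical; this is NOT the authors' implementation.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, stride=1):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=stride, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class MaskPredictor(nn.Module):
    """Stage 1: estimate a soft mask of semantically inconsistent pixels,
    refined over a few coarse-to-fine iterations (assumed scheme)."""
    def __init__(self, iters=3):
        super().__init__()
        self.iters = iters
        self.encoder = nn.Sequential(
            conv_block(4, 32), conv_block(32, 64, stride=2), conv_block(64, 64)
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(64, 32),
            nn.Conv2d(32, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Start from an "all trusted" prior and iteratively refine the mask.
        mask = torch.zeros_like(x[:, :1])
        for _ in range(self.iters):
            feat = self.encoder(torch.cat([x, mask], dim=1))
            mask = self.decoder(feat)
        return mask  # values near 1 = likely corrupted / inconsistent

class Inpainter(nn.Module):
    """Stage 2: regenerate the regions flagged by the predicted mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(4, 64), conv_block(64, 64), conv_block(64, 64),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, mask):
        out = self.net(torch.cat([x, mask], dim=1))
        # Keep trusted pixels, replace only the predicted-inconsistent ones.
        return x * (1 - mask) + out * mask

if __name__ == "__main__":
    degraded = torch.rand(1, 3, 64, 64)        # corrupted input, no mask given
    mask = MaskPredictor()(degraded)           # stage 1: locate inconsistency
    restored = Inpainter()(degraded, mask)     # stage 2: generate content
    print(mask.shape, restored.shape)          # (1,1,64,64) (1,3,64,64)

The key design point reflected here is the blind setting: the mask is an output of the first stage rather than an input to the pipeline, and the final composite keeps pixels the mask marks as trustworthy.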
Source: Tsinghua Science and Technology (Journal of Tsinghua University, Science and Technology, English edition), indexed in SCIE, EI, CAS, and CSCD, 2024, No. 4, pp. 1053-1068 (16 pages).
Funding: Supported by the Natural Science Foundation of Shandong Province of China (No. ZR2020MF140), the Major Scientific and Technological Projects of CNPC (No. ZD2019-183-004), and the Fundamental Research Funds for the Central Universities (No. 20CX05019A).