Funding: This work was supported by the Sichuan Science and Technology Program (2019JDJQ0002, 2019YFG0496, 2021016, 2020JDTD0020) and partially supported by the National Science Foundation of China (42075142).
Abstract: Recently, deep learning-based image outpainting has made notable progress in the field of computer vision. However, because existing methods do not fully extract image information, they often generate unnatural and blurry outpainting results. To address this issue, we propose a perceptual image outpainting method that takes advantage of low-level feature fusion and a multi-patch discriminator. Specifically, we first fuse the texture information in the low-level feature maps of the encoder and simultaneously combine these aggregated, reusable features with the semantic (or structural) information of the deep feature maps, so that richer texture information can be exploited to generate more authentic outpainting images. We then introduce a multi-patch discriminator to enhance the generated texture; it judges the generated image from features at different levels and thereby pushes our network to produce more natural and clearer outpainting results. Moreover, we further introduce a perceptual loss and a style loss to improve the texture and style of the outpainting images. Compared with existing methods, our method produces finer outpainting results. Experimental results on the Places2 and Paris StreetView datasets demonstrate the effectiveness of our method for image outpainting.
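The perceptual and style losses mentioned in the abstract are not defined here; the sketch below assumes the standard VGG-based formulation (feature-space L1 distance for the perceptual term, Gram-matrix distance for the style term). The choice of VGG16, the layer indices, and the loss weights are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of VGG-based perceptual and style losses (assumed formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class VGGPerceptualStyleLoss(nn.Module):
    def __init__(self, layer_ids=(3, 8, 15, 22)):  # relu1_2, relu2_2, relu3_3, relu4_3 (assumed)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():          # frozen feature extractor
            p.requires_grad_(False)
        self.vgg = vgg
        self.layer_ids = set(layer_ids)

    def _features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    @staticmethod
    def _gram(f):
        # Gram matrix of feature maps, normalized by their size.
        b, c, h, w = f.shape
        f = f.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def forward(self, generated, target):
        gen_feats = self._features(generated)
        tgt_feats = self._features(target)
        perceptual = sum(F.l1_loss(g, t) for g, t in zip(gen_feats, tgt_feats))
        style = sum(F.l1_loss(self._gram(g), self._gram(t))
                    for g, t in zip(gen_feats, tgt_feats))
        return perceptual, style
```

In a typical setup the two terms would be added to the adversarial and reconstruction losses with small weights (e.g., 0.1 for the perceptual term and a larger weight for the style term); the actual weighting used in the paper is not given in this abstract.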