
Configurable Text-based Image Editing by Autoencoder-based Generative Adversarial Networks (cited by: 4)
Abstract: Text-based image editing, in which a source image is modified according to a given text description, is a research hotspot in multimedia with significant application value. It is a challenging task because of the large cross-modal gap between text and images. Existing methods offer little direct control over, or correction of, the editing process; yet image editing is driven by user preference, and with better controllability individual editing modules can be bypassed or strengthened to obtain results the user prefers. To address this problem, this study proposes an autoencoder-based text-guided image editing model. To provide a convenient and direct interface for interactive configuration and editing, the model introduces an autoencoder into a stacked generative adversarial network (GAN): the autoencoder maps the high-dimensional feature spaces shared across the stacked stages into a common color space, so that intermediate editing results can be corrected directly in that color space. In addition, to enrich the details of the edited image and further improve controllability, a symmetric detail correction module is constructed; it takes the source image and the edited image as symmetric, exchangeable inputs and fuses text features to refine the edited image supplied as input. Experiments on the MS-COCO and CUB200 datasets show that the model can effectively edit images automatically from linguistic descriptions while allowing convenient, user-friendly correction of the editing results.
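The core interaction mechanism described in the abstract can be illustrated with a toy sketch (all shapes, names, and the use of a linear projection are illustrative assumptions, not the paper's actual convolutional autoencoder): an encoder maps an intermediate high-dimensional feature map to color space so the user can inspect and edit it as an image, and a decoder lifts the edited image back into feature space so it re-enters the generation pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's autoencoder: a linear encoder/decoder pair
# mapping a high-dimensional feature map (H, W, C) to color space (H, W, 3)
# and back. The round trip is lossy (rank-3 bottleneck), which is the point:
# the color space is a human-editable view of the intermediate result.
C = 32
enc_W = rng.normal(scale=0.1, size=(C, 3))  # features -> RGB projection
dec_W = np.linalg.pinv(enc_W)               # RGB -> features (pseudo-inverse)

def to_color(feat):
    """Project an intermediate feature map into the editable color space."""
    return feat @ enc_W

def to_feature(img):
    """Lift a (possibly user-edited) color image back into feature space."""
    return img @ dec_W

feat = rng.normal(size=(8, 8, C))   # intermediate feature map from one GAN stage
img = to_color(feat)                # inspectable intermediate editing result
img_edited = img.copy()
img_edited[:4] *= 0.5               # user manually darkens the top half
feat_edited = to_feature(img_edited)  # corrected features continue through the stack
```

In the paper's model the encoder/decoder are learned jointly with the stacked generator, so the color-space view is a faithful rendering of the stage's output rather than an arbitrary projection; the sketch only shows how a user edit in color space can be propagated back into the feature pipeline.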
Authors: WU Fu-Xiang; CHENG Jun (Guangdong Provincial Key Laboratory of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China)
Source: Journal of Software (《软件学报》), indexed in EI, CSCD, and the Peking University Core list, 2022, No. 9, pp. 3139-3151 (13 pages)
Funding: National Natural Science Foundation of China (U21A20487); Shenzhen Basic Research Program (JCYJ20200109113416531, JCYJ20180507182610734); Key Technical Talent Program of the Chinese Academy of Sciences.
Keywords: text-based image editing; generative adversarial networks (GANs); interactive editing