Journal Articles
3 articles found
1. Implicit pairs for boosting unpaired image-to-image translation
Authors: Yiftach Ginger, Dov Danon, Hadar Averbuch-Elor, Daniel Cohen-Or. Visual Informatics (EI), 2020, No. 4, pp. 50-58 (9 pages).
Abstract: In image-to-image translation the goal is to learn a mapping from one image domain to another. In the case of supervised approaches the mapping is learned from paired samples. However, collecting large sets of image pairs is often either prohibitively expensive or not possible. As a result, in recent years more attention has been given to techniques that learn the mapping from unpaired sets. In our work, we show that injecting implicit pairs into unpaired sets strengthens the mapping between the two domains, improves the compatibility of their distributions, and leads to performance boosting of unsupervised techniques by up to 12% across several measurements. The competence of the implicit pairs is further displayed with the use of pseudo-pairs, i.e., paired samples which only approximate a real pair. We demonstrate the effect of the approximated implicit samples on image-to-image translation problems, where such pseudo-pairs may be synthesized in one direction, but not in the other. We further show that pseudo-pairs are significantly more effective as implicit pairs in an unpaired setting, than directly using them explicitly in a paired setting.
Keywords: Generative adversarial networks; image-to-image translation; Data augmentation; Synthetic samples
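The paper's exact training procedure is not reproduced in this listing; as a loose illustration of the idea, the sketch below mixes a CycleGAN-style unpaired objective with a loss on synthesized pseudo-pairs. All names here (G_AB, G_BA, synthesize_pseudo_pair, the loss weights) are hypothetical, not the authors' API.

```python
# Minimal sketch: injecting pseudo-pairs into an unpaired translation objective.
# Assumes a CycleGAN-like setup; every name below is an illustrative assumption.
import torch.nn.functional as F

def generator_step(G_AB, G_BA, real_A, real_B, synthesize_pseudo_pair,
                   lambda_cyc=10.0, lambda_pair=5.0):
    """One generator update mixing unpaired cycle losses with a pseudo-pair term."""
    # Standard unpaired cycle-consistency terms.
    fake_B = G_AB(real_A)
    fake_A = G_BA(real_B)
    cyc_loss = (F.l1_loss(G_BA(fake_B), real_A) +
                F.l1_loss(G_AB(fake_A), real_B))

    # Pseudo-pair: an approximate pair synthesized in the direction that is
    # cheap to produce (per the abstract, synthesis may be possible in one
    # direction but not the other).
    pseudo_A, pseudo_B = synthesize_pseudo_pair(real_A)
    pair_loss = F.l1_loss(G_AB(pseudo_A), pseudo_B)

    return lambda_cyc * cyc_loss + lambda_pair * pair_loss
```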
2. Image to Image Translation Based on Differential Image Pix2Pix Model
Authors: Xi Zhao, Haizheng Yu, Hong Bian. Computers, Materials & Continua (SCIE, EI), 2023, No. 10, pp. 181-198 (18 pages).
Abstract: In recent years, Pix2Pix, a model within the domain of GANs, has found widespread application in the field of image-to-image translation. However, traditional Pix2Pix models suffer from significant drawbacks in image generation, such as the loss of important information features during the encoding and decoding processes, as well as a lack of constraints during the training process. To address these issues and improve the quality of Pix2Pix-generated images, this paper introduces two key enhancements. Firstly, to reduce information loss during encoding and decoding, we utilize the U-Net++ network as the generator for the Pix2Pix model, incorporating denser skip-connections to minimize information loss. Secondly, to enhance constraints during image generation, we introduce a specialized discriminator designed to distinguish differential images, further enhancing the quality of the generated images. We conducted experiments on the facades dataset and the sketch portrait dataset from the Chinese University of Hong Kong to validate our proposed model. The experimental results demonstrate that our improved Pix2Pix model significantly enhances image quality and outperforms other models in the selected metrics. Notably, the Pix2Pix model incorporating the differential image discriminator exhibits the most substantial improvements across all metrics. An analysis of the experimental results reveals that the use of the U-Net++ generator effectively reduces information feature loss, while the differential image discriminator enhances the supervision of the generator during training. Both of these enhancements collectively improve the quality of Pix2Pix-generated images.
Keywords: image-to-image translation; generative adversarial networks; U-Net++; differential image; Pix2Pix
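The paper's exact definition of the differential image is not given in this listing; assuming it is the pixel-wise difference between an image and its conditioning input, a discriminator on differential images might look like the sketch below. The name D_diff and the loss form are illustrative assumptions.

```python
# Sketch: a differential-image discriminator loss for a Pix2Pix-style model.
# Assumes "differential image" = output minus input; not the paper's exact code.
import torch
import torch.nn.functional as F

def diff_discriminator_loss(D_diff, x, y_real, y_fake):
    """Train D_diff to separate real from generated difference images."""
    diff_real = y_real - x            # ground-truth differential image
    diff_fake = y_fake.detach() - x   # generated differential image (no grad to G)

    logits_real = D_diff(diff_real)
    logits_fake = D_diff(diff_fake)
    loss_real = F.binary_cross_entropy_with_logits(
        logits_real, torch.ones_like(logits_real))
    loss_fake = F.binary_cross_entropy_with_logits(
        logits_fake, torch.zeros_like(logits_fake))
    return 0.5 * (loss_real + loss_fake)
```

For the generator side, the mirrored term would score D_diff(G(x) - x) against real labels, adding the extra supervision the abstract describes.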
3. Unsupervised image translation with distributional semantics awareness
Authors: Zhexi Peng, He Wang, Yanlin Weng, Yin Yang, Tianjia Shao. Computational Visual Media (SCIE, EI, CSCD), 2023, No. 3, pp. 619-631 (13 pages).
Abstract: Unsupervised image translation (UIT) studies the mapping between two image domains. Since such mappings are under-constrained, existing research has pursued various desirable properties such as distributional matching or two-way consistency. In this paper, we re-examine UIT from a new perspective: distributional semantics consistency, based on the observation that data variations contain semantics, e.g., shoes varying in colors. Further, the semantics can be multi-dimensional, e.g., shoes also varying in style, functionality, etc. Given two image domains, matching these semantic dimensions during UIT will produce mappings with explicable correspondences, which has not been investigated previously. We propose distributional semantics mapping (DSM), the first UIT method which explicitly matches semantics between two domains. We show that distributional semantics has been rarely considered within and beyond UIT, even though it is a common problem in deep learning. We evaluate DSM on several benchmark datasets, demonstrating its general ability to capture distributional semantics. Extensive comparisons show that DSM not only produces explicable mappings, but also improves image quality in general.
Keywords: generative adversarial networks (GANs); manifold alignment; unsupervised learning; image-to-image translation; distributional semantics
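DSM's actual objective is not reproduced in this listing; as a toy stand-in for aligning per-dimension semantic distributions across domains, the sketch below matches the empirical quantiles of each latent dimension independently (1-D optimal transport under squared error). All names are assumptions, and this is not the authors' method.

```python
# Toy sketch: per-dimension distribution alignment between two domains'
# latent codes via sorted (quantile) matching. Illustrative only.
import torch

def marginal_alignment_loss(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """z_a, z_b: (batch, dims) latent codes from domain A and domain B.

    Sorting each dimension independently compares the empirical quantiles
    of the two batches, penalizing mismatched marginal distributions.
    """
    sorted_a, _ = torch.sort(z_a, dim=0)
    sorted_b, _ = torch.sort(z_b, dim=0)
    return ((sorted_a - sorted_b) ** 2).mean()
```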