Journal Articles
16 articles found
1. Digital image inpainting by example-based image synthesis method (Cited by: 1)
Authors: 聂栋栋 (Nie Dongdong), Ma Lizhuang, Xiao Shuangjiu. High Technology Letters, EI CAS, 2006, Issue 3, pp. 276-282 (7 pages)
A simple and effective image inpainting method is proposed in this paper, which is proved to be suitable for different kinds of target regions with shapes from little scraps to large unseemly objects in a wide range of images. It is an important improvement upon traditional image inpainting techniques. By introducing a new bijective-mapping term into the matching cost function, the artificial repetition problem in the final inpainted image is practically solved. In addition, by adopting an inpainting error map, not only are the target pixels refined gradually during the inpainting process, but the overlapped target patches are also combined more seamlessly than in previous methods. Finally, the inpainting time is dramatically decreased by using a new acceleration method in the matching process.
Keywords: inpainting, image synthesis, texture synthesis, priority, matching cost function, example patch, isophote, diffusion
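As a rough illustration of the exemplar-based matching this entry describes, the sketch below scores candidate source patches against a target patch using a sum-of-squared-differences cost over the known pixels; the usage-based reuse penalty is a hypothetical stand-in for the paper's bijective-mapping term, and the brute-force search ignores the paper's acceleration scheme.

```python
import numpy as np

def best_source_patch(image, mask, target_yx, usage, patch=9, reuse_penalty=0.1):
    """Pick the best-matching source patch for the target patch at target_yx.

    image : 2D float array (grayscale image being inpainted)
    mask  : 2D bool array, True where pixels are missing (the target region)
    usage : 2D float array counting how often each source centre was copied;
            penalising it is a crude, hypothetical surrogate for a
            bijective-mapping constraint against repeated copying.
    """
    h, w = image.shape
    r = patch // 2
    ty, tx = target_yx
    t_img = image[ty - r:ty + r + 1, tx - r:tx + r + 1]
    t_known = ~mask[ty - r:ty + r + 1, tx - r:tx + r + 1]   # pixels we can compare

    best_cost, best_yx = np.inf, None
    for sy in range(r, h - r):
        for sx in range(r, w - r):
            if mask[sy - r:sy + r + 1, sx - r:sx + r + 1].any():
                continue                                     # source patch must be fully known
            s_img = image[sy - r:sy + r + 1, sx - r:sx + r + 1]
            cost = ((s_img - t_img)[t_known] ** 2).sum() + reuse_penalty * usage[sy, sx]
            if cost < best_cost:
                best_cost, best_yx = cost, (sy, sx)
    if best_yx is not None:
        usage[best_yx] += 1
    return best_yx, best_cost
```

In a full inpainting loop this would be called for the highest-priority boundary patch, the matched pixels copied into the hole, and the mask and usage map updated.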
2. A Survey of GAN Based Image Synthesis
Author: Jiahe Ni. Journal of Information Hiding and Privacy Protection, 2022, Issue 2, pp. 79-88 (10 pages)
Image generation is currently a hot topic in academia and has been applied to AI drawing, which can produce vivid AI paintings without labor costs. In image generation, we represent the image as a random vector: assuming that images of natural scenes obey an unknown distribution, we hope to estimate that distribution from observed samples. In particular, with the development of the GAN (Generative Adversarial Network), in which the generator and discriminator improve the model's capability through adversarial training, the quality of generated images has kept increasing. Images generated by existing GAN-based models are so well painted that they can pass for genuine ones. After a brief introduction to the concept of GAN, this paper analyzes the main ideas of image synthesis and studies representative state-of-the-art (SOTA) GAN-based image synthesis methods.
Keywords: deep learning, image synthesis, SOTA, generative adversarial network
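To make the adversarial training that the survey refers to concrete, here is a minimal GAN training step in PyTorch on toy vectors; the two-layer MLPs, Adam settings, and non-saturating BCE objective are illustrative assumptions, not the configuration of any model covered by the survey.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 784            # illustrative sizes (e.g., flattened 28x28 images)

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                     # real: (batch, img_dim) tensor in [-1, 1]
    b = real.size(0)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: non-saturating loss, push D(fake) toward 1
    fake = G(torch.randn(b, latent_dim))
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example: one step on a random "real" batch (stand-in for a real image dataset)
print(train_step(torch.rand(16, img_dim) * 2 - 1))
```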
3. A Survey of Image Synthesis and Editing with Generative Adversarial Networks (Cited by: 19)
Authors: Xian Wu, Kun Xu, Peter Hall. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2017, Issue 6, pp. 660-674 (15 pages)
This paper presents a survey of image synthesis and editing with Generative Adversarial Networks (GANs). GANs consist of two deep networks, a generator and a discriminator, which are trained in a competitive way. Due to the power of deep networks and the competitive training manner, GANs are capable of producing reasonable and realistic images, and have shown great capability in many image synthesis and editing applications. This paper surveys recent GAN papers regarding topics including, but not limited to, texture synthesis, image inpainting, image-to-image translation, and image editing.
Keywords: image synthesis, image editing, constrained image synthesis, generative adversarial networks, image-to-image translation
4. Deep image synthesis from intuitive user input: A review and perspectives (Cited by: 2)
Authors: Yuan Xue, Yuan-Chen Guo, Han Zhang, Tao Xu, Song-Hai Zhang, Xiaolei Huang. Computational Visual Media, SCIE EI CSCD, 2022, Issue 1, pp. 3-31 (29 pages)
In many applications of computer graphics, art, and design, it is desirable for a user to provide intuitive non-image input, such as text, sketch, stroke, graph, or layout, and have a computer system automatically generate photo-realistic images according to that input. While classically, works that allow such automatic image content generation have followed a framework of image retrieval and composition, recent advances in deep generative models such as generative adversarial networks (GANs), variational autoencoders (VAEs), and flow-based methods have enabled more powerful and versatile image generation approaches. This paper reviews recent works for image synthesis given intuitive user input, covering advances in input versatility, image generation methodology, benchmark datasets, and evaluation metrics. This motivates new perspectives on input representation and interactivity, cross-fertilization between major image generation paradigms, and evaluation and comparison of generation methods.
Keywords: image synthesis, intuitive user input, deep generative models, synthesized image quality evaluation
5. A Comprehensive Pipeline for Complex Text-to-Image Synthesis
Authors: Fei Fang, Fei Luo, Hong-Pan Zhang, Hua-Jian Zhou, Alix L. H. Chow, Chun-Xia Xiao. Journal of Computer Science & Technology, SCIE EI CSCD, 2020, Issue 3, pp. 522-537 (16 pages)
Synthesizing a complex scene image with multiple objects and background according to a text description is a challenging problem. It needs to solve several difficult tasks across the fields of natural language processing and computer vision. We model it as a combination of semantic entity recognition, object retrieval and recombination, and objects' status optimization. To reach a satisfactory result, we propose a comprehensive pipeline to convert the input text to its visual counterpart. The pipeline includes text processing, foreground object and background scene retrieval, image synthesis using constrained MCMC, and post-processing. Firstly, we roughly divide the objects parsed from the input text into foreground objects and background scenes. Secondly, we retrieve the required foreground objects from a foreground object dataset segmented from the Microsoft COCO dataset, and retrieve an appropriate background scene image from a background image dataset extracted from the Internet. Thirdly, in order to ensure the rationality of foreground objects' positions and sizes in the image synthesis step, we design a cost function and use the Markov Chain Monte Carlo (MCMC) method as the optimizer to solve this constrained layout problem. Finally, to make the image look natural and harmonious, we further use Poisson-based and relighting-based methods to blend foreground objects and the background scene image in the post-processing step. The synthesized results and comparison results based on the Microsoft COCO dataset show that our method outperforms some of the state-of-the-art methods based on generative adversarial networks (GANs) in the visual quality of generated scene images.
Keywords: image synthesis, scene generation, text-to-image conversion, Markov Chain Monte Carlo (MCMC)
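The constrained-MCMC layout step can be illustrated with a minimal Metropolis-style sampler that jitters object positions and accepts moves according to a layout cost; the overlap and out-of-canvas penalty terms and the fixed temperature below are simplified assumptions, not the cost function used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def layout_cost(boxes, canvas=(1.0, 1.0)):
    """boxes: (n, 4) array of [x, y, w, h]; the cost penalises pairwise overlap
    and parts of boxes lying outside the unit canvas (illustrative terms only)."""
    cost = 0.0
    for i in range(len(boxes)):
        x, y, w, h = boxes[i]
        cost += max(0.0, x + w - canvas[0]) + max(0.0, y + h - canvas[1])
        cost += max(0.0, -x) + max(0.0, -y)
        for j in range(i + 1, len(boxes)):
            x2, y2, w2, h2 = boxes[j]
            ox = max(0.0, min(x + w, x2 + w2) - max(x, x2))
            oy = max(0.0, min(y + h, y2 + h2) - max(y, y2))
            cost += ox * oy                       # overlap area
    return cost

def mcmc_layout(boxes, iters=2000, step=0.05, temp=0.01):
    boxes = boxes.copy()
    cur = layout_cost(boxes)
    for _ in range(iters):
        prop = boxes.copy()
        k = rng.integers(len(boxes))
        prop[k, :2] += rng.normal(0.0, step, size=2)   # jitter one object's position
        new = layout_cost(prop)
        # Metropolis acceptance: always take improvements, sometimes take worse moves
        if new < cur or rng.random() < np.exp((cur - new) / temp):
            boxes, cur = prop, new
    return boxes, cur

init = rng.random((4, 4)) * np.array([0.8, 0.8, 0.3, 0.3])   # 4 random boxes
final, cost = mcmc_layout(init)
print(cost)
```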
6. A Novel Unsupervised MRI Synthetic CT Image Generation Framework with Registration Network
Authors: Liwei Deng, Henan Sun, Jing Wang, Sijuan Huang, Xin Yang. Computers, Materials & Continua, SCIE EI, 2023, Issue 11, pp. 2271-2287 (17 pages)
In recent years, radiotherapy based only on Magnetic Resonance (MR) images has become a hot spot for radiotherapy planning research in the medical field. However, functional computed tomography (CT) is still needed for dose calculation in the clinic. Recent deep-learning approaches to synthesizing CT images from MR images have raised much research interest, making radiotherapy based only on MR images possible. In this paper, we propose a novel unsupervised image synthesis framework with registration networks. This paper aims to enforce the constraints between the reconstructed image and the input image by registering the reconstructed image with the input image, and by registering the cycle-consistent image with the input image. Furthermore, this paper adds ConvNeXt blocks to the network and uses large-kernel convolutional layers to improve the network's ability to extract features. This research used collected head and neck data of 180 patients with nasopharyngeal carcinoma to conduct experiments and evaluated the trained model with four metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM), which reach 18.55±1.44, 86.91±4.31, 33.45±0.74, and 0.960±0.005, respectively. At the same time, this research made a quantitative comparison with several commonly used model frameworks: compared with other methods, MAE decreased by 2.17, RMSE decreased by 7.82, PSNR increased by 0.76, and SSIM increased by 0.011. The results show that the model proposed in this paper outperforms other methods in the quality of image synthesis. The work in this paper is of guiding significance to the study of MR-only radiotherapy planning.
Keywords: MRI-CT image synthesis, variational auto-encoder, medical image translation, MRI-only based radiotherapy
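The four reported metrics are standard and straightforward to reproduce. The sketch below computes MAE, RMSE, PSNR, and SSIM for a synthesized/ground-truth image pair, assuming both are float arrays on the same intensity scale; the data-range handling and the use of scikit-image's SSIM are assumptions about the setup, not details taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_pair(synth, truth, data_range=None):
    """MAE, RMSE, PSNR, and SSIM between a synthesized image and its ground truth."""
    synth = synth.astype(np.float64)
    truth = truth.astype(np.float64)
    if data_range is None:
        data_range = truth.max() - truth.min()
    mae = np.abs(synth - truth).mean()
    rmse = np.sqrt(((synth - truth) ** 2).mean())
    psnr = 20 * np.log10(data_range / rmse)
    ssim = structural_similarity(truth, synth, data_range=data_range)
    return mae, rmse, psnr, ssim

# Example on random stand-in slices (real use would load registered CT / synthetic-CT pairs)
truth = np.random.rand(128, 128) * 2000 - 1000         # e.g., Hounsfield-like values
synth = truth + np.random.normal(0, 20, truth.shape)   # noisy stand-in for a synthesized slice
print(evaluate_pair(synth, truth))
```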
7. Tight Sandstone Image Augmentation for Image Identification Using Deep Learning
Authors: Dongsheng Li, Chunsheng Li, Kejia Zhang, Tao Liu, Fang Liu, Jingsong Yin, Mingyue Liao. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 10, pp. 1209-1231 (23 pages)
Intelligent identification of sandstone slice images using deep learning technology is the development trend of mineral identification, and accurate mineral particle segmentation is the most critical step for intelligent identification. A typical identification model requires many training samples to learn as many distinguishable features as possible. However, limited by the difficulty of data acquisition, the high cost of labeling, and privacy protection, the number of samples is sparse and cannot meet the training requirements of deep learning image identification models. In order to increase the number of samples and improve the training effect of deep learning models, this paper proposes a tight sandstone image data augmentation method that combines the advantages of data deformation and data oversampling, taking the Putaohua reservoir in the Sanzhao Sag of the Songliao Basin as the target area. First, the Style Generative Adversarial Network (StyleGAN) is improved to generate high-resolution tight sandstone images to improve data diversity. Second, we improve the Automatic Data Augmentation (AutoAugment) algorithm to search for the optimal augmentation strategy to expand the data scale. Finally, we design comparison experiments to demonstrate that this method has obvious advantages in generated image quality and in improving the identification performance of deep learning models in real application scenarios.
Keywords: tight sandstone, image synthesis, generative adversarial networks, data augmentation, image segmentation
8. An Approach to Synthesize Diverse Underwater Image Dataset (Cited by: 3)
Authors: Xiaodong LIU, Ben M. CHEN. Instrumentation, 2019, Issue 3, pp. 67-75 (9 pages)
Images taken underwater mostly present color shift with hazy effects due to the special properties of water. Underwater image enhancement methods have been proposed to handle this issue. However, their enhancement results are only evaluated on a small number of underwater images. The lack of a sufficiently large and diverse dataset for efficient evaluation of underwater image enhancement methods motivates the present paper, which proposes an organized method to synthesize diverse underwater images that can function as a benchmark dataset. The synthesis is based on the underwater image formation model, which describes the physical degradation process. An indoor RGB-D image dataset is used as the seed for underwater-style image generation. The ambient light is simulated based on the statistical mean value of real-world underwater images. Attenuation coefficients for diverse water types are carefully selected. Finally, in total 14490 underwater images of 10 water types are synthesized. Based on the synthesized database, state-of-the-art image enhancement methods are appropriately evaluated. Besides, the large, diverse underwater image database is beneficial for the development of learning-based methods.
Keywords: image processing, underwater image enhancement, underwater image synthesis
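The synthesis rests on the widely used underwater image formation model, I(x) = J(x)·t(x) + A·(1 − t(x)) with per-channel transmission t(x) = exp(−β·d(x)). Below is a minimal sketch that degrades an RGB-D pair into an underwater-style image; the ambient light and attenuation coefficients are placeholder values, not the statistics or water-type coefficients selected in the paper.

```python
import numpy as np

def synthesize_underwater(rgb, depth, ambient=(0.05, 0.35, 0.45),
                          beta=(1.2, 0.45, 0.25)):
    """Apply the underwater image formation model to an RGB-D pair.

    rgb     : (H, W, 3) float array in [0, 1] (the clean 'seed' image J)
    depth   : (H, W) float array, scene depth in metres
    ambient : per-channel ambient light A (placeholder values)
    beta    : per-channel attenuation coefficients (placeholder values;
              real ones depend on the water type)
    """
    t = np.exp(-np.asarray(beta)[None, None, :] * depth[:, :, None])   # transmission map
    A = np.asarray(ambient)[None, None, :]
    return rgb * t + A * (1.0 - t)

# Example on a random stand-in for an indoor RGB-D sample
rgb = np.random.rand(240, 320, 3)
depth = np.random.uniform(0.5, 8.0, (240, 320))
underwater = synthesize_underwater(rgb, depth)
print(underwater.shape, underwater.min(), underwater.max())
```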
9. Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis (Cited by: 4)
Authors: Kai Zhang, Yawei Li, Jingyun Liang, Jiezhang Cao, Yulun Zhang, Hao Tang, Deng-Ping Fan, Radu Timofte, Luc Van Gool. Machine Intelligence Research, EI CSCD, 2023, Issue 6, pp. 822-836 (15 pages)
While recent years have witnessed a dramatic upsurge in exploiting deep neural networks for image denoising, existing methods mostly rely on simple noise assumptions, such as additive white Gaussian noise (AWGN), JPEG compression noise, and camera sensor noise, and a general-purpose blind denoising method for real images remains unsolved. In this paper, we attempt to solve this problem from the perspective of network architecture design and training data synthesis. Specifically, for the network architecture design, we propose a swin-conv block that incorporates the local modeling ability of the residual convolutional layer and the non-local modeling ability of the swin transformer block, and then plug it as the main building block into the widely used image-to-image translation UNet architecture. For the training data synthesis, we design a practical noise degradation model which takes into consideration different kinds of noise (including Gaussian, Poisson, speckle, JPEG compression, and processed camera sensor noises) and resizing, and also involves a random shuffle strategy and a double degradation strategy. Extensive experiments on AWGN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance and that the new degradation model can help to significantly improve practicability. We believe our work can provide useful insights into current denoising research. The source code is available at https://github.com/cszn/SCUNet.
Keywords: blind image denoising, real image denoising, data synthesis, Transformer, image signal processing (ISP) pipeline
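To illustrate the flavor of such a degradation model, the sketch below applies Gaussian, Poisson, speckle, and JPEG-compression degradations in a randomly shuffled order; the parameter ranges, application probabilities, and the use of OpenCV for the JPEG round trip are assumptions for illustration and omit the paper's resizing, camera-sensor, and double-degradation components.

```python
import random
import numpy as np
import cv2

rng = np.random.default_rng()

def add_gaussian(img):            # additive white Gaussian noise
    return img + rng.normal(0, rng.uniform(2, 25), img.shape)

def add_poisson(img):             # signal-dependent shot noise
    scale = rng.uniform(0.5, 4.0)
    return rng.poisson(np.clip(img, 0, 255) * scale) / scale

def add_speckle(img):             # multiplicative speckle noise
    return img * (1 + rng.normal(0, rng.uniform(0.02, 0.1), img.shape))

def add_jpeg(img):                # JPEG compression artefacts via encode/decode
    q = int(rng.integers(30, 95))
    ok, buf = cv2.imencode(".jpg", np.clip(img, 0, 255).astype(np.uint8),
                           [int(cv2.IMWRITE_JPEG_QUALITY), q])
    return cv2.imdecode(buf, cv2.IMREAD_UNCHANGED).astype(np.float64)

def degrade(img):
    """Apply a random subset of degradations in a random (shuffled) order."""
    ops = [add_gaussian, add_poisson, add_speckle, add_jpeg]
    random.shuffle(ops)
    for op in ops:
        if rng.random() < 0.7:    # each degradation applied with some probability
            img = op(img)
    return np.clip(img, 0, 255)

clean = rng.uniform(0, 255, (64, 64, 3))   # stand-in for a clean training patch
noisy = degrade(clean)
print(noisy.shape)
```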
10. Transformers in medical image analysis (Cited by: 1)
Authors: Kelei He, Chen Gan, Zhuoyuan Li, Islem Rekik, Zihao Yin, Wen Ji, Yang Gao, Qian Wang, Junfeng Zhang, Dinggang Shen. Intelligent Medicine, CSCD, 2023, Issue 1, pp. 59-78 (20 pages)
Transformers have dominated the field of natural language processing and have recently made an impact in the area of computer vision. In the field of medical image analysis, transformers have also been successfully applied to full-stack clinical applications, including image synthesis/reconstruction, registration, segmentation, detection, and diagnosis. This paper aims to promote awareness of the applications of transformers in medical image analysis. Specifically, we first provide an overview of the core concepts of the attention mechanism built into transformers and other basic components. Second, we review various transformer architectures tailored for medical image applications and discuss their limitations. Within this review, we investigate key challenges including the use of transformers in different learning paradigms, improving model efficiency, and coupling with other techniques. We hope this review will provide a comprehensive picture of transformers to readers with an interest in medical image analysis.
Keywords: Transformer, medical image analysis, deep learning, diagnosis, registration, segmentation, image synthesis, multi-task learning, multi-modal learning, weakly-supervised learning
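Since the review centers on the attention mechanism, a minimal single-head scaled dot-product self-attention sketch in NumPy is given below; the shapes and the single-head formulation are simplifications for illustration rather than any reviewed architecture.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x          : (n_tokens, d_model) input sequence (e.g., flattened image patches)
    wq, wk, wv : (d_model, d_k) projection matrices for queries, keys, values
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])                 # (n_tokens, n_tokens)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over keys
    return weights @ v                                       # (n_tokens, d_k)

# Example: 16 patch tokens with 32-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 32))
wq, wk, wv = [rng.normal(size=(32, 32)) * 0.1 for _ in range(3)]
print(self_attention(x, wq, wk, wv).shape)   # (16, 32)
```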
11. Evolution and Effectiveness of Loss Functions in Generative Adversarial Networks
Authors: Ali Syed Saqlain, Fang Fang, Tanvir Ahmad, Liyun Wang, Zain-ul Abidin. China Communications, SCIE CSCD, 2021, Issue 10, pp. 45-76 (32 pages)
Recently, the evolution of Generative Adversarial Networks (GANs) has embarked on a journey of revolutionizing the field of artificial and computational intelligence. To improve the generating ability of GANs, various loss functions have been introduced to measure the degree of similarity between the samples generated by the generator and the real data samples, with differing effectiveness in improving the generating ability of GANs. In this paper, we present a detailed survey of the loss functions used in GANs and provide a critical analysis of the pros and cons of these loss functions. First, the basic theory of GANs along with the training mechanism is introduced. Then, the most commonly used loss functions in GANs are introduced and analyzed. Third, experimental analyses and comparisons of these loss functions are presented for different GAN architectures. Finally, several suggestions on choosing suitable loss functions for image synthesis tasks are given.
Keywords: loss functions, deep learning, machine learning, unsupervised learning, generative adversarial networks (GANs), image synthesis
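For reference alongside the survey, the sketch below writes out a few of the most common GAN loss formulations (non-saturating, least-squares, Wasserstein, and hinge) as PyTorch functions on raw discriminator outputs; it does not reproduce the exact variants or regularizers analyzed in the paper.

```python
import torch
import torch.nn.functional as F

# d_real / d_fake are raw (unbounded) discriminator outputs, i.e. logits.

def vanilla_losses(d_real, d_fake):            # non-saturating ("vanilla") GAN loss
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return loss_d, loss_g

def lsgan_losses(d_real, d_fake):              # least-squares GAN
    loss_d = ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()
    loss_g = ((d_fake - 1) ** 2).mean()
    return loss_d, loss_g

def wgan_losses(d_real, d_fake):               # Wasserstein GAN (critic needs a Lipschitz constraint)
    loss_d = d_fake.mean() - d_real.mean()
    loss_g = -d_fake.mean()
    return loss_d, loss_g

def hinge_losses(d_real, d_fake):              # hinge GAN loss
    loss_d = F.relu(1 - d_real).mean() + F.relu(1 + d_fake).mean()
    loss_g = -d_fake.mean()
    return loss_d, loss_g

d_real, d_fake = torch.randn(8, 1), torch.randn(8, 1)
for fn in (vanilla_losses, lsgan_losses, wgan_losses, hinge_losses):
    print(fn.__name__, [round(v.item(), 3) for v in fn(d_real, d_fake)])
```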
12. One-shot Face Reenactment with Dense Correspondence Estimation
Authors: Yunfan Liu, Qi Li, Zhenan Sun. Machine Intelligence Research, EI CSCD, 2024, Issue 5, pp. 941-953 (13 pages)
One-shot face reenactment is a challenging task due to the identity mismatch between source and driving faces. Most existing methods fail to completely eliminate the interference of driving subjects' identity information, which may lead to face shape distortion and undermine the realism of reenactment results. To solve this problem, in this paper, we propose using a 3D morphable model (3DMM) for explicit facial semantic decomposition and identity disentanglement. Instead of using 3D coefficients alone for reenactment control, we take advantage of the generative ability of 3DMM to render textured face proxies. These proxies contain abundant yet compact geometric and semantic information of human faces, which enables us to compute the face motion field between source and driving images by estimating the dense correspondence. In this way, we can approximate reenactment results by warping source images according to the motion field, and a generative adversarial network (GAN) is adopted to further improve the visual quality of warping results. Extensive experiments on various datasets demonstrate the advantages of the proposed method over existing state-of-the-art benchmarks in both identity preservation and reenactment fulfillment.
Keywords: generative adversarial networks, face image manipulation, face image synthesis, face reenactment, 3D morphable model
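The step of warping the source image according to a dense motion field can be sketched as a simple backward warp; the use of SciPy's map_coordinates and a hand-made constant flow are illustrative assumptions and stand in for, rather than reproduce, the paper's correspondence estimation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backward_warp(image, flow):
    """Warp an image with a dense motion field.

    image : (H, W) or (H, W, C) float array
    flow  : (H, W, 2) array; flow[y, x] = (dy, dx) tells each output pixel
            where to sample from in the source image (backward mapping).
    """
    h, w = image.shape[:2]
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [yy + flow[..., 0], xx + flow[..., 1]]
    if image.ndim == 2:
        return map_coordinates(image, coords, order=1, mode="nearest")
    return np.stack([map_coordinates(image[..., c], coords, order=1, mode="nearest")
                     for c in range(image.shape[2])], axis=-1)

# Example: shift a random "source face" 3 pixels to the right via a constant flow field
src = np.random.rand(64, 64, 3)
flow = np.zeros((64, 64, 2))
flow[..., 1] = -3.0    # each output pixel samples 3 px to its left => content moves right
print(backward_warp(src, flow).shape)
```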
13. A survey of the state-of-the-art in patch-based synthesis (Cited by: 13)
Authors: Connelly Barnes, Fang-Lue Zhang. Computational Visual Media, CSCD, 2017, Issue 1, pp. 3-20 (18 pages)
This paper surveys the state-of-the-art of research in patch-based synthesis. Patch-based methods synthesize output images by copying small regions from exemplar imagery. This line of research originated from an area called "texture synthesis", which focused on creating regular or semi-regular textures from small exemplars. However, more recently, much research has focused on synthesis of larger and more diverse imagery, such as photos, photo collections, videos, and light fields. Additionally, recent research has focused on customizing the synthesis process for particular problem domains, such as synthesizing artistic or decorative brushes, synthesis of rich materials, and synthesis for 3D fabrication. This report investigates recent papers that follow these themes, with a particular emphasis on papers published since 2009, when the last survey in this area was published. This survey can serve as a tutorial for readers who are not yet familiar with these topics, as well as provide comparisons between these papers, and highlight some open problems in this area.
Keywords: texture, patch, image synthesis, texture synthesis
14. Hair Image Generation Using Connected Texels
Authors: 张晓鹏 (Zhang Xiaopeng), 陈彦云 (Chen Yanyun), 吴恩华 (Wu Enhua). Journal of Computer Science & Technology, SCIE EI CSCD, 2001, Issue 4, pp. 341-350 (10 pages)
Generation of photo-realistic images of human hair is a challenging topic in computer graphics. The difficulty in solving the problem comes mainly from the extremely large number of hairs and the high complexity of the hair shapes. Regarding the modeling and rendering of hair-type objects, Kajiya proposed a so-called texel model for producing furry surfaces. However, Kajiya's model could only be used for the generation of short hairs. In this paper, a concise and practical approach is presented to solve the problem of rendering long hairs, and in particular the method of rendering smooth segmental texels for the generation of long hairs is addressed.
Keywords: hair, rendering, texel, volume rendering, realistic image synthesis
15. Time-varying clustering for local lighting and material design (Cited by: 1)
Authors: HUANG PeiJie, GU YuanTing, WU XiaoLong, CHEN YanYun, WU EnHua. Science in China (Series F), 2009, Issue 3, pp. 445-456 (12 pages)
This paper presents an interactive graphics processing unit (GPU)-based relighting system in which local lighting conditions, surface materials, and viewing direction can all be changed on the fly. To support these changes, we simulate the lighting transportation process at run time, which is normally impractical for interactive use due to its huge computational burden. We greatly alleviate this burden with a hierarchical structure named a transportation tree that clusters similar emitting samples together within a perceptually acceptable error bound. Furthermore, by exploiting the coherence in time as well as in space, we incrementally adjust the clusters rather than computing them from scratch in each frame. With a pre-computed visibility map, we are able to efficiently estimate the indirect illumination in parallel on graphics hardware, by simply summing up the radiance shoots from cluster representatives, plus a small number of operations of merging and splitting on clusters. With relighting based on the time-varying clusters, interactive update of global illumination effects with multi-bounced indirect lighting is demonstrated in applications to material animation and scene decoration.
Keywords: photorealistic image synthesis, global illumination, lighting design, material design, time-varying clustering, local lighting, GPU
16. High fidelity virtual try-on network via semantic adaptation and distributed componentization
Authors: Chenghu Du, Feng Yu, Minghua Jiang, Ailing Hua, Yaxin Zhao, Xiong Wei, Tao Peng, Xinrong Hu. Computational Visual Media, SCIE EI CSCD, 2022, Issue 4, pp. 649-663 (15 pages)
Image-based virtual try-on systems have significant commercial value in online garment shopping. However, prior methods fail to appropriately handle details, so they are defective in maintaining the original appearance of organizational items including arms, the neck, and in-shop garments. We propose a novel high fidelity virtual try-on network to generate realistic results. Specifically, a distributed pipeline is used for simultaneous generation of organizational items. First, the in-shop garment is warped using thin plate splines (TPS) to give a coarse shape reference, and then a corresponding target semantic map is generated, which can adaptively respond to the distribution of different items triggered by different garments. Second, organizational items are componentized separately using our novel semantic map-based image adjustment network (SMIAN) to avoid interference between body parts. Finally, all components are integrated to generate the overall result by SMIAN. A priori dual-modal information is incorporated in the tail layers of SMIAN to improve the convergence rate of the network. Experiments demonstrate that the proposed method can retain better details of condition information than current methods. Our method achieves convincing quantitative and qualitative results on existing benchmark datasets.
Keywords: virtual try-on, conditional image synthesis, human parsing, thin plate spline, semantic adaptation
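The coarse garment warping step relies on a thin plate spline fitted to control-point correspondences. Below is a generic TPS sketch using SciPy's RBFInterpolator with made-up control points; it only illustrates the spline mapping itself, not the try-on network's learned warping module.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Made-up control point correspondences: where garment landmarks are in the
# in-shop image (src) and where they should end up on the person (dst).
src = np.array([[10, 10], [10, 50], [50, 10], [50, 50], [30, 30]], dtype=float)
dst = np.array([[12, 8], [9, 52], [53, 12], [48, 49], [32, 31]], dtype=float)

# Thin plate spline mapping from destination coordinates back to source coordinates
# (a backward mapping, so every output pixel knows where to sample the garment from).
tps = RBFInterpolator(dst, src, kernel="thin_plate_spline")

h, w = 64, 64
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)    # (h*w, 2) pixel coords
sample_coords = tps(grid).reshape(h, w, 2)                         # source coords per pixel
print(sample_coords.shape)
```

The per-pixel source coordinates produced this way can then drive a standard backward remapping of the garment image.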