Journal Articles (11 results)
1. A Novel Variational Image Model: Towards a Unified Approach to Image Editing (cited by 3)
Authors: 曾运, 陈为, 彭群生. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2006, Issue 2, pp. 224-231.
In this paper we propose a unified variational image editing model. It interprets image editing as a variational problem concerning adaptive adjustments to the zero- and first-derivatives of the image, which correspond to the color and gradient terms. By varying the definition domain of each of the two terms, as well as applying diverse operators, the new model is capable of tackling a variety of image editing tasks. It achieves visually better seamless image cloning than existing approaches. It also induces a new and efficient solution for adjusting the color of an image interactively and locally. Other image editing tasks, such as stylized processing, local illumination enhancement, and image sharpening, can be accomplished within the same unified variational framework. Experimental results verify the high flexibility and efficiency of the proposed model.
Keywords: image editing; image cloning; image color repairing; stylized processing; image sharpening
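From the abstract, the model couples a color (zero-derivative) term with a gradient (first-derivative) term over adjustable domains. One plausible reading of such an energy, sketched here with assumed weights, domains, and quadratic norms (not necessarily the paper's exact formulation):

$$E(u) = \int_{\Omega_0} \alpha(\mathbf{x})\,\lvert u - f \rvert^2 \,d\mathbf{x} \;+\; \int_{\Omega_1} \beta(\mathbf{x})\,\lVert \nabla u - \mathbf{v} \rVert^2 \,d\mathbf{x}$$

where u is the edited image, f a target color field, v a guidance gradient field, and Ω₀, Ω₁ the definition domains of the two terms. Varying Ω₀, Ω₁ and the operators that build f and v would then specialize the same energy to cloning, local recoloring, or sharpening.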
2. Image editing by object-aware optimal boundary searching and mixed-domain composition (cited by 2)
Authors: Shiming Ge, Xin Jin, Qiting Ye, Zhao Luo, Qiang Li. Computational Visual Media (CSCD), 2018, Issue 1, pp. 71-82.
When combining very different images, which often contain complex objects and backgrounds, producing consistent compositions is a challenging problem requiring seamless image editing. In this paper, we propose a general approach, called object-aware image editing, to obtain consistency in structure, color, and texture in a unified way. Our approach improves upon previous gradient-domain composition in three ways. Firstly, we introduce an iterative optimization algorithm to minimize mismatches on the boundaries when the target region contains multiple objects of interest. Secondly, we propose a mixed-domain consistency metric measuring both gradients and colors, and formulate composition as a unified minimization problem that can be solved with a sparse linear system. In particular, we encode texture consistency using a patch-based approach without searching and matching. Thirdly, we adopt an object-aware approach to separately manipulate the guidance gradient fields for objects of interest and backgrounds of interest, which facilitates a variety of seamless image editing applications. Our unified method outperforms previous state-of-the-art methods in preserving global texture consistency in addition to local structure continuity.
Keywords: seamless image editing; patch-based synthesis; image composition; mixed-domain; gradient-domain composition
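The mixed-domain idea, penalizing deviations in both colors and gradients and solving one sparse linear system, can be illustrated with a minimal 1-D least-squares sketch (illustrative only: the function name and the uniform weights w_color/w_grad are assumptions; the paper works on 2-D images with object-aware, per-pixel weighting):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def mixed_domain_solve(colors, grads, w_color=1.0, w_grad=10.0):
    """Solve min_u  w_color*||u - colors||^2 + w_grad*||D u - grads||^2,
    where D takes finite differences. colors has n entries, grads n-1."""
    n = len(colors)
    I = sp.identity(n)
    # D @ u yields u[i+1] - u[i] for i = 0 .. n-2
    D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    A = sp.vstack([np.sqrt(w_color) * I, np.sqrt(w_grad) * D]).tocsr()
    b = np.concatenate([np.sqrt(w_color) * np.asarray(colors),
                        np.sqrt(w_grad) * np.asarray(grads)])
    return lsqr(A, b)[0]   # least-squares blend of color and gradient targets
```

In a composite, the color targets would come from the source and target images and the gradient targets from the guidance field; the 2-D analogue replaces D with image finite-difference operators.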
3. Instant Edit Propagation on Images Based on Bilateral Grid (cited by 6)
Authors: Feng Li, Chaofeng Ou, Yan Gui, Lingyun Xiang. Computers, Materials & Continua (SCIE, EI), 2019, Issue 8, pp. 643-656.
The ability to quickly and intuitively edit digital content has become increasingly important in our everyday life. However, existing edit propagation methods for editing digital images are typically based on optimization, which is inefficient and highly time-consuming for large inputs. Accordingly, to improve edit efficiency, this paper proposes a novel edit propagation method using a bilateral grid, which can achieve instant propagation of sparse image edits. Firstly, given an input image with user interactions, we resample each of its pixels into a regularly sampled bilateral grid, which facilitates efficient mapping from the image to bilateral space. As a result, all pixels with the same feature information (color, coordinates) are clustered into the same grid cell, reducing both the amount of image data to process and the cost of calculation. We then reformulate propagation as an interpolation problem in bilateral space, which is solved very efficiently using radial basis functions. Experimental results show that our method improves the efficiency of color editing, making it faster than existing edit approaches while producing high-quality edited images.
Keywords: instant edit propagation; bilateral grid; radial basis function; image editing
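A minimal sketch of the described pipeline: splat pixels into a coarse bilateral grid, average the user's edit values in each scribbled cell, then interpolate edit values for all occupied cells with radial basis functions. The grid resolution, the Gaussian kernel, and every name below are assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propagate_edits(image, edits, mask, grid=(16, 16, 8, 8, 8)):
    """image: HxWx3 floats in [0, 1]; edits: HxW user edit values;
    mask: HxW bool, True on scribbled pixels (must be non-empty)."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # 5-D bilateral features (x, y, r, g, b), each normalized to [0, 1]
    feats = np.stack([xs / (w - 1), ys / (h - 1),
                      image[..., 0], image[..., 1], image[..., 2]], axis=-1)
    dims = np.array(grid)
    cell = np.minimum((feats * dims).astype(int), dims - 1)
    flat = np.ravel_multi_index(cell.reshape(-1, 5).T, grid)  # cell id per pixel
    # Average the user's edit values inside each scribbled grid cell.
    counts = np.bincount(flat[mask.ravel()], minlength=dims.prod())
    sums = np.bincount(flat[mask.ravel()], weights=edits[mask],
                       minlength=dims.prod())
    scribbled = np.nonzero(counts > 0)[0]
    centers = np.stack(np.unravel_index(scribbled, grid), -1) / dims
    values = sums[scribbled] / counts[scribbled]
    # Radial-basis-function interpolation over the sparse set of grid cells.
    rbf = RBFInterpolator(centers, values, kernel='gaussian', epsilon=3.0)
    occupied = np.unique(flat)
    cell_edit = np.zeros(dims.prod())
    cell_edit[occupied] = rbf(np.stack(np.unravel_index(occupied, grid), -1) / dims)
    return cell_edit[flat].reshape(h, w)   # slice back: per-pixel edit strength
```

Because the grid is tiny compared to the pixel count, fitting and evaluating the RBF is cheap, which is what makes this style of propagation plausible at interactive rates.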
4. Controllable image generation based on causal representation learning (cited by 1)
Authors: Shanshan Huang, Yuanhao Wang, Zhili Gong, Jun Liao, Shu Wang, Li Liu. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, Issue 1, pp. 135-148.
Artificial intelligence generated content (AIGC) has emerged as an indispensable tool for producing large-scale content in various forms, such as images, thanks to the significant role that AI plays in imitation and production. However, interpretability and controllability remain challenges: existing AI methods often struggle to produce images that are both flexible and controllable while respecting causal relationships within the images. To address this issue, we have developed a novel method for causal controllable image generation (CCIG) that combines causal representation learning with bi-directional generative adversarial networks (GANs). This approach enables humans to control image attributes while considering the rationality and interpretability of the generated images, and also allows for the generation of counterfactual images. The key to our approach, CCIG, lies in the use of a causal structure learning module to learn the causal relationships between image attributes, jointly optimized with the encoder, generator, and joint discriminator in the image generation module. By doing so, we can learn causal representations in the image's latent space and use causal intervention operations to control image generation. We conduct extensive experiments on a real-world dataset, CelebA; the results illustrate the effectiveness of CCIG.
Keywords: image generation; controllable image editing; causal structure learning; causal representation learning
5. A Survey of Image Synthesis and Editing with Generative Adversarial Networks (cited by 19)
Authors: Xian Wu, Kun Xu, Peter Hall. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2017, Issue 6, pp. 660-674.
This paper presents a survey of image synthesis and editing with Generative Adversarial Networks (GANs). GANs consist of two deep networks, a generator and a discriminator, which are trained in a competitive way. Due to the power of deep networks and this competitive training, GANs are capable of producing reasonable and realistic images, and have shown great capability in many image synthesis and editing applications. This paper surveys recent GAN papers on topics including, but not limited to, texture synthesis, image inpainting, image-to-image translation, and image editing.
Keywords: image synthesis; image editing; constrained image synthesis; generative adversarial networks; image-to-image translation
6. Free Appearance-Editing with Improved Poisson Image Cloning (cited by 1)
Authors: 别晓辉, 黄浩达, 王文成. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2011, Issue 6, pp. 1011-1016.
In this paper, we present a new edit tool that lets the user conveniently preserve or freely edit the object appearance during seamless image composition. We observe that although Poisson image editing is effective for seamless image composition, its color bleeding (the color of the target image is propagated into the source image) is not always desired in applications, and it provides no way for the user to edit the appearance of the source image. To make it more flexible and practical, we introduce new energy terms to control the appearance change, and integrate them into the Poisson image editing framework. The new energy function can still be minimized using efficient sparse linear solvers, and the user can interactively refine the constraints. With the new tool, the user enjoys not only seamless image composition but also the flexibility to preserve or manipulate the appearance of the source image at the same time. This provides more potential for creating new images. Experimental results demonstrate the effectiveness of our new edit tool, with time cost similar to the original Poisson image editing.
Keywords: Poisson image editing; appearance editing; edit propagation
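A grayscale sketch of the idea: standard Poisson cloning plus one extra energy term, weighted by lam, that pulls the solution back toward the source appearance and thereby limits color bleeding. The lam term is our stand-in for the paper's new energy terms (lam=0 recovers plain Poisson cloning), and the sketch assumes the mask does not touch the image border:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def clone(source, target, mask, lam=0.0):
    """Minimize sum |grad u - grad source|^2 + lam * sum (u - source)^2
    over the masked region, with target values as Dirichlet boundary."""
    h, w = target.shape
    idx = -np.ones((h, w), dtype=int)
    inner = np.argwhere(mask)                 # (n, 2) array of (y, x)
    idx[mask] = np.arange(len(inner))
    n = len(inner)
    A = sp.lil_matrix((n, n))
    b = np.zeros(n)
    for k, (y, x) in enumerate(inner):
        A[k, k] = 4 + lam
        b[k] = lam * source[y, x]             # appearance-preservation term
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] += source[y, x] - source[ny, nx]   # guidance gradient
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1
            else:
                b[k] += target[ny, nx]              # Dirichlet boundary value
    out = target.astype(float).copy()
    out[mask] = spsolve(A.tocsr(), b)
    return out
```

Run once per color channel. Raising lam keeps more of the source's own colors, which is the kind of appearance control the abstract describes.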
7. Facial Image Attributes Transformation via Conditional Recycle Generative Adversarial Networks (cited by 4)
Authors: Huai-Yu Li, Wei-Ming Dong, Bao-Gang Hu. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2018, Issue 3, pp. 511-521.
This study introduces a novel conditional recycle generative adversarial network for facial attribute transformation, which can transform high-level semantic face attributes without changing the identity. In our approach, we input a source facial image to the conditional generator with a target attribute condition to generate a face with the target attribute. We then recycle the generated face back through the same conditional generator with the source attribute condition, producing a face that should match the source face in both personal identity and facial attributes. Hence, we introduce a recycle reconstruction loss to enforce that the final generated facial image and the source facial image are identical. Evaluations on the CelebA dataset demonstrate the effectiveness of our approach. Qualitative results show that our approach can learn and generate high-quality identity-preserving facial images with specified attributes.
Keywords: generative adversarial network; image editing; facial attributes transformation
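From this description, the recycle reconstruction loss plausibly takes a form like the following (the choice of the L1 norm is our assumption):

$$\mathcal{L}_{\text{rec}} = \big\lVert G\big(G(x, c_t),\, c_s\big) - x \big\rVert_1$$

where x is the source face, G the conditional generator, and c_t, c_s the target and source attribute conditions; driving this loss to zero forces the attribute round trip to preserve identity.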
8. Image recoloring using geodesic distance based color harmonization (cited by 4)
Authors: Xujie Li, Hanli Zhao, Guizhi Nie, Hui Huang. Computational Visual Media, 2015, Issue 2, pp. 143-155.
In this paper, we present a computationally simple yet effective image recoloring method based on color harmonization. Our method permits the user to obtain recolored results interactively by rotating a harmonious template after color harmonization is complete. Two main improvements are made in this paper. Firstly, we give a new strategy for finding the most harmonious scheme: finding the template which best matches the hue distribution of the input image. Secondly, in order to achieve spatially coherent harmonization, geodesic distances are used to move hues lying outside the harmonious sectors into them. Experiments show that our approach can produce higher-quality, visually pleasing recolored images than existing methods. Moreover, our method is simple and easy to implement, and has good runtime performance.
Keywords: image editing; color harmonization; geodesic distance
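The basic hue operation, projecting hues that lie outside a harmonious sector onto its nearest border, can be sketched per pixel as below. This naive projection is what causes spatial incoherence; the paper's contribution is to drive the shift by geodesic distances instead (all names here are assumptions):

```python
import numpy as np

def project_to_sector(hue, center, width):
    """hue: float array of hues in degrees [0, 360); center/width: the
    harmonious sector's center and angular width in degrees."""
    d = (hue - center + 180.0) % 360.0 - 180.0    # signed circular distance
    outside = np.abs(d) > width / 2.0
    out = hue.copy()
    out[outside] = (center + np.sign(d[outside]) * width / 2.0) % 360.0
    return out
```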
9. Feature-preserving color pencil drawings from photographs
Authors: Dong Wang, Guiqing Li, Chengying Gao, Shengwu Fu, Yun Liang. Computational Visual Media (SCIE, EI, CSCD), 2023, Issue 4, pp. 807-825.
Color pencil drawing is well-loved due to its rich expressiveness. This paper proposes an approach for generating feature-preserving color pencil drawings from photographs. To mimic the tonal style of color pencil drawings, which are much lighter and have relatively lower saturation than photographs, we devise a lightness enhancement mapping and a saturation reduction mapping. The lightness mapping is a function with a monotonically decreasing derivative, which not only increases lightness but also preserves the features of the input photograph. Color saturation is usually related to lightness, so we suppress saturation as a function of lightness to yield a harmonious tone. Finally, two extremum operators are provided to generate a foreground-aware outline map in which the colors of the generated contours and the foreground object are consistent. Comprehensive experiments show that color pencil drawings generated by our method surpass existing methods in tone capture and feature preservation.
Keywords: non-photorealistic rendering; pencil drawings; image editing; feature preservation
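One simple mapping with the stated properties (our illustration, not the paper's actual function) is

$$f(L) = 1 - (1 - L)^{\gamma}, \qquad \gamma > 1, \quad L \in [0, 1],$$

which raises lightness everywhere, since f(L) ≥ L, while its derivative f'(L) = γ(1 − L)^{γ−1} is positive and monotonically decreasing, so the mapping remains strictly increasing (feature-preserving) while brightening without clipping.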
10. Lighting transfer across multiple views through local color transforms
Authors: Qian Zhang, Pierre-Yves Laffont, Terence Sim. Computational Visual Media (CSCD), 2017, Issue 4, pp. 315-324.
We present a method for transferring lighting between photographs of a static scene. Our method takes as input a photo collection depicting a scene with varying viewpoints and lighting conditions. We cast lighting transfer as an edit propagation problem, where the transfer of local illumination across images is guided by sparse correspondences obtained through multi-view stereo. Instead of directly propagating color, we learn local color transforms from corresponding patches in pairs of images and propagate these transforms in an edge-aware manner to regions with no correspondences. Our color transforms model the large variability of appearance changes in local regions of the scene, and are robust to missing or inaccurate correspondences. The method is fully automatic and can transfer strong shadows between images. We show applications of our image relighting method for enhancing photographs, browsing photo collections with harmonized lighting, and generating synthetic time-lapse sequences.
Keywords: relighting; photo collection; time-lapse; image editing
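Learning a local color transform from a pair of corresponding patches can be sketched as an affine least-squares fit; this is one plausible transform class, and all names below are assumptions rather than the paper's API:

```python
import numpy as np

def fit_color_transform(src_colors, dst_colors):
    """Fit a 4x3 affine map M so that [src, 1] @ M ~= dst.
    src_colors, dst_colors: (N, 3) RGB samples from corresponding patches."""
    X = np.hstack([src_colors, np.ones((len(src_colors), 1))])
    M, *_ = np.linalg.lstsq(X, dst_colors, rcond=None)
    return M

def apply_color_transform(M, colors):
    # Apply the fitted affine map to new pixels from the same local region.
    return np.hstack([colors, np.ones((len(colors), 1))]) @ M
```

Per the abstract, such transforms are then propagated in an edge-aware manner to regions without correspondences, rather than propagating colors directly.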
11. Reference-guided structure-aware deep sketch colorization for cartoons (cited by 2)
Authors: Xueting Liu, Wenliang Wu, Chengze Li, Yifan Li, Huisi Wu. Computational Visual Media (SCIE, EI, CSCD), 2022, Issue 1, pp. 135-148.
Digital cartoon production requires extensive manual labor to colorize sketches with visually pleasant color composition and color shading. During colorization, the artist usually takes an existing cartoon image as color guidance, particularly when colorizing related characters or an animation sequence. Reference-guided colorization is more intuitive than colorization with other hints, such as color points, scribbles, or text-based hints. Unfortunately, reference-guided colorization is challenging since the style of the colorized image should match the style of the reference image in terms of both global color composition and local color shading. In this paper, we propose a novel learning-based framework which colorizes a sketch based on a color style feature extracted from a reference color image. Our framework contains a color style extractor to extract the color feature from a color image, a colorization network to generate multi-scale output images by combining a sketch and a color feature, and a multi-scale discriminator to improve the realism of the output image. Extensive qualitative and quantitative evaluations show that our method outperforms existing methods, providing both superior visual quality and style-reference consistency in the task of reference-based colorization.
Keywords: sketch colorization; image style editing; deep feature understanding; reference-based image colorization