Funding: Supported by a grant from the Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology (Grant No. GZZKFJJ2020004); the National Natural Science Foundation of China (Grant Nos. 61875013 and 61827814); and the Natural Science Foundation of Beijing Municipality (Grant No. Z190018).
Abstract: The visible-light imaging systems used in military equipment are often subjected to severe weather, such as fog, haze, and smoke, and to complex lighting conditions at night, both of which significantly degrade the acquired images. Currently available image-defogging methods are mostly suited to daytime environments with natural light, and the clarity of images captured at night, under complex lighting and spatially varying fog, remains unsatisfactory. This study proposes an algorithm to remove night fog from single images based on an analysis of the statistical characteristics of night-fog scenes. A color channel transfer step is designed to compensate for the highly attenuated channel of foggy images acquired at night. The transmission map is estimated by the deep convolutional network DehazeNet, and the spatially varying atmospheric light is estimated point by point according to the maximum reflectance prior; the clear image is then recovered from these estimates. Experimental results show that, compared with conventional methods, the proposed approach compensates for the highly attenuated channel of nighttime foggy images, removes the glow produced by multi-colored, non-uniform ambient light sources, and improves both the adaptability and the visual quality of nighttime defogging.
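As an illustrative sketch (not the paper's DehazeNet pipeline), the final recovery step can be read off the standard imaging model I(x) = J(x)·t(x) + A(x)·(1 − t(x)), here with a spatially varying atmospheric light A(x) as the abstract describes. Assuming the transmission t(x) and A(x) have already been estimated, inverting the model is a per-pixel computation; the function name and the `t_min` floor below are illustrative choices, not from the paper:

```python
import numpy as np

def recover_scene(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the imaging model I = J*t + A*(1 - t) for a clear image J.

    hazy:              H x W x 3 observed foggy image in [0, 1]
    transmission:      H x W per-pixel transmission map t(x)
    atmospheric_light: H x W x 3 spatially varying atmospheric light A(x)
    t_min:             floor on t to avoid amplifying noise where fog is dense
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]  # broadcast over channels
    scene = (hazy - atmospheric_light * (1.0 - t)) / t
    return np.clip(scene, 0.0, 1.0)
```

With exact t and A, this recovers the scene radiance exactly; in practice the `t_min` floor trades residual haze in dense regions for noise suppression.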
Funding: Sponsored by the National High Technology Research and Development Program of China ("863" Program) (2006AA09Z207).
Abstract: Because human eyes are sensitive to brightness and color, the lightness information of the visible image, the degree of linear polarization, and the polarization angle were fused in hue-saturation-value (HSV) space. To better match human observation, hue adjustment based on color transfer was applied to the fused image, with the hue adjusted by a polynomial fitting method. The hue-adjustment method was further improved to account for the complicated real mapping relationship between the hue gray scale of the fused image and that of the reference template image. The results show that the color fusion method presented in this paper is superior to the traditional pseudo-color method and helps the observer correctly distinguish the target from its surroundings. The fused result reflects differences in the objects' polarization characteristics and yields a natural-looking image.
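A minimal sketch of the HSV fusion step described above, before any hue adjustment: the natural mapping is polarization angle → hue, degree of linear polarization → saturation, and visible-image lightness → value. The exact channel assignment and normalization below are assumptions for illustration; the paper's subsequent color-transfer hue correction is not reproduced here:

```python
import colorsys
import numpy as np

def fuse_polarization_hsv(lightness, dolp, aop):
    """Fuse visible lightness, degree of linear polarization (DoLP), and
    angle of polarization (AoP) into one false-color RGB image via HSV.

    lightness: H x W in [0, 1]  -> V channel (brightness, most salient to the eye)
    dolp:      H x W in [0, 1]  -> S channel (polarization strength)
    aop:       H x W in [0, pi) -> H channel (polarization orientation)
    """
    h = (aop / np.pi) % 1.0          # wrap orientation onto the hue circle
    s = np.clip(dolp, 0.0, 1.0)
    v = np.clip(lightness, 0.0, 1.0)
    rgb = np.empty(lightness.shape + (3,))
    for i in range(lightness.shape[0]):      # per-pixel stdlib conversion;
        for j in range(lightness.shape[1]):  # fine for illustration
            rgb[i, j] = colorsys.hsv_to_rgb(h[i, j], s[i, j], v[i, j])
    return rgb
```

Note that where DoLP is zero the output degenerates to the grayscale visible image, so unpolarized regions keep their natural appearance while polarized targets stand out in color.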
Funding: This project was supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 1 (RG20/20); the National Natural Science Foundation of China (61872347); and the Special Plan for the Development of Distinguished Young Scientists of ISCAS (Y8RC535018).
Abstract: The explosive growth of social media means portrait editing and retouching are in high demand. While portraits are commonly captured and stored as raster images, editing raster images is non-trivial and requires the user to be highly skilled. Aiming at intuitive and easy-to-use portrait editing tools, we propose a novel vectorization method that automatically converts raster images into a 3-tier hierarchical representation. The base layer consists of a set of sparse diffusion curves (DCs), which characterize salient geometric features and low-frequency colors and provide a means for semantic color transfer and facial expression editing. The middle level encodes specular highlights and shadows as large, editable Poisson regions (PRs), allowing the user to adjust illumination directly by tuning the strength and changing the shapes of the PRs. The top level contains two types of pixel-sized PRs for high-frequency residuals and fine details such as pimples and pigmentation. We train a deep generative model that produces the high-frequency residuals automatically. Thanks to the inherent meaning of the vector primitives, editing portraits becomes easy and intuitive: our method supports color transfer, facial expression editing, highlight and shadow editing, and automatic retouching. To quantitatively evaluate the results, we extend the commonly used FLIP metric (which measures color and feature differences between two images) to account for illumination. The new metric, illumination-sensitive FLIP, effectively captures salient changes in color transfer results and is more consistent with human perception than FLIP and other quality measures for portrait images. We evaluate our method on the FFHQR dataset and show it to be effective for common portrait editing tasks such as retouching, light editing, color transfer, and expression editing.
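For context on the color-transfer task the abstract evaluates, a common global baseline (in the spirit of Reinhard et al., and distinct from the paper's semantic, diffusion-curve-based transfer) simply matches per-channel first- and second-order statistics of a source image to a reference. The sketch below works directly in RGB for simplicity, whereas Reinhard's original formulation uses a decorrelated lαβ space:

```python
import numpy as np

def stats_color_transfer(source, reference):
    """Global color transfer: match each channel's mean and standard
    deviation of `source` to those of `reference` (images in [0, 1])."""
    out = np.empty_like(source, dtype=float)
    for c in range(3):
        s = source[..., c].astype(float)
        r = reference[..., c].astype(float)
        scale = r.std() / (s.std() + 1e-8)      # guard against flat channels
        out[..., c] = (s - s.mean()) * scale + r.mean()
    return np.clip(out, 0.0, 1.0)
```

Such a global baseline shifts the overall palette but cannot target semantic regions (skin, lips, hair) independently, which is the gap the hierarchical vector representation is designed to close.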