Funding: Supported by the National Natural Science Foundation of China (62276192).
Abstract: Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but collecting suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, offering more convenience and better generalization. Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment schemes, leaving remnants of other degradations, uneven brightness, and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE. It handles multiple degradations, including illumination attenuation, noise pollution, and color shift, all in a self-supervised manner. Illumination attenuation is estimated based on physical principles and local neighborhood information. Noise removal and color shift correction are realized solely with noisy images and images exhibiting color shifts. This comprehensive, fully self-supervised approach achieves better adaptability and generalization: it is applicable to various low-light conditions and can reproduce the original colors of scenes under natural light. Extensive experiments on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.
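To make the illumination-estimation step concrete, here is a minimal illustrative sketch (not the SLIE code) of a common self-supervised starting point: an illumination map from the per-pixel channel maximum, smoothed over a local neighborhood, followed by a simple Retinex-style adjustment. All function names and parameters below are our own assumptions.

```python
# Illustrative sketch (not the SLIE code): estimate an illumination map from the
# max-RGB prior smoothed over a local neighborhood, then apply a Retinex-style
# adjustment. Function names and parameters are our own assumptions.
import cv2
import numpy as np

def estimate_illumination(img_bgr, ksize=15, eps=1e-3):
    """Rough illumination map: per-pixel channel maximum, locally smoothed."""
    img = img_bgr.astype(np.float32) / 255.0
    init = img.max(axis=2)                      # max-RGB prior
    illum = cv2.blur(init, (ksize, ksize))      # local-neighborhood smoothing
    return np.clip(illum, eps, 1.0)

def enhance(img_bgr, gamma=0.6):
    img = img_bgr.astype(np.float32) / 255.0
    illum = estimate_illumination(img_bgr)
    adjusted = illum ** gamma                   # attenuate the estimated illumination
    out = img / illum[..., None] * adjusted[..., None]
    return np.clip(out * 255, 0, 255).astype(np.uint8)

# Usage: enhanced = enhance(cv2.imread("low_light.png"))
```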
Funding: Supported by the National Key Research and Development Program (Grant No. 2021YFB4000905), the National Natural Science Foundation of China (Grant Nos. 62101432 and 62102309), and in part by the Shaanxi Natural Science Fundamental Research Program (No. 2022JM-508).
Abstract: Low-light image enhancement methods have limitations in addressing issues such as color distortion, lack of vibrancy, and uneven light distribution, and they often require paired training data. To address these issues, we propose a two-stage unsupervised low-light image enhancement algorithm called Retinex and Exposure Fusion Network (RFNet), which overcomes the over-enhancement of high dynamic ranges and under-enhancement of low dynamic ranges seen in existing enhancement algorithms. By training with unpaired low-light and regular-light images, the algorithm better manages the challenges posed by complex real-world environments. In the first stage, we design a multi-scale feature extraction module based on Retinex theory, capable of extracting details and structural information at different scales to generate high-quality illumination and reflection images. In the second stage, an exposure image generator built on a camera response function acquires exposure images containing more dark-region features, and the generated images are fused with the original inputs to complete the enhancement. Experiments show the effectiveness and rationality of each module designed in this paper. The method reconstructs contrast details and color distribution, outperforms current state-of-the-art methods in both qualitative and quantitative metrics, and shows excellent performance in the real world.
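The second stage can be illustrated with a short sketch of exposure-image generation through a camera response model followed by fusion with the input. The beta-gamma response model and its constants are a commonly used formulation from the exposure-fusion literature and are assumptions here, not necessarily the exact model used by RFNet.

```python
# Illustrative sketch of the second stage: synthesize a brighter exposure through a
# camera response model and fuse it with the input. The beta-gamma model and its
# constants (a, b) follow a common exposure-fusion formulation, assumed here.
import numpy as np

def apply_camera_response(img, k, a=-0.3293, b=1.1258):
    """Simulate re-exposing `img` (float array in [0, 1]) by exposure ratio k."""
    gamma = k ** a
    beta = np.exp(b * (1 - gamma))
    return np.clip(beta * np.power(img, gamma), 0, 1)

def fuse_with_exposure(img_bgr_u8, k=4.0, weight_power=0.5):
    img = img_bgr_u8.astype(np.float32) / 255.0
    bright = apply_camera_response(img, k)           # exposure image with dark-region detail
    lum = img.max(axis=2, keepdims=True)
    w = np.power(lum, weight_power)                  # keep well-exposed pixels from the input
    fused = w * img + (1 - w) * bright
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```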
Abstract: In recent years, learning-based low-light image enhancement methods have shown excellent performance, but the heuristic design adopted by most of them demands high engineering skill from developers and incurs inference costs that are unfriendly to hardware platforms. To handle this issue, we propose to automatically discover an efficient architecture, called the progressive attentive Retinex network (PAR-Net). We define a new attentive Retinex framework that introduces an attention mechanism to strengthen structural representation. A multi-level search space, spanning the micro level of operations and the macro level of cells, is established to enable meticulous construction. To endow the searched architecture with hardware-aware properties, we develop a latency-constrained progressive search strategy that improves model capability by explicitly expressing the intrinsic relationship between the different models defined in the attentive Retinex framework. Extensive quantitative and qualitative experimental results justify the superiority of the proposed approach over other state-of-the-art methods, and a series of analytical evaluations illustrates the validity of the proposed algorithm.
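As a toy illustration of a latency-constrained selection criterion of the kind used in hardware-aware architecture search (not PAR-Net's actual strategy), one can rank candidate architectures by enhancement quality penalized by measured latency; all names and numbers below are illustrative.

```python
# Toy sketch of latency-constrained architecture selection: rank candidates by
# enhancement quality (PSNR) penalized by latency over a budget. Illustrative only.
candidates = [
    {"name": "cell_a", "psnr": 21.4, "latency_ms": 18.0},
    {"name": "cell_b", "psnr": 22.1, "latency_ms": 35.0},
    {"name": "cell_c", "psnr": 21.9, "latency_ms": 22.0},
]
LATENCY_BUDGET_MS = 25.0
PENALTY = 0.05  # PSNR points traded per millisecond over budget

def score(c):
    over = max(0.0, c["latency_ms"] - LATENCY_BUDGET_MS)
    return c["psnr"] - PENALTY * over

best = max(candidates, key=score)
print(best["name"])  # "cell_c": nearly the best PSNR while respecting the budget
```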
Funding: This work was supported by the Shanghai Aerospace Science and Technology Innovation Fund (No. SAST2019-048) and the Cross-Media Intelligent Technology Project of the Beijing National Research Center for Information Science and Technology (BNRist) (No. BNR2019TD01022).
Abstract: Poor illumination greatly degrades the quality of captured images. In this paper, a novel convolutional neural network named DEANet is proposed on the basis of Retinex theory for low-light image enhancement. DEANet combines the frequency and content information of images and is divided into three subnetworks: a decomposition network, an enhancement network, and an adjustment network, which respectively perform image decomposition; denoising, contrast enhancement, and detail preservation; and image adjustment and generation. The model is trained on the public LOL dataset, and the experimental results show that it outperforms existing state-of-the-art methods in visual effect and image quality.
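A minimal PyTorch skeleton of a Retinex-style decomposition subnetwork helps make the three-stage structure concrete; the layer sizes and names below are illustrative assumptions, not DEANet's actual architecture.

```python
# Minimal PyTorch skeleton of a Retinex-style decomposition subnetwork, to make the
# three-stage structure concrete; layer sizes and names are illustrative, not DEANet's.
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Predicts a 3-channel reflectance and a 1-channel illumination from an RGB image."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 4, 3, padding=1),
        )
    def forward(self, x):
        out = self.body(x)
        reflectance = torch.sigmoid(out[:, :3])
        illumination = torch.sigmoid(out[:, 3:4])
        return reflectance, illumination

r, l = DecompositionNet()(torch.rand(1, 3, 64, 64))  # shapes: (1,3,64,64) and (1,1,64,64)
```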
Funding: Supported by the International Science & Technology Cooperation Program of China (2015DFA00090) and the Special Fund for Agro-scientific Research in the Public Interest (201203017).
Abstract: Sea cucumbers usually live in environments where lighting and visibility are generally not controllable, which causes underwater images of sea cucumbers to be distorted, blurred, and severely attenuated; valuable information therefore cannot be fully extracted from such images for further processing. To solve these problems and improve the quality of underwater images of sea cucumbers, pre-processing of sea cucumber images is attracting increasing interest. This paper presents a new method based on contrast limited adaptive histogram equalization and the wavelet transform (CLAHE-WT) to enhance sea cucumber image quality. CLAHE increases the contrast of the underwater image based on the Rayleigh distribution, and the WT performs de-noising based on soft thresholding. Qualitative analysis indicated that the proposed method performs better in enhancing quality and retaining image details. For quantitative analysis, a test on 120 underwater images showed that the proposed method achieved a mean square error (MSE) of 49.2098, a peak signal-to-noise ratio (PSNR) of 13.3909, and an entropy of 6.6815. The proposed method outperformed three established methods in enhancing the visual quality of sea cucumber underwater grayscale images.
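A compact sketch of the CLAHE-WT idea on a grayscale image is given below: CLAHE for contrast stretching, then soft-threshold wavelet shrinkage for denoising. The wavelet and threshold choices are illustrative defaults, not the paper's exact settings.

```python
# Sketch of the CLAHE-WT idea on a grayscale image: CLAHE for contrast, then
# soft-threshold wavelet shrinkage for denoising. Wavelet and threshold choices
# are illustrative defaults, not the paper's exact settings.
import cv2
import numpy as np
import pywt

def clahe_wt(gray_u8, clip=2.0, tiles=(8, 8), wavelet="db4", level=2):
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    eq = clahe.apply(gray_u8).astype(np.float32)

    coeffs = pywt.wavedec2(eq, wavelet, level=level)
    # Universal threshold estimated from the finest-scale diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(eq.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    out = pywt.waverec2(denoised, wavelet)
    return np.clip(out, 0, 255).astype(np.uint8)
```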
Funding: Supported in part by the National Natural Science Foundation of China (No. 62072169) and the Changsha Science and Technology Research Plan (No. KQ2004005).
Abstract: Most learning-based low-light image enhancement methods typically suffer from two problems. First, they require a large amount of paired data for training, which is difficult to acquire in most cases. Second, during enhancement, image noise is difficult to remove and may even be amplified; in other words, performing denoising and illumination enhancement at the same time is difficult. As an alternative to supervised learning strategies that use large amounts of paired data, as in previous work, this paper presents a mixed-attention guided generative adversarial network called MAGAN for low-light image enhancement in a fully unsupervised fashion. We introduce a mixed-attention module layer, which can model the relationship between each pixel and feature of the image. In this way, our network can enhance a low-light image and remove its noise simultaneously. In addition, we conduct extensive experiments on paired and no-reference datasets to show the superiority of our method in enhancing low-light images.
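The sketch below shows a generic attention block combining channel and spatial gating, to make the idea of modeling relationships between pixels and features concrete; it is an illustrative design, not the exact mixed-attention module of MAGAN.

```python
# Illustrative PyTorch attention block combining channel and spatial attention; a
# generic design, not the exact mixed-attention module from MAGAN.
import torch
import torch.nn as nn

class MixedAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )
    def forward(self, x):
        x = x * self.channel_gate(x)     # re-weight feature channels
        return x * self.spatial_gate(x)  # re-weight spatial positions

y = MixedAttention(32)(torch.rand(1, 32, 64, 64))  # shape preserved: (1, 32, 64, 64)
```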
Funding: Supported by the China Scholarship Council, the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX17_0776), and the Natural Science Foundation of NUPT (No. NY214039).
Abstract: A new image enhancement algorithm based on Retinex theory is proposed to address the poor visual quality of images captured in low-light conditions. First, the image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, illuminations are estimated on the V channel by guided filtering and by a variational framework, respectively, and combined into a new illumination according to the average gradient. The new reflectance is calculated from the V channel and the new illumination. A new V channel, obtained by multiplying the new illumination and reflectance, is then processed with contrast limited adaptive histogram equalization (CLAHE). Finally, the image in HSV space is converted back to RGB space to obtain the enhanced image. Experimental results show that the proposed method achieves better subjective and objective quality than existing methods.
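The following sketch outlines the pipeline on the HSV value channel, with a single guided-filter illumination estimate standing in for the paper's combination of guided filtering and a variational framework, and a mild gamma adjustment of the illumination added for visibility. It requires opencv-contrib-python for cv2.ximgproc, and all parameter values are illustrative.

```python
# Sketch of the pipeline on the HSV value channel: a guided-filter illumination
# estimate (standing in for the paper's guided-filter/variational combination),
# reflectance recovery, a mild gamma on the illumination (our addition), and CLAHE
# on the recombined V channel. Requires opencv-contrib-python for cv2.ximgproc.
import cv2
import numpy as np

def enhance_hsv(img_bgr, radius=16, eps=1e-2, clip=2.0):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0

    illum = cv2.ximgproc.guidedFilter(v, v, radius, eps)   # edge-aware illumination estimate
    illum = np.clip(illum, 1e-3, 1.0)
    reflectance = np.clip(v / illum, 0, 1)

    new_v = np.clip(reflectance * np.power(illum, 0.7), 0, 1)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    hsv[..., 2] = clahe.apply((new_v * 255).astype(np.uint8)).astype(np.float32)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```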
Funding: Supported by the National Natural Science Foundation of China (61501260), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX17_0776), and the Research Project of Nanjing University of Posts and Telecommunications (NY218089 & NY219076).
Abstract: To improve the visibility and contrast of low-light images while better preserving their edges and details, a new low-light color image enhancement algorithm is proposed in this paper. The steps of the proposed algorithm are as follows. First, the image is converted from the red, green, and blue (RGB) color space to the hue, saturation, and value (HSV) color space, and histogram equalization (HE) is performed on the value component. Next, the non-subsampled shearlet transform (NSST) is applied to the value component to decompose it into a low-frequency sub-band and several high-frequency sub-bands. Then, the low-frequency sub-band and the high-frequency sub-bands are enhanced by Gamma correction and improved guided image filtering (IGIF), respectively, and the enhanced value component is formed by the inverse NSST. Finally, the image is converted back to the RGB color space to obtain the enhanced image. Experimental results show that the proposed method not only significantly improves visibility and contrast but also better preserves image edges and details.
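No widely used NSST implementation exists in standard Python packages, so the sketch below substitutes a stationary wavelet decomposition to illustrate the structure: a brightening adjustment of the low-frequency band and mild edge-preserving smoothing of the high-frequency bands before reconstruction. The transform, filters, and parameters are stand-ins, not the paper's method.

```python
# Stand-in sketch: a stationary wavelet decomposition (pywt.swt2) substitutes for the
# NSST. Brighten the low-frequency band, lightly smooth the high-frequency bands,
# then reconstruct. Assumes even image dimensions; all parameters are illustrative.
import cv2
import numpy as np
import pywt

def enhance_value_channel(v_u8, gamma=0.6, wavelet="haar"):
    v = v_u8.astype(np.float32) / 255.0
    coeffs = pywt.swt2(v, wavelet, level=1)

    out = []
    for approx, details in coeffs:
        approx = np.power(np.clip(approx, 0, None), gamma)   # brighten low frequencies
        details = tuple(
            cv2.bilateralFilter(np.ascontiguousarray(d, dtype=np.float32), 5, 0.1, 5)
            for d in details
        )                                                     # edge-preserving detail smoothing
        out.append((approx, details))
    rec = pywt.iswt2(out, wavelet)
    return np.clip(rec * 255, 0, 255).astype(np.uint8)
```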
Funding: Supported by the Teaching Team Project of the Hubei Provincial Department of Education (203201929203), the Natural Science Foundation of Hubei Province (2021CFB316), the New Generation Information Technology Innovation Project of the Ministry of Education (20202020ITA05022), and the Hundreds of Schools Unite with Hundreds of Counties-University Serving Rural Revitalization Science and Technology Support Action Plan (BXLBX0847).
Abstract: Image enhancement is a fundamental task in the fields of computer vision and image processing. Existing methods are insufficient for preserving naturalness and minimizing noise. This article presents a wavelet-based technique for enhancing images taken in low light. First, the V channel is obtained by mapping the image's RGB channels to the HSV color space. Second, the V channel is decomposed with the dual-tree complex wavelet transform (DT-CWT) to separate the information concentrated in its high- and low-frequency sub-bands. Third, an adaptive illumination boost technique enhances the visibility of the low-frequency component, while anisotropic diffusion mitigates the noise in the high-frequency components. The image is then reconstructed by the inverse DT-CWT and converted back to RGB space using the newly computed V channel. Additionally, images are white-balanced to remove color casts. Experiments demonstrate that the proposed approach significantly improves results and generally outperforms previously reported methods.
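A sketch using the Python dtcwt package illustrates the decomposition-and-boost structure: decompose the V channel, brighten the lowpass band, attenuate the high-frequency coefficients, and reconstruct. The boost and shrink rules below are simple illustrative substitutes for the paper's adaptive illumination boost and anisotropic diffusion.

```python
# Sketch using the Python `dtcwt` package: decompose the V channel, brighten the
# lowpass band, attenuate high-frequency coefficients, and reconstruct. The boost and
# shrink rules are simple substitutes for the paper's adaptive illumination boost and
# anisotropic diffusion.
import numpy as np
import dtcwt

def enhance_v_dtcwt(v_float, gamma=0.6, shrink=0.8, nlevels=3):
    """v_float: 2-D array in [0, 1] (the HSV value channel)."""
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(v_float, nlevels=nlevels)

    low = pyramid.lowpass
    scale = max(float(low.max()), 1e-6)
    pyramid.lowpass = np.power(np.clip(low / scale, 0, None), gamma) * scale  # brighten base
    pyramid.highpasses = tuple(h * shrink for h in pyramid.highpasses)        # soften noise
    return np.clip(transform.inverse(pyramid), 0, 1)
```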
Funding: Supported by the Parker Institute for Cancer Immunotherapy (PICI), Grant No. 20163828, and by the Office of Naval Research (ONR) Multidisciplinary University Research Initiatives (MURI) program on Optical Computing, Award Number N00014-14-1-0505.
Abstract: The history of computing started with analog computers: physical devices performing specialized functions such as predicting the positions of astronomical bodies and the trajectories of cannon balls. In modern times, this idea has been extended, for example, to ultrafast nonlinear optics serving as a surrogate analog computer to probe the behavior of complex phenomena such as rogue waves. Here we discuss a new paradigm in which physical phenomena, coded as an algorithm, perform computational imaging tasks. Specifically, diffraction followed by coherent detection becomes an image enhancement tool. Vision Enhancement via Virtual diffraction and coherent Detection (VEViD) reimagines a digital image as a spatially varying metaphoric "lightfield" and then subjects the field to physical processes akin to diffraction and coherent detection. The term "Virtual" captures the deviation from the physical world: the light field is pixelated, and the propagation imparts a phase whose frequency dependence differs from the monotonically increasing behavior of physical diffraction. Temporal frequencies exist in three bands corresponding to the RGB color channels of a digital image. The phase of the output, not its intensity, represents the output image. VEViD is a high-performance low-light-level and color enhancement tool that emerges from this paradigm. The algorithm is extremely fast, interpretable, and reduces to a compact and intuitively appealing mathematical expression. We demonstrate enhancement of 4K video at over 200 frames per second and show the utility of this physical algorithm in improving the accuracy of object detection by neural networks in low-light conditions. The application of VEViD to color enhancement is also demonstrated.
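A minimal numpy sketch of the virtual-diffraction and coherent-detection idea is given below: treat a channel as a field, impart a spectral phase concentrated at low frequencies, and read out the phase of the result as the enhanced channel. The kernel shape and constants are our illustrative choices, not the exact VEViD formulation.

```python
# Minimal numpy sketch of the virtual-diffraction / coherent-detection idea: treat a
# channel as a field, impart a spectral phase concentrated at low frequencies, and
# read out the phase of the result as the enhanced channel. Kernel shape and
# constants are illustrative choices, not the exact VEViD formulation.
import numpy as np

def vevid_like(channel, strength=0.3, variance=0.02, bias=0.1):
    """channel: 2-D array in [0, 1], e.g. one RGB color channel."""
    fy = np.fft.fftfreq(channel.shape[0])
    fx = np.fft.fftfreq(channel.shape[1])
    r2 = fy[:, None] ** 2 + fx[None, :] ** 2
    kernel = np.exp(-r2 / variance)                 # phase concentrated at low frequencies

    spectrum = np.fft.fft2(channel)
    diffracted = np.fft.ifft2(spectrum * np.exp(-1j * strength * kernel))
    phase = np.arctan2(np.imag(diffracted), channel + bias)   # coherent-detection readout
    phase -= phase.min()
    return phase / (phase.max() + 1e-8)             # normalize to [0, 1] for display
```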