Funding: This work is supported in part by the National Natural Science Foundation of China (Grant No. 61971078), which provided domain expertise and computational power that greatly assisted the work. This work was also financially supported by a Chongqing Municipal Education Commission Major Science and Technology Project grant (Grant No. gzlcx20243175).
Abstract: Semantic segmentation of driving-scene images is crucial for autonomous driving. While deep learning has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors such as poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines the image processing filters to predict parameters such as exposure and hue, optimizing image quality. We adopt a novel image encoder that enables Edip to handle features at different scales, improving parameter prediction accuracy. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine the segmentation outputs. The entire network is trained end-to-end, with the segmentation results guiding the optimization of parameter prediction, promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for addressing the challenges of nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and Nightcity datasets.
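The core idea of an adaptive-enhancement front end is to apply a small chain of differentiable image filters whose parameters are predicted per image. A minimal sketch of such a filter chain, with hypothetical exposure and gamma parameters standing in for the values an Edip-style predictor would output (the actual filter set and parameterization in the paper are not specified here):

```python
import numpy as np

def apply_exposure(img, ev):
    # Exposure filter: scale intensities by 2**ev, clip to [0, 1].
    return np.clip(img * (2.0 ** ev), 0.0, 1.0)

def apply_gamma(img, g):
    # Gamma filter: values of g < 1 brighten dark regions.
    return np.clip(img, 1e-6, 1.0) ** g

def enhance(img, params):
    # Apply the filters in sequence; in the full method these parameters
    # would be predicted by a network rather than hand-set.
    out = apply_exposure(img, params["ev"])
    out = apply_gamma(out, params["gamma"])
    return out

dark = np.full((4, 4, 3), 0.1)          # synthetic underexposed patch
bright = enhance(dark, {"ev": 1.0, "gamma": 0.8})
print(bright.mean() > dark.mean())      # enhancement raises brightness
```

Because both filters are differentiable, gradients from a downstream segmentation loss can flow back into the parameter predictor, which is what allows the end-to-end training the abstract describes.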
Funding: Under the auspices of the Natural Science Foundation of China (Nos. 42071342, 31870713), the Beijing Natural Science Foundation Program (No. 8182038), and the Fundamental Research Funds for the Central Universities (Nos. 2015ZCQ-LX-01, 2018ZY06).
Abstract: With the continuous urbanization of China, the country's growing population brings great challenges to urban development. By mapping the refined spatial distribution of population within administrative units, the quantity and agglomeration of the population can be estimated and visualized, providing a basis for more rational urban planning. This paper takes Beijing as the study area and uses new high-resolution Luojia 1-01 nighttime light imagery, land-use type data, Points of Interest (POI) data, and other data to construct a population spatial index system, establishing the index weights through principal component analysis. The comprehensive weight values of population distribution in the study area were then used to calculate the street-level population distribution of Beijing in 2018, and the result was visualized using GIS technology. An accuracy assessment against the WorldPop data yielded an accuracy of 0.74, validating the proposed method as a qualified approach for generating population spatial maps. A comparison over local areas shows that Luojia 1-01 data are more suitable for population distribution estimation than NPP/VIIRS (Suomi National Polar-orbiting Partnership/Visible Infrared Imaging Radiometer Suite) nighttime light data. More geospatial big data and mathematical models can be combined to create more accurate population maps in the future.
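The weighting-and-disaggregation step can be illustrated with a toy example: derive indicator weights from the first principal component of a standardized indicator matrix, then spread a known total population across spatial units in proportion to the composite score. The indicator values and the total below are synthetic, and the exact weighting scheme in the paper may differ:

```python
import numpy as np

# Rows: spatial units (e.g. streets); columns: synthetic indicators such as
# nighttime-light intensity, land-use score, and POI density.
X = np.array([
    [0.9, 0.7, 0.8],
    [0.4, 0.5, 0.3],
    [0.7, 0.9, 0.6],
    [0.2, 0.1, 0.2],
])

# Standardize each indicator, then take the loadings of the first
# principal component as index weights (one common construction).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Z, rowvar=False)
vals, vecs = np.linalg.eigh(cov)        # eigh returns eigenvalues ascending
pc1 = np.abs(vecs[:, -1])               # leading eigenvector loadings
weights = pc1 / pc1.sum()               # normalize weights to sum to 1

score = X @ weights                     # composite index per spatial unit
total_pop = 1000.0
pop = total_pop * score / score.sum()   # disaggregate a known total
print(np.round(pop, 1))
```

Units with stronger light, land-use, and POI signals receive a proportionally larger share of the total, which is the essence of dasymetric population mapping.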
Funding: Supported by the National High-tech R&D Program (2015AA016403), the National Natural Science Foundation of China (Grant Nos. 61572061, 61472020, 61502020), and the China Postdoctoral Science Foundation (2013M540039).
Abstract: Nighttime images are difficult to process due to insufficient brightness, heavy noise, and lack of detail, so they are usually excluded from time-lapse image analysis. Interestingly, nighttime images exhibit unique building features: robust and salient lighting cues produced by human activity. Lighting variation depicts both statistical and individual habitation patterns, and it carries an inherently man-made, repetitive structure rooted in architectural theory. Inspired by this, we propose an automatic nighttime façade recovery method that exploits the lattice structure of window lighting. First, a simple but efficient classification method determines the salient bright regions, which are likely lit windows. We then group windows into multiple lattice proposals with respect to façades by patch matching, followed by greedily removing overlapping lattices. Using the horizon constraint, we resolve ambiguous proposals and obtain the correct orientation. Finally, we complete the generated façades by filling in the missing windows. The method is well suited to urban environments, and its results can serve as a good single-view compensation for daytime images, as well as semantic input to learning-based 3D image reconstruction techniques. Experiments demonstrate that our method works well on nighttime image datasets, achieving a high lattice detection rate of 82.1% on 82 challenging images with a low mean orientation error of 12.1 ± 4.5 degrees.
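The first stage, finding salient bright regions that may be lit windows, can be sketched with a simple statistical threshold. This is only an illustrative stand-in for the paper's classification step, using a hypothetical mean-plus-k-sigma rule on a synthetic night image:

```python
import numpy as np

def bright_regions(gray, k=1.0):
    # Flag pixels much brighter than the image mean as candidate lit
    # windows: threshold = mean + k * std (a simple salience rule).
    t = gray.mean() + k * gray.std()
    return gray > t

# Synthetic night image: dark background with two bright 2x2 "windows".
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0
img[1:3, 5:7] = 1.0

mask = bright_regions(img, k=1.0)
print(int(mask.sum()))   # → 8 (both windows' pixels flagged)
```

In the full pipeline, such candidate regions would then be grouped into lattice proposals by patch matching and pruned against the horizon constraint, steps that require considerably more machinery than this sketch.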
Funding: Supported by the Higher Education Scientific Research Project of Ningxia (NGY2017009).
Abstract: Nighttime image dehazing aims to remove the effect of haze on images captured at night, which, however, raises new challenges such as severe color distortion, more complex lighting conditions, and lower contrast. Instead of estimating the transmission map and atmospheric light, which are difficult to acquire accurately at night, we propose a nighttime image dehazing method composed of a color-cast removal step and a dual-path multi-scale fusion algorithm. We first propose a color correction model inspired by the human visual system (HVS), which effectively removes the color deviation of nighttime hazy images. We then adopt a dual-path strategy, comprising an underexposure-enhancement path and a contrast-enhancement path, for multi-scale fusion, where the weight maps are obtained by selecting appropriately exposed areas under Gaussian pyramids. Extensive experiments demonstrate that our method significantly improves the visual quality of hazy nighttime images from real-world datasets in terms of contrast, color fidelity, and visibility. In addition, our method outperforms the state-of-the-art methods both qualitatively and quantitatively.
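The fusion step can be illustrated at a single scale: weight each path's pixels by how well exposed they are (closeness to mid-gray is the usual measure), normalize the weights, and blend. This is a simplified sketch with synthetic inputs; the paper performs the blending under Gaussian pyramids rather than at one scale, and its exact weight definition may differ:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight peaking at mid-gray 0.5: well-exposed pixels
    # get high weight, under/overexposed pixels get low weight.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(paths):
    # Single-scale fusion sketch: per-pixel weighted average with
    # normalized weights (full method fuses across pyramid levels).
    ws = [well_exposedness(p) for p in paths]
    total = np.sum(ws, axis=0) + 1e-8
    return sum(w * p for w, p in zip(ws, paths)) / total

under = np.full((2, 2), 0.2)     # underexposure-enhancement path (dark)
contrast = np.full((2, 2), 0.6)  # contrast-enhancement path (brighter)
out = fuse([under, contrast])
print(float(out[0, 0]))
```

The fused value lands between the two inputs but closer to the better-exposed path, which is exactly the behavior the dual-path weighting is meant to produce.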