Funding: Supported by a grant from the Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology (Grant No. GZZKFJJ2020004), the National Natural Science Foundation of China (Grant Nos. 61875013 and 61827814), and the Natural Science Foundation of Beijing Municipality (Grant No. Z190018).
Abstract: The visible-light imaging systems used in military equipment are often subjected to severe weather, such as fog, haze, and smoke, under the complex lighting conditions of night, which significantly degrade the acquired images. Currently available image defogging methods are mostly suited to daytime scenes under natural light, and the clarity they recover from images captured at night, where illumination is complex and the fog varies spatially, is not satisfactory. This study proposes an algorithm to remove night fog from single images based on an analysis of the statistical characteristics of nighttime foggy scenes. A color channel transfer step is designed to compensate for the highly attenuated channel of foggy images acquired at night. The distribution of transmittance is estimated by the deep convolutional network DehazeNet, and the spatial variation of atmospheric light is estimated point by point according to the maximum reflectance prior to recover the clear image. Experimental results show that, compared with conventional methods, the proposed method compensates for the highly attenuated channel of nighttime foggy images, removes the glow produced by multi-colored and non-uniform ambient light sources, and improves both the adaptability and the visual quality of night-fog removal.
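As a reading aid rather than part of the paper, the sketch below illustrates the standard nighttime degradation model the abstract relies on, I(x) = J(x)·t(x) + A(x)·(1 − t(x)), in which the atmospheric light A(x) varies spatially instead of being a global constant. It shows only the inversion step that recovers J(x) once the transmittance and atmospheric-light maps are already available (in the paper these come from DehazeNet and a maximum-reflectance-style estimate); the function and variable names are illustrative assumptions.

```python
import numpy as np

def recover_scene_radiance(I, t, A, t_min=0.1):
    """Invert the nighttime imaging model I = J*t + A*(1 - t).

    I : (H, W, 3) observed foggy night image, values in [0, 1]
    t : (H, W)    per-pixel transmittance map (e.g., predicted by a CNN)
    A : (H, W, 3) spatially varying atmospheric light map
    t_min : lower bound on t to avoid amplifying noise in dense fog
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # broadcast over the color channels
    J = (I - A * (1.0 - t)) / t              # per-pixel inversion of the model
    return np.clip(J, 0.0, 1.0)              # keep the recovered image in valid range
```

Clamping t away from zero is the usual guard against division blow-up in dense fog; the quality of the result hinges entirely on how well the transmittance and atmospheric-light maps are estimated, which is what the proposed channel compensation, DehazeNet, and maximum-reflectance steps address.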
Funding: Funded by the National Key R&D Program of China (No. 2022YFB3903903) and the National Natural Science Foundation of China (Nos. 41974008 and 42074045).
Abstract: High-quality spatial atmospheric delay corrections are essential for fast integer ambiguity resolution (AR) in precise positioning. However, traditional real-time precise positioning frameworks (i.e., NRTK and PPP-RTK) depend on atmospheric delay corrections of low spatial resolution supplied by expensive and sparsely distributed CORS networks, which limits their public appeal. With the mass production of autonomous driving vehicles, more cost-effective and widespread data sources can be exploited to create atmospheric maps of high spatial resolution. In this study, we propose a new GNSS positioning framework that relies on dual base stations, massive vehicle GNSS data, and crowdsourced atmospheric delay correction maps (CAM). The map is easily produced and updated in a crowdsourced way by vehicles equipped with GNSS receivers; specifically, it consists of between-station single-differenced ionospheric and tropospheric delays. We introduce the whole framework: CAM initialization for individual vehicles, on-cloud CAM maintenance, and CAM-augmented user-end positioning. The map data are collected and preprocessed in the vehicles, the crowdsourced data are then uploaded to a cloud server, and the massive data from multiple vehicles are merged in the cloud to keep the CAM up to date. Finally, the CAM augments user positioning performance. The framework forms a beneficial cycle in which the CAM's spatial resolution and the user positioning performance mutually improve each other. We validate the performance of the proposed framework in real-world experiments and demonstrate its applicability at different spatial scales. We highlight that this framework is a reliable and practical positioning solution that meets the requirements of ubiquitous high-precision positioning.
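The abstract does not give implementation details, but a minimal sketch of the two ingredients it names, between-station single-differenced delays and their crowdsourced merging into a gridded CAM, might look as follows. All function names, the running-average merge, and the latitude/longitude cell keying are illustrative assumptions, not the authors' design.

```python
from collections import defaultdict

def between_station_single_difference(delays_rover, delays_base):
    """Form between-station single-differenced delays per satellite.

    delays_rover, delays_base : dict mapping satellite PRN -> slant delay (m)
    Returns PRN -> (rover - base) delay, the quantity stored in the CAM.
    """
    common = delays_rover.keys() & delays_base.keys()
    return {prn: delays_rover[prn] - delays_base[prn] for prn in common}

def update_cam_cell(cam, cell, sd_delays):
    """Merge one vehicle's single-differenced delays into a CAM grid cell
    with a simple running average (a stand-in for the cloud merging step)."""
    for prn, d in sd_delays.items():
        n, mean = cam[cell].get(prn, (0, 0.0))
        cam[cell][prn] = (n + 1, mean + (d - mean) / (n + 1))
    return cam

# Usage sketch: one vehicle contributes hypothetical slant ionospheric delays
# for its grid cell, referenced to a base station tracking the same satellites.
cam = defaultdict(dict)
rover = {"G01": 2.41, "G07": 3.05}   # hypothetical slant delays (m) at the vehicle
base  = {"G01": 2.28, "G07": 2.90}   # hypothetical slant delays (m) at the base
sd = between_station_single_difference(rover, base)
update_cam_cell(cam, cell=(30.52, 114.36), sd_delays=sd)   # cell keyed by lat/lon
```

At the user end, corrections interpolated from nearby CAM cells would be applied to the user's between-station observations before ambiguity resolution; the denser the contributing fleet, the finer the CAM grid and the faster the AR, which is the mutually reinforcing cycle the abstract describes.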