The degradation of optical remote sensing images by atmospheric haze poses a significant obstacle, profoundly impeding their effective use across various domains. Dehazing methods have therefore become pivotal components of image preprocessing, improving the quality of remote sensing imagery, making the data more usable, and thereby enhancing the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze in remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed. It is an end-to-end convolutional neural network distinguished by its multi-scale dense feature fusion clusters and gated skip connections. The essence of the method lies in local feature fusion within dense residual clusters, which extracts pertinent features from both preceding and current local data as contextual demands require; the gated structures then propagate these features to the decoder, yielding superior haze removal. Extensive experiments substantiate the efficacy of URA-Net, demonstrating its superior performance over existing methods on established datasets for remote sensing image defogging. On the RICE-1 dataset, URA-Net achieves a Peak Signal-to-Noise Ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, Unsupervised Single Image Dehazing (USID) by 8.0 dB, and Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Notably, on the SateHaze1k dataset, URA-Net attains the best overall performance, yielding defogged images of consistent visual quality. This underscores the contribution of the research to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
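PSNR is the figure of merit quoted throughout this abstract. As a point of reference, a minimal NumPy implementation of the standard PSNR definition is sketched below; the function name and the 8-bit peak value of 255 are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_value: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between a haze-free reference and a dehazed result."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

# Example: a dehazed RICE-1 image scoring 29.07 dB would satisfy psnr(gt, dehazed) ≈ 29.07
```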
The data acquired by most optical Earth observation satellites, such as IKONOS, QuickBird-2 and GF-1, consist of a panchromatic image with high spatial resolution and multiple multispectral bands with low spatial resolution, and many image fusion techniques have been developed to produce high-resolution multispectral images. Because the panchromatic and multispectral images contain the same spatial information at different accuracies, least squares theory can be used to estimate the optimal spatial information; compared with previous spatial-detail injection modes, this mode is more accurate and robust. In this paper, an image fusion method using Bidimensional Empirical Mode Decomposition (BEMD) and least squares theory is proposed to merge multispectral images with a panchromatic image. The multispectral images are first transformed from RGB space into IHS space; the intensity (I) component and the panchromatic image are then decomposed by BEMD; least squares theory is used to estimate the optimal spatial information, which is injected into the intensity component; finally, fusion is completed through the inverse BEMD and inverse intensity-hue-saturation transform. Two data sets, GF-1 images and QuickBird-2 images, are used to evaluate the proposed fusion method. The fused images were evaluated visually and statistically, and the results show that the proposed method achieves the best performance compared with the conventional methods.
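The abstract does not give the exact least-squares formulation, but the general idea of estimating an injection gain by least squares can be sketched as follows. The helper names, the affine model intensity ≈ g·PAN + b, and the use of a low-pass panchromatic band are assumptions made purely for illustration.

```python
import numpy as np

def least_squares_gain(intensity: np.ndarray, pan: np.ndarray) -> tuple[float, float]:
    """Fit intensity ≈ g * pan + b in the least-squares sense and return (g, b)."""
    A = np.column_stack([pan.ravel(), np.ones(pan.size)])
    (g, b), *_ = np.linalg.lstsq(A, intensity.ravel(), rcond=None)
    return float(g), float(b)

def inject_details(intensity: np.ndarray, pan: np.ndarray, pan_lowpass: np.ndarray) -> np.ndarray:
    """Add the least-squares-weighted panchromatic detail to the intensity component."""
    g, _ = least_squares_gain(intensity, pan_lowpass)
    return intensity + g * (pan - pan_lowpass)
```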
A new method based on a resolution degradation model is proposed to improve both the spatial and spectral quality of synthesized images. ETM+ panchromatic and multispectral images are used to assess the new method. Its spatial and spectral effects are evaluated by qualitative and quantitative measures, and the results are compared with those of IHS, PCA, Brovey, OWT (Orthogonal Wavelet Transform) and RWT (Redundant Wavelet Transform). The results show that the new method keeps almost the same spatial resolution as the panchromatic images, and its spectral effect is as good as that of the wavelet-based methods.
According to the characteristics of remote sensing images, a set of optimized compression quality assessment methods is proposed on the basis of generating simulative images. First, a means of generating simulative images by scanning aerial films is put forward, taking into account the characteristics of the space-borne remote sensing camera (including pixel resolution, histogram dynamic range and quantization). In the course of compression quality assessment, the objective assessment considers image texture changes and the mutual relationship between simulative and decompressed images, and a synthesized estimation factor (SEF) is introduced for the first time. The subjective assessment adopts a display setup of 0.5 mm/pixel, which accounts for human visual characteristics and mainstream monitors. The set of methods is applied to the compression plan design of the panchromatic camera carried on the ZY-1-02C satellite. Through systematic and comprehensive assessment, simulation results show that the image compression quality at a compression ratio of 4:1 can meet remote sensing application requirements.
The IHS (Intensity, Hue and Saturation) transform is one of the most commonly used fusion algorithms, but matching error causes spectral distortion and degradation when fusing images with the IHS method. A study of IHS fusion indicates that this color distortion cannot be avoided. Meanwhile, the statistical properties of wavelet coefficients obtained by wavelet decomposition reflect significant features such as edges, lines and regions. Therefore, a united optimal fusion method is proposed that uses these statistical properties and the IHS transform at the pixel and feature levels: the high-frequency part of the intensity component I is fused at the feature level with a multi-resolution wavelet in IHS space, while the low-frequency part of the intensity component I is fused at the pixel level with optimal weight coefficients. Spectral information and spatial resolution are the two performance indexes used to determine the optimal weight coefficients. Experimental results with QuickBird data of Shanghai show that it is a practical and effective method.
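As an illustration of the high/low-frequency split described above, the sketch below fuses the intensity component and the panchromatic band in a single-level wavelet decomposition (PyWavelets). The weighted low-frequency blend and the max-absolute detail rule are simple stand-ins for the paper's optimal-weight and feature-level rules, not the authors' exact scheme.

```python
import numpy as np
import pywt

def fuse_intensity(i_comp, pan, w=0.5, wavelet="haar"):
    """Fuse the intensity component with the panchromatic band in the wavelet domain.

    Approximation (low-frequency) coefficients are blended with weight w at the pixel
    level; detail (high-frequency) coefficients are chosen by a max-absolute rule.
    """
    cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(i_comp, wavelet)
    cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pan, wavelet)
    cA = w * cA_i + (1.0 - w) * cA_p
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)  # keep the stronger detail coefficient
    fused = (cA, (pick(cH_i, cH_p), pick(cV_i, cV_p), pick(cD_i, cD_p)))
    return pywt.idwt2(fused, wavelet)
```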
In order to improve the accuracy of building structure identification from remote sensing images, a building structure classification method based on multi-feature fusion of UAV remote sensing images is proposed in this paper. Three identification approaches are integrated in this method: object-oriented classification, texture features, and digital elevation based on the DSM and DEM. An RGB threshold classification method is then used to classify the identification results. The accuracy of building structure classification based on each individual feature and on the multi-feature fusion is compared and analyzed. The results show that the building structure classification method is feasible and can accurately identify structures in large-area remote sensing images.
Considering that no single full-reference image quality assessment (IQA) method gives the best performance in all situations, multi-method fusion metrics have been proposed. Machine learning techniques are often involved in such multi-method fusion metrics so that their output is more consistent with human visual perception. On the other hand, the robustness and generalization ability of these multi-method fusion metrics are questionable because images with mean opinion scores are scarce. To comprehensively validate whether the generalization ability of such multi-method fusion IQA metrics is satisfactory, we construct a new image database containing up to 60 reference images. The newly built database is then used to test the generalization ability of different multi-method fusion IQA metrics. A cross-database validation experiment indicates that, on our new database, the performance of all the multi-method fusion IQA metrics shows no statistically significant difference from some single-method IQA metrics such as FSIM and MAD. Finally, a thorough analysis is given to explain why the performance of the multi-method fusion IQA framework drops significantly in cross-database validation.
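A minimal version of the cross-database protocol described above can be sketched as follows: a linear combination of single-metric scores (standing in for the machine-learning regressors typically used in such fusion metrics) is fitted on one database and evaluated by Spearman rank correlation on another. All array names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def cross_db_validation(scores_a, mos_a, scores_b, mos_b):
    """Train a linear multi-method fusion on database A, report SROCC on database B.

    scores_* : (n_images, n_metrics) arrays of single-metric scores (e.g., FSIM, MAD, ...).
    mos_*    : (n_images,) mean opinion scores.
    """
    A = np.column_stack([scores_a, np.ones(len(mos_a))])       # add a bias term
    w, *_ = np.linalg.lstsq(A, mos_a, rcond=None)              # fusion weights learned on DB A
    pred_b = np.column_stack([scores_b, np.ones(len(mos_b))]) @ w
    srocc, _ = spearmanr(pred_b, mos_b)                        # rank correlation on DB B
    return srocc
```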
Semantic segmentation of remote sensing images is one of the core tasks of remote sensing image interpretation. With the continuous development of artificial intelligence technology, the use of deep learning methods for interpreting remote sensing images has matured. However, existing neural networks disregard the spatial relationship between two targets in remote sensing images, and semantic segmentation models that combine convolutional neural networks (CNNs) and graph convolutional networks (GCNs) suffer from a lack of feature boundaries, which leads to unsatisfactory segmentation of various target boundaries. In this paper, we propose a new semantic segmentation model for remote sensing images (called DGCN hereinafter), which combines deep semantic segmentation networks (DSSN) and GCNs. In the GCN module, a loss function for boundary information is employed to optimize the learning of spatial relationship features between the target features and their relationships. A hierarchical fusion method is utilized for feature fusion and classification to optimize the spatial relationship information in the original feature information. Extensive experiments on the ISPRS 2D and DeepGlobe semantic segmentation datasets show that, compared with existing semantic segmentation models for remote sensing images, DGCN significantly improves the segmentation of feature boundaries, effectively reduces noise in the segmentation results and improves segmentation accuracy, demonstrating the advancement of our model.
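For readers unfamiliar with the GCN component such models build on, a single normalized graph-convolution propagation step (the generic formulation, not the paper's DGCN module) can be written as below; the function and variable names are illustrative.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution step: H = ReLU(D^-1/2 (A + I) D^-1/2 X W).

    X : (n_nodes, in_dim) node features, A : (n_nodes, n_nodes) adjacency, W : (in_dim, out_dim) weights.
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W    # symmetrically normalized propagation
    return np.maximum(H, 0.0)                      # ReLU activation
```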
In order to apply deep learning to stereo image quality evaluation, two problems need to be solved: first, only a small number of training samples are available; second, how should the stereo image's left and right views be fed to the network? In this paper, we transfer a 2D image quality evaluation model to stereo image quality evaluation, which solves the first problem, and we use principal component analysis to fuse the left and right views into a single input image, which solves the second. In addition, the input image is preprocessed by a phase congruency transformation, which further improves the performance of the algorithm. The deep convolutional neural network consists of four convolution layers, three max-pooling layers and two fully connected layers. Experimental results on the LIVE 3D image database show that the quality scores predicted by the model agree well with the subjective evaluation values.
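The PCA-based fusion of the two views can be sketched as below: the weights of the fused image come from the dominant eigenvector of the 2×2 covariance matrix of the flattened left and right views. This is the classic PCA fusion rule and is only assumed to match the paper's exact procedure.

```python
import numpy as np

def pca_fuse_views(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Fuse left/right views into one image using weights from the first principal component."""
    X = np.stack([left.ravel().astype(np.float64), right.ravel().astype(np.float64)])
    cov = np.cov(X)                                  # 2x2 covariance of the two views
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])       # dominant eigenvector
    w = v / v.sum()                                  # normalize weights to sum to 1
    return w[0] * left + w[1] * right
```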
Efficient and accurate access to coastal land cover information is of great significance for marine disaster prevention and mitigation. Although the popular sensors on land resource satellites provide free and valuable images for mapping land cover, coastal areas often experience significant cloud cover, especially in the tropics, which makes classification in those areas non-ideal. To solve this problem, we proposed a framework combining medium-resolution optical images and synthetic aperture radar (SAR) data with the recently popular object-based image analysis (OBIA) method, and used Landsat Operational Land Imager (OLI) and Phased Array type L-band Synthetic Aperture Radar (PALSAR) images acquired over Singapore in 2017 as a case study. We designed experiments to examine two critical factors of this framework: the segmentation scale, which determines the average object size, and the classification features. Accuracy assessments of the land cover indicated that the optimal segmentation scale was between 40 and 80, and that combining OLI and SAR features resulted in higher accuracy than any individual features, especially in areas with cloud cover. Based on the land cover generated by this framework, we assessed the vulnerability of Singapore to marine disasters in 2008 and 2017 and found that the high-vulnerability areas were mainly located in the southeast and increased by 118.97 km² over the past decade. To clarify the disaster response plan for different geographical environments, we classified risk based on altitude and distance from shore. The newly added high-vulnerability regions within 4 km offshore and below 30 m above sea level are at high risk; these regions may need to focus on strengthening disaster prevention construction. This study serves as a typical example of using remote sensing techniques for the vulnerability assessment of marine disasters, especially in cloudy coastal areas.
On the basis of a thorough understanding of the physical characteristics of remote sensing images, this paper employs the theories of wavelet transform and signal sampling to develop a new image fusion algorithm. The algorithm has been successfully applied to the fusion of SPOT PAN and TM images of Guangdong province, China. The experimental results show that an excellent fusion can be built up by using image analysis and reconstruction in the frequency domain based on the physical characteristics of image formation. The method demonstrates that the fusion results do not change the spectral characteristics of the original image.
In recent years, image fusion has been widely used in different studies to improve the spatial resolution of multispectral images. This study fuses high-resolution satellite imagery with lower-resolution multispectral imagery to assist policymakers in the effective planning and management of the urban forest ecosystem in Baton Rouge. To accomplish this, Landsat 8 and PlanetScope satellite images were acquired from the United States Geological Survey (USGS) Earth Explorer and Planet websites, with pixel resolutions of 30 m and 3 m, respectively. The reference images (observed Landsat 8 and PlanetScope imagery) were acquired on 06/08/2020 and 11/19/2020. Image processing was performed in ArcMap, using the 6-5-4 band combination for Landsat 8 to visually inspect healthy vegetation and green spaces. The near-infrared (NIR) panchromatic band from PlanetScope was merged with the Landsat 8 image using the Create Pan-Sharpened Raster tool in ArcMap with the Intensity-Hue-Saturation (IHS) method. In addition, the locations of urban forestry parks in the study area were collected with a handheld GPS and recorded in a spreadsheet, which was exported as a CSV file and imported into ESRI ArcMap to identify the spatial distribution of green spaces in East Baton Rouge Parish. The results show that the fused images have better contrast and improve the visualization of spatial features compared with the non-fused images; for example, roads, trees and buildings appear sharper, more easily discernible and less pixelated than in the original Landsat 8 image. The paper concludes by outlining policy recommendations in the form of sequential measurement of the urban forest over time to help track changes and allow better-informed policy and decision making with respect to urban forest management.
The Aral Sea Basin in Central Asia is an important geographical unit in the center of Eurasia, and dynamic monitoring and effective evaluation of its eco-environmental quality are of great significance to the ecological protection and sustainable development of Central Asia. In this study, an arid remote sensing ecological index (ARSEI) for large-scale arid areas was developed, which couples information from a greenness index, a salinity index, a humidity index, a heat index, and a land degradation index for arid areas. The ARSEI was used to monitor and evaluate the eco-environmental quality of the Aral Sea Basin from 2000 to 2019. The results show that the greenness, humidity and land degradation indices had a positive impact on the quality of the ecological environment in the Aral Sea Basin, while the salinity and heat indices exerted a negative impact. The eco-environmental quality of the basin showed a trend of initial improvement, followed by deterioration, and finally further improvement, with significant spatial variation. From 2000 to 2019, grassland and wasteland (saline-alkali land and sandy land) in the central and western parts of the basin had the worst ecological environment quality, while areas with poor quality are mainly distributed along rivers, wetlands, and cultivated land around lakes. Over this period, except for the areas surrounding the Aral Sea, the ecological environment quality of the basin generally improved. The correlation coefficients between the change in eco-environmental quality and the heat index and the humidity index were –0.593 and 0.524, respectively. Climatic conditions and human activities have led to different combinations of heat and humidity changes affecting the eco-environmental quality of the basin, with human activities having the greater impact. The ARSEI can quantitatively and intuitively reflect the scale and causes of large-scale, long-term changes in eco-environmental quality in arid areas, and is therefore well suited to studying the eco-environmental quality of such regions.
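The abstract specifies the five component indices and their signs but not how they are coupled. A deliberately simple, equal-weight combination consistent with those signs is sketched below purely for illustration; the paper's actual ARSEI coupling may well differ (for example, a PCA-based one), and the normalization and weighting here are assumptions.

```python
import numpy as np

def arsei(greenness, salinity, humidity, heat, degradation):
    """Combine five normalized component indices into a single quality index in [0, 1].

    Sign convention follows the abstract: greenness, humidity and the land degradation
    index act positively; salinity and heat act negatively. Equal weights are assumed.
    """
    norm = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-12)   # rescale each index to [0, 1]
    positive = norm(greenness) + norm(humidity) + norm(degradation)
    negative = norm(salinity) + norm(heat)
    return (positive + (2.0 - negative)) / 5.0                      # higher = better quality
```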
Fusing satellite (remote sensing) images is an interesting topic in satellite image processing, in which the result is obtained by fusing information from spectral and panchromatic images for sharpening. In this paper, a new algorithm for fusing remote sensing images is proposed, based on the artificial bee colony (ABC) algorithm with peak signal-to-noise ratio (PSNR) optimization. First, the wavelet transform is used to split the input images into components in the high- and low-frequency domains. Then, two fusion rules are used to obtain the fused images: the high-frequency components are fused by using their average values, and the low-frequency components are fused by using a combining rule with a parameter. The parameter for fusing the low-frequency components is determined by the ABC algorithm, driven by PSNR optimization. Experimental results on different input images show that the proposed algorithm outperforms several recent methods.
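A sketch of the two fusion rules is given below using a single-level PyWavelets decomposition. A simple grid search over the low-frequency weight stands in for the ABC optimizer, and the objective (mean PSNR of the fused image against both registered, same-size inputs) is an assumption, since the abstract does not state which reference the PSNR is computed against.

```python
import numpy as np
import pywt

def psnr(a, b, peak=255.0):
    """PSNR in dB between two same-size images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")

def fuse_with_weight(ms, pan, alpha, wavelet="haar"):
    """Rule 1: average the high-frequency subbands; rule 2: blend low frequencies with weight alpha."""
    cA_m, det_m = pywt.dwt2(ms, wavelet)
    cA_p, det_p = pywt.dwt2(pan, wavelet)
    cA = alpha * cA_m + (1.0 - alpha) * cA_p
    det = tuple((dm + dp) / 2.0 for dm, dp in zip(det_m, det_p))
    fused = pywt.idwt2((cA, det), wavelet)
    return fused[: ms.shape[0], : ms.shape[1]]      # crop possible 1-pixel padding

def search_alpha(ms, pan, candidates=np.linspace(0.0, 1.0, 21)):
    """Grid search standing in for the ABC optimizer over the low-frequency weight."""
    best_alpha, best_score = 0.5, -np.inf
    for a in candidates:
        fused = fuse_with_weight(ms, pan, a)
        score = 0.5 * (psnr(fused, ms) + psnr(fused, pan))   # assumed PSNR objective
        if score > best_score:
            best_alpha, best_score = float(a), score
    return best_alpha
```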
This study compares three types of classifications of satellite data to identify the most suitable one for making city maps in a semi-arid region. The source of our data was the GeoEye-1 satellite. To classify these data, two programmes were used: an object-based classification and a pixel-based classification. The pixel-based classification was further subdivided into two groups. In the first group, the classes (buildings, streets, vacant land, vegetation) were treated simultaneously on a single image; in the second, each class was identified individually, the result of each class produced a single image, and these images were later enhanced. The classification results were then assessed and compared before and after enhancement using visual and then automatic assessment. The evaluation showed that the pixel-based individual classification of each class was rated highest after enhancement, increasing the overall classification accuracy by 2%, from 89% to 91%. The results of this classification type were adopted for mapping Jeddah's buildings, roads and vegetation.
Image registration is an indispensable component of multi-source remote sensing image processing. In this paper, we put forward a remote sensing image registration method that includes an improved multi-scale, multi-direction Harris algorithm and a novel compound feature. Multi-scale circle Gaussian combined invariant moments and a multi-direction gray-level co-occurrence matrix are extracted as features for image matching. The proposed algorithm is evaluated on numerous multi-source remote sensing images with noise and illumination changes. Extensive experimental studies show that our method yields stable and evenly distributed key points as well as robust and accurate correspondence matches, making it a promising scheme for multi-source remote sensing image registration.
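For orientation, a plain multi-scale Harris detector (without the paper's improvements or the compound GLCM / invariant-moment descriptor) can be sketched with SciPy as follows; the scales and threshold are arbitrary illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(image, sigma=1.0, k=0.04):
    """Harris corner response at one Gaussian scale."""
    img = gaussian_filter(image.astype(np.float64), sigma)
    Ix, Iy = sobel(img, axis=1), sobel(img, axis=0)
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def multiscale_harris(image, sigmas=(1.0, 2.0, 4.0), threshold_ratio=0.01):
    """Union of Harris key points detected at several Gaussian scales."""
    points = set()
    for s in sigmas:
        R = harris_response(image, sigma=s)
        ys, xs = np.where(R > threshold_ratio * R.max())
        points.update(zip(ys.tolist(), xs.tolist()))
    return sorted(points)
```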
In order to improve the quality of remote sensing image fusion, a new method combining the nonsubsampled Laplacian pyramid (NLP) and bidimensional empirical mode decomposition (BEMD) is proposed. First, the high-resolution panchromatic image (PAN) is decomposed using the NLP until its approximation component and the low-resolution multispectral image (MS) contain features of a similar scale. Then, the approximation component and the MS are decomposed by BEMD, each yielding a number of bidimensional intrinsic mode functions (BIMFs) and a residue. The instantaneous frequency is computed in four directions of the BIMFs. Considering the positive or negative coefficients at corresponding positions, a weighted algorithm is designed for fusing the high-frequency details, using the instantaneous frequency and the absolute values of the BIMF coefficients as fusion features. The fused image is then obtained through the inverse BEMD and inverse NLP. Experimental results illustrate the advantage of this method over IHS, DWT and à-trous wavelet fusion in both spectral and spatial detail quality.
We propose an adaptive regularized algorithm for remote sensing image fusion based on variational methods. In the algorithm, we integrate the inputs using a "grey world" assumption to achieve visual uniformity, and we propose a fusion operator that automatically selects a total variation (TV)-L1 term for edges and L2 terms for non-edges. To implement the algorithm, we use the steepest descent method to solve the corresponding Euler-Lagrange equation. Experimental results show that the proposed algorithm achieves remarkable results.
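As a generic illustration of solving such a variational model by steepest descent, the sketch below minimizes a single smoothed-TV functional with an explicit gradient step; the paper's edge-adaptive TV-L1/L2 selection and its exact data term are not reproduced here, and all parameter values are assumptions.

```python
import numpy as np

def tv_descent(f, lam=0.1, tau=0.1, n_iter=200, eps=1e-3):
    """Steepest descent on E(u) = 0.5*||u - f||^2 + lam * TV_eps(u), a smoothed TV model.

    f can be the integrated ("grey world") input; a single smoothed TV term replaces the
    paper's edge-adaptive TV-L1 / L2 selection for brevity.
    """
    u = f.astype(np.float64).copy()
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)                              # smoothed gradient magnitude
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)      # divergence of normalized gradient
        u -= tau * ((u - f) - lam * div)                                         # explicit Euler-Lagrange descent step
    return u
```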
Remote sensing image fusion has come a long way from research experiments to an operational image processing technology. Having established a framework for image fusion at the end of the 1990s, we now provide an overview of the advances in image fusion during the past 15 years. Assembling information about new remote sensing image fusion techniques, recent technical developments and their influence on image fusion, international societies and working groups, and new journals and publications, we provide insight into new trends. It becomes clear that image fusion facilitates remote sensing image exploitation; it aims at achieving better and more reliable information to better understand complex Earth systems. The numerous publications during the last decade show that remote sensing image fusion is a well-established research field. The experience gained fosters other technological developments in terms of sensor configuration and data exploitation, and multi-modal data usage enables the implementation of the Digital Earth concept. To advance in this respect, we recommend updated guidelines and a set of commonly accepted quality assessment criteria for image fusion.
The purpose of remote sensing image fusion is to produce a fused image that contains clearer, more accurate and more comprehensive information than any single input image. A novel fusion method based on the nonsubsampled contourlet transform (NSCT) and region segmentation is proposed in this paper. First, the multispectral image is transformed into the intensity-hue-saturation (IHS) system. Second, the panchromatic image and the intensity component of the multispectral image are decomposed by the NSCT. The NSCT coefficients of the high- and low-frequency subbands are then fused by different rules; for the high-frequency subbands, the fusion rules also differ between smooth and edge regions, which are separated in the panchromatic image by a segmentation based on particle swarm optimization. Finally, the fused image is obtained by performing the inverse NSCT and inverse IHS transform. The experimental results are evaluated by both subjective and objective criteria and show that the proposed method obtains results superior to others.
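The abstract does not describe how particle swarm optimization drives the smooth/edge segmentation. As a generic illustration of PSO in this role, the sketch below searches for a threshold that maximizes an Otsu-style between-class variance (an assumed objective); applied to a gradient-magnitude map of the panchromatic image, pixels above the returned threshold could be treated as edge regions and the rest as smooth regions.

```python
import numpy as np

def between_class_variance(values, t):
    """Otsu-style objective: between-class variance of a two-class split at threshold t."""
    v = values.ravel().astype(np.float64)
    lo, hi = v[v <= t], v[v > t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / v.size, hi.size / v.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def pso_threshold(values, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm search for the threshold that maximizes the objective above."""
    rng = np.random.default_rng(seed)
    lo, hi = float(values.min()), float(values.max())
    x = rng.uniform(lo, hi, n_particles)            # particle positions (candidate thresholds)
    v = np.zeros(n_particles)                       # particle velocities
    pbest = x.copy()
    pbest_val = np.array([between_class_variance(values, t) for t in x])
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([between_class_variance(values, t) for t in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return float(gbest)
```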