Abstract: The joint interpretation of hyperspectral images (HSIs) and light detection and ranging (LiDAR) data has developed rapidly in recent years due to continuously evolving image processing technology. Nowadays, most feature extraction methods convolve the raw data with fixed-size filters, so the structural and texture information of objects at multiple scales cannot be sufficiently exploited. In this article, a shearlet-based structure-aware filtering approach, abbreviated as ShearSAF, is proposed for HSI and LiDAR feature extraction and classification. Specifically, superpixel-guided kernel principal component analysis (KPCA) is first applied to the raw HSI to reduce its dimensionality. Then, the KPCA-reduced HSI and LiDAR data are transformed into the shearlet domain for texture and area feature extraction. Meanwhile, a superpixel segmentation algorithm is applied to the raw HSI data to obtain an initial oversegmentation map. Subsequently, a region-merging procedure driven by a well-designed minimum merging cost, which jointly considers spectral (HSI and LiDAR), texture, and area features, is gradually conducted to produce the final merging map. Further, a scale map that locally indicates the filter size is obtained by computing the distance of each pixel to the nearest region edge. Finally, the KPCA-reduced HSI and LiDAR data are convolved with the locally adaptive filters for feature extraction, and a random forest (RF) classifier is then used for classification. The effectiveness of our ShearSAF approach is verified on three real-world datasets, and the results show that ShearSAF achieves higher accuracy than the comparison methods, especially with small training sample sizes. The code for this work will be available at http://jiasen.tech/papers/ for the sake of reproducibility.
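To make the pipeline above concrete, the following is a minimal Python sketch of its overall flow. It is not the authors' implementation: the shearlet-domain texture/area features and the merging-cost-driven region merging are replaced by simple stand-ins (SLIC superpixels and a Gaussian filter bank), and the helper names (shearsaf_features, classify) and all parameter values (n_components, n_segments, the scale set, the 3-pixel edge-distance quantization) are illustrative assumptions.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import RandomForestClassifier
from skimage.segmentation import slic, find_boundaries
from scipy.ndimage import distance_transform_edt, gaussian_filter

def shearsaf_features(hsi, lidar, n_components=20, n_segments=500,
                      scales=(1.0, 2.0, 4.0)):
    """hsi: (H, W, B) hyperspectral cube; lidar: (H, W) elevation raster."""
    H, W, B = hsi.shape

    # 1) KPCA on the raw spectra to reduce the band dimension.
    #    Fit on a random subsample, since kernel PCA builds an N x N kernel.
    pixels = hsi.reshape(-1, B)
    rng = np.random.default_rng(0)
    sub = pixels[rng.choice(len(pixels), min(2000, len(pixels)), replace=False)]
    kpca = KernelPCA(n_components=n_components, kernel="rbf").fit(sub)
    reduced = kpca.transform(pixels).reshape(H, W, n_components)

    # 2) Superpixel over-segmentation of the raw HSI (SLIC as a stand-in for
    #    the paper's segmentation; the region-merging step is omitted here).
    segments = slic(hsi, n_segments=n_segments, compactness=10, channel_axis=-1)

    # 3) Scale map: distance of each pixel to the nearest region edge,
    #    quantized onto a small set of filter scales (within 3 px of an
    #    edge -> smallest filter, farther away -> larger filters).
    edges = find_boundaries(segments)
    edge_dist = distance_transform_edt(~edges)
    scale_idx = np.clip((edge_dist // 3).astype(int), 0, len(scales) - 1)

    # 4) Locally adaptive filtering: pre-filter the joint HSI+LiDAR stack at
    #    each scale and pick, per pixel, the response matching its scale.
    stack = np.dstack([reduced, lidar[..., None]])          # (H, W, C)
    bank = np.stack([gaussian_filter(stack, sigma=(s, s, 0)) for s in scales])
    rows, cols = np.indices((H, W))
    features = bank[scale_idx, rows, cols]                  # (H, W, C)
    return features.reshape(H * W, -1)

def classify(features, labels):
    """Train an RF on labelled pixels (label 0 = unlabelled), predict all."""
    mask = labels > 0
    rf = RandomForestClassifier(n_estimators=200)
    rf.fit(features[mask], labels[mask])
    return rf.predict(features)

The per-pixel scale selection in step 4 is one simple way to realize "locally adaptive filter sizes" without filtering each pixel individually: the whole stack is filtered once per scale, and the scale map merely indexes into the resulting bank.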
Funding: Supported in part by the National Natural Science Foundation of China under Grant 41971300 and Grant 61901278, in part by the Key Project of the Department of Education of Guangdong Province under Grant 2020ZDZX3045, and in part by the Shenzhen Scientific Research and Development Funding Program under Grant JCYJ20180305124802421 and Grant JCYJ20180305125902403.