High-resolution satellite images are becoming increasingly available for urban multi-temporal semantic understanding. However, few datasets can be used for land-use/land-cover (LULC) classification, binary change detection (BCD) and semantic change detection (SCD) simultaneously, because classification datasets typically contain only a single time phase and BCD datasets focus only on the changed locations, ignoring the changed classes. Public SCD datasets are rare but much needed. To address these problems, a tri-temporal SCD dataset built from Gaofen-2 (GF-2) remote sensing imagery (with 11 LULC classes and 60 change directions), named the Wuhan Urban Semantic Understanding (WUSU) dataset, was constructed in this study. Popular deep learning-based methods for LULC classification, BCD and SCD are tested to verify the reliability of WUSU. A Siamese-based multi-task joint framework with a multi-task joint loss (MJ loss), named ChangeMJ, is proposed to restore object boundaries; it obtains the best results in LULC classification, BCD and SCD compared with state-of-the-art (SOTA) methods. Finally, large-scale mapping of the Wuhan central urban area is carried out to verify that the WUSU dataset and the ChangeMJ framework have good application value.
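The abstract describes a multi-task joint (MJ) loss that combines the LULC classification, BCD and SCD objectives. The sketch below is a minimal illustration of that idea, assuming a weighted sum of standard cross-entropy terms; the exact loss terms and weighting scheme used by ChangeMJ are not given in the abstract, so all names and weights here are assumptions.

```python
import numpy as np

def softmax_ce(logits, labels):
    """Multi-class cross-entropy over flattened pixels; logits: (N, C), labels: (N,)."""
    z = logits - logits.max(axis=1, keepdims=True)          # stabilize exponentials
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def binary_ce(probs, labels, eps=1e-7):
    """Binary cross-entropy for the change/no-change (BCD) map."""
    p = np.clip(probs, eps, 1 - eps)
    return -(labels * np.log(p) + (1 - labels) * np.log(1 - p)).mean()

def mj_loss(lulc_logits_t1, lulc_labels_t1,
            lulc_logits_t2, lulc_labels_t2,
            change_probs, change_labels,
            scd_logits, scd_labels,
            weights=(1.0, 1.0, 1.0)):
    """Hypothetical joint loss: weighted sum of the three task losses.

    The semantic term averages the per-date LULC losses from the two
    Siamese branches; the task weights are illustrative placeholders.
    """
    w_sem, w_bcd, w_scd = weights
    l_sem = 0.5 * (softmax_ce(lulc_logits_t1, lulc_labels_t1)
                   + softmax_ce(lulc_logits_t2, lulc_labels_t2))
    l_bcd = binary_ce(change_probs, change_labels)
    l_scd = softmax_ce(scd_logits, scd_labels)
    return w_sem * l_sem + w_bcd * l_bcd + w_scd * l_scd
```

Because all three tasks are supervised through one scalar objective, gradients from the BCD and SCD heads can sharpen the shared encoder features used for LULC classification, which is the usual motivation for such joint training.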
Accurate crop distribution mapping is required for crop yield prediction and field management. Due to rapid progress in remote sensing technology, fine spatial resolution (FSR) remotely sensed imagery now offers great opportunities for mapping crop types in great detail. However, within-class variance can hamper attempts to discriminate crop classes at fine resolutions. Multi-temporal FSR remotely sensed imagery provides a means of increasing crop classification accuracy from FSR imagery, although current methods do not exploit the available information fully. In this research, a novel Temporal Sequence Object-based Convolutional Neural Network (TS-OCNN) was proposed to classify agricultural crop types from FSR image time-series. An object-based CNN (OCNN) model was adopted in the TS-OCNN to classify images at the object level (i.e., segmented objects or crop parcels), thus maintaining the precise boundary information of crop parcels. The combined image time-series was first utilized as the input to the OCNN model to produce an 'original' or baseline classification. Then the single-date images were fed automatically into the deep learning model scene-by-scene, in order of image acquisition date, to successively increase crop classification accuracy. By doing so, the joint information in the FSR multi-temporal observations and the unique individual information from the single-date images were exploited comprehensively for crop classification. The effectiveness of the proposed approach was investigated using multi-temporal SAR and optical imagery, respectively, over two heterogeneous agricultural areas.
The experimental results demonstrated that the newly proposed TS-OCNN approach consistently increased crop classification accuracy, and achieved the greatest accuracies (82.68% and 87.40%) in comparison with state-of-the-art benchmark methods, including the object-based CNN (OCNN) (81.63% and 85.88%), object-based image analysis (OBIA) (78.21% and 84.83%), and a standard pixel-wise CNN (79.18% and 82.90%). The proposed approach is the first known attempt to explore simultaneously the joint information from image time-series and the unique information from single-date images for crop classification using a deep learning framework. The TS-OCNN, therefore, represents a new approach for agricultural landscape classification from multi-temporal FSR imagery. Moreover, it is readily generalizable to other landscapes (e.g., forest landscapes), with broad application prospects.
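The TS-OCNN training schedule described above — a baseline pass on the full multi-temporal stack, followed by scene-by-scene refinement in order of acquisition date — can be sketched as below. This is a minimal illustration of the schedule only: the `Scene` and `ToyOCNN` classes, the `fit` signature, and the `tag` bookkeeping are hypothetical placeholders, not the authors' API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scene:
    date: str    # acquisition date, e.g. "2019-04-01" (ISO dates sort chronologically)
    data: list   # single-date image or per-parcel object features (placeholder)

@dataclass
class ToyOCNN:
    """Stand-in for the object-based CNN; records each training pass."""
    training_log: list = field(default_factory=list)

    def fit(self, data, labels, tag):
        # A real OCNN would update its weights here; we only log the pass.
        self.training_log.append(tag)

def ts_ocnn_schedule(model, stacked_series, scenes: List[Scene], labels):
    # Step 1: baseline classification from the combined multi-temporal stack.
    model.fit(stacked_series, labels, tag="stack")
    # Step 2: refine by feeding single-date scenes in acquisition order,
    # so later dates successively update the baseline classifier.
    for scene in sorted(scenes, key=lambda s: s.date):
        model.fit(scene.data, labels, tag=scene.date)
    return model
```

Running the schedule on out-of-order scenes shows the baseline pass first, then the single-date passes sorted chronologically, which mirrors the "scene-by-scene in order of image acquisition date" procedure in the abstract.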
Funding: supported by the National Key Research and Development Program of China (grant number 2022YFB3903404), the National Natural Science Foundation of China (grant numbers 42325105 and 42071350), and the LIESMARS Special Research Funding.
Funding: supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA28070503), the National Key Research and Development Program of China (2021YFD1500100), the Open Fund of the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University (20R04), the Land Observation Satellite Supporting Platform of the National Civil Space Infrastructure Project (CASPLOS-CCSI), and a PhD studentship "Deep Learning in massive area, multi-scale resolution remotely sensed imagery" (EAA7369), sponsored by Lancaster University and Ordnance Survey (the national mapping agency of Great Britain).