Remote sensing images (RSIs) with concurrently high spatial, temporal, and spectral resolutions cannot be produced by a single sensor. Multisource RSI fusion is a convenient technique for obtaining high-spatial-resolution multispectral (MS) images (spatial-spectral fusion, SSF) and MS images with high temporal and spatial resolutions (spatiotemporal fusion, STF). Current deep learning-based fusion models implement either SSF or STF; models that perform both are lacking. A multiresolution generative adversarial network with bidirectional adaptive-stage progressive guided fusion (BAPGF) for RSI, named BPF-MGAN, is proposed to implement both SSF and STF. A bidirectional adaptive-stage feature extraction architecture operating in fine-scale-to-coarse-scale and coarse-scale-to-fine-scale modes is introduced. The designed BAPGF adopts a cross-stage-level dual-residual attention fusion strategy guided by the previous fusion result to enhance critical information and suppress superfluous information. Adaptive-resolution U-shaped discriminators are implemented to feed multiresolution context into the generator. A generalized multitask loss function, not restricted by the absence of reference images, is developed to strengthen the model via constraints on multiscale feature, structural, and content similarities. The BPF-MGAN model is validated on both SSF and STF datasets. Compared with state-of-the-art SSF and STF models, the results demonstrate the superior performance of the proposed BPF-MGAN model in both subjective and objective evaluations.
Funding: This work was funded by the National Key Research and Development Program of China under Grants 2020YFB2104400 and 2020YFB2104401, the National Natural Science Foundation of China under Grant 82260362, and the Hainan Major Science and Technology Program of China under Grant ZDKJ202017.
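The abstract names three similarity constraints in the generalized multitask loss (multiscale feature, structural, and content) without giving its exact form. As a rough illustration only, the PyTorch-style sketch below combines such terms in a weighted sum; the class MultitaskFusionLoss, the simplified ssim helper, the feature_net encoder, and the weights are assumptions introduced here, not details taken from the paper.

```python
# Hypothetical sketch of a multitask fusion loss combining content (L1),
# structural (SSIM-based), and multiscale feature similarity terms,
# in the spirit of the loss described in the abstract. All names and
# weights are illustrative assumptions, not the paper's formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-scale SSIM over 11x11 local windows (per channel)."""
    mu_x = F.avg_pool2d(x, 11, 1, 5)
    mu_y = F.avg_pool2d(y, 11, 1, 5)
    sigma_x = F.avg_pool2d(x * x, 11, 1, 5) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 11, 1, 5) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 11, 1, 5) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()


class MultitaskFusionLoss(nn.Module):
    """Weighted sum of content, structural, and multiscale feature terms."""

    def __init__(self, feature_net, weights=(1.0, 0.5, 0.1)):
        super().__init__()
        # feature_net is assumed to be a frozen multiscale encoder that
        # returns a list of feature maps (e.g., a truncated VGG); this is
        # an assumption for illustration, not the paper's design.
        self.feature_net = feature_net
        self.w_content, self.w_struct, self.w_feat = weights

    def forward(self, fused, reference):
        content = F.l1_loss(fused, reference)       # pixel-level content similarity
        struct = 1.0 - ssim(fused, reference)       # structural similarity penalty
        feat = sum(F.l1_loss(f, r)                  # multiscale feature similarity
                   for f, r in zip(self.feature_net(fused),
                                   self.feature_net(reference)))
        return (self.w_content * content
                + self.w_struct * struct
                + self.w_feat * feat)
```

The relative weights and the choice of L1 distances are placeholders; the paper's generalized loss is also stated to work without reference images, a setting this reference-based sketch does not cover.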