Funding: the National Key R&D Program of China (No. 2019YFB1405900) and the National Natural Science Foundation of China (No. 62172035, 61976098).
Abstract: Although most of the existing image super-resolution (SR) methods have achieved superior performance, contrastive learning for high-level tasks has not been fully utilized in existing deep-learning-based image SR methods. This work focuses on two well-known strategies developed for lightweight and robust SR, i.e., contrastive learning and the feedback mechanism, and proposes an integrated solution called the split-based feedback network (SPFBN). The proposed SPFBN is based on a feedback mechanism to learn abstract representations and uses contrastive learning to explore high-level information in the representation space. Specifically, this work first uses hidden states and constraints in a recurrent neural network (RNN) to implement a feedback mechanism. Then, contrastive learning is used to perform representation learning and obtain high-level information by pushing the final SR image away from the intermediate images and pulling it toward the high-resolution image. Besides, a split-based feedback block (SPFB) is proposed to reduce model redundancy, which tolerates features with similar patterns but requires fewer parameters. Extensive experimental results demonstrate the superiority of the proposed method in comparison with state-of-the-art methods. Moreover, this work extends the experiments to prove the effectiveness of the method and shows better overall reconstruction quality.
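The pull-push idea above can be made concrete with a small sketch. The following is a minimal PyTorch illustration of a contrastive-style regularizer that pulls features of the final SR image toward those of the high-resolution image and pushes them away from those of intermediate reconstructions; the ratio form of the loss and the use of pooled feature vectors are assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical contrastive SR regularizer (illustrative only, not the SPFBN loss).
import torch
import torch.nn.functional as F

def contrastive_sr_loss(feat_sr, feat_hr, feats_intermediate, eps=1e-8):
    """Pull SR features toward HR features (positive) and push them away
    from intermediate-reconstruction features (negatives)."""
    d_pos = F.l1_loss(feat_sr, feat_hr)                         # distance to the positive sample
    d_neg = sum(F.l1_loss(feat_sr, f) for f in feats_intermediate) / len(feats_intermediate)
    return d_pos / (d_neg + eps)                                # minimizing pulls toward HR, pushes from intermediates

# Random vectors stand in for features produced by a real feature extractor.
b, c = 4, 256
loss = contrastive_sr_loss(torch.randn(b, c), torch.randn(b, c),
                           [torch.randn(b, c) for _ in range(3)])
```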
Abstract: Super-resolution techniques are employed to enhance image resolution by reconstructing high-resolution images from one or more low-resolution inputs. Super-resolution is of paramount importance in the context of remote sensing, satellite, aerial, security, and surveillance imaging. Super-resolution remote sensing imagery is essential for surveillance and security purposes, enabling authorities to monitor remote or sensitive areas with greater clarity. This study introduces a single-image super-resolution approach for remote sensing images, utilizing deep shearlet residual learning in the shearlet transform domain and incorporating the Enhanced Deep Super-Resolution network (EDSR). Unlike conventional approaches that estimate residuals between high- and low-resolution images, the proposed approach calculates the shearlet coefficients of the desired high-resolution image from the provided low-resolution image instead of estimating a residual image between the high- and low-resolution images. The shearlet transform is chosen for its excellent sparse approximation capabilities. Initially, remote sensing images are transformed into the shearlet domain, which divides the input image into low and high frequencies. The shearlet coefficients are fed into the EDSR network. The high-resolution image is subsequently reconstructed using the inverse shearlet transform. The incorporation of the EDSR network enhances training stability, leading to improved generated images. The experimental results of the Deep Shearlet Residual Learning approach demonstrate its superior performance in remote sensing image recovery, effectively restoring both global topology and local edge detail information, thereby enhancing image quality. Compared to other networks, the proposed approach outperforms the state of the art in terms of image quality, achieving an average peak signal-to-noise ratio of 35 and a structural similarity index measure of approximately 0.9.
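To show how such a transform-domain pipeline is wired, here is a minimal sketch in which a simple FFT-based low/high-frequency split stands in for the shearlet transform (it is not a shearlet implementation), and an identity function stands in for the trained EDSR-style coefficient network; both stand-ins are assumptions for illustration only.

```python
# Skeleton of transform-domain SR; the FFT split below is only a stand-in for the shearlet transform.
import numpy as np

def forward_split(img, cutoff=0.1):
    """Split an image into low- and high-frequency parts (stand-in for a shearlet decomposition)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= (cutoff * min(h, w)) ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    return low, img - low

def super_resolve(lr_img, coeff_net):
    """Decompose the LR image, predict HR-domain coefficients, and invert the transform."""
    low, high = forward_split(lr_img)
    low_hr, high_hr = coeff_net(low, high)   # a trained network would also upsample here
    return low_hr + high_hr                  # inverse of the simple split used above

sr = super_resolve(np.random.rand(64, 64), lambda lo, hi: (lo, hi))  # identity net as placeholder
```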
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003, 2023YFSY0026, 2023YFH0004).
Abstract: At present, super-resolution algorithms are employed to tackle the challenge of low image resolution, but it is difficult to extract differentiated feature details based on various inputs, resulting in poor generalization ability. Given this situation, this study first analyzes the features of some feature extraction modules of current super-resolution algorithms and then proposes an adaptive feature fusion block (AFB) for feature extraction. This module mainly comprises dynamic convolution, an attention mechanism, and a pixel-based gating mechanism. Combined with dynamic convolution with scale information, the network can extract more differentiated feature information. The introduction of a channel-spatial attention mechanism combined with multi-feature fusion further enables the network to retain more important feature information. Dynamic convolution and the pixel-based gating mechanism enhance the module's adaptability. Finally, a comparative experiment on a super-resolution algorithm based on the AFB module is designed to substantiate the efficiency of the AFB module. The results reveal that the network combined with the AFB module has stronger generalization and expression ability.
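As an illustration of the pixel-based gating mentioned above (the exact AFB design is not reproduced here), the sketch below predicts a per-pixel, per-channel gate that blends a refined feature map with the original one; the layer shapes are assumptions.

```python
# Minimal pixel-wise gating sketch (illustrative, not the AFB implementation).
import torch
import torch.nn as nn

class PixelGate(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)  # feature refinement branch
        self.gate = nn.Conv2d(channels, channels, 1)               # predicts the gate map

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))           # per-pixel, per-channel gate in [0, 1]
        return g * self.refine(x) + (1 - g) * x   # adaptively blend refined and original features

y = PixelGate(64)(torch.randn(1, 64, 48, 48))
```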
Funding: Guangdong Science and Technology Program under Grant No. 202206010052, Foshan Province R&D Key Project under Grant No. 2020001006827, and Guangdong Academy of Sciences Integrated Industry Technology Innovation Center Action Special Project under Grant No. 2022GDASZH-2022010108.
Abstract: The employment of deep convolutional neural networks has recently contributed to significant progress in single image super-resolution (SISR) research. However, the high computational demands of most SR techniques hinder their applicability to edge devices, despite their satisfactory reconstruction performance. These methods commonly use standard convolutions, which increase the convolutional operation cost of the model. In this paper, a lightweight Partial Separation and Multiscale Fusion Network (PSMFNet) is proposed to alleviate this problem. Specifically, this paper introduces partial convolution (PConv), which reduces redundant convolution operations throughout the model by separating some of the features of an image while retaining features useful for image reconstruction. Additionally, it is worth noting that existing methods have not fully utilized the rich feature information, leading to information loss, which reduces the ability to learn feature representations. Inspired by self-attention, this paper develops a multiscale feature fusion block (MFFB), which can better utilize the non-local features of an image. MFFB can learn long-range dependencies along the spatial dimension and extract features along the channel dimension, thereby obtaining more comprehensive and richer feature information. As the role of the MFFB is to capture rich global features, this paper further introduces an efficient inverted residual block (EIRB) to supplement the local feature extraction ability of PSMFNet. A comprehensive analysis of the experimental results shows that PSMFNet maintains better performance with fewer parameters than state-of-the-art models.
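The channel-splitting behavior attributed to PConv above can be sketched as follows; the split ratio and kernel size are assumptions, and this is one common formulation of partial convolution rather than the exact PSMFNet block.

```python
# Hedged partial-convolution sketch: only a fraction of channels is convolved, the rest pass through.
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    def __init__(self, channels, ratio=0.25):
        super().__init__()
        self.n_conv = int(channels * ratio)              # channels that are actually convolved
        self.conv = nn.Conv2d(self.n_conv, self.n_conv, 3, padding=1)

    def forward(self, x):
        x1, x2 = x[:, :self.n_conv], x[:, self.n_conv:]  # separate a subset of the features
        return torch.cat([self.conv(x1), x2], dim=1)     # untouched channels keep reusable information

y = PartialConv(64)(torch.randn(1, 64, 32, 32))
```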
Funding: Funded by the National Natural Science Foundation of China, grant numbers 42074176 and U1939204.
Abstract: Frequency modulated continuous wave (FMCW) radar is an advantageous sensor scheme for target estimation and environmental perception. However, existing algorithms based on the discrete Fourier transform (DFT), multiple signal classification (MUSIC), compressed sensing, etc., cannot achieve both low complexity and high resolution simultaneously. This paper proposes an efficient 2-D MUSIC algorithm for super-resolution target estimation/tracking based on FMCW radar. Firstly, we enhance the efficiency of 2-D MUSIC azimuth-range spectrum estimation by incorporating a 2-D DFT and a multi-level resolution searching strategy. Secondly, we apply the gradient descent method to tightly integrate the spatial continuity of object motion into spectrum estimation when processing multi-epoch radar data, which improves the efficiency of continuous target tracking. These two approaches improve the algorithm efficiency by nearly 2-4 orders of magnitude without losing accuracy or resolution. Simulation experiments are conducted to validate the effectiveness of the algorithm in both single-epoch estimation and multi-epoch tracking scenarios.
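For readers unfamiliar with MUSIC, the sketch below shows the subspace idea in the simpler 1-D (angle-only) case with a uniform linear array; the array geometry, snapshot count, and grid search are textbook assumptions, not the paper's 2-D azimuth-range implementation or its acceleration strategies.

```python
# 1-D MUSIC pseudospectrum sketch (angle only); illustrative of the subspace principle.
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d_over_lambda=0.5):
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)               # eigenvectors sorted by ascending eigenvalue
    En = eigvecs[:, :M - n_sources]              # noise subspace
    spec = []
    for ang in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(ang))  # steering vector
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))           # pseudospectrum value
    return np.array(spec)

# Toy data: two sources at -20 deg and 30 deg, 8-element array, 200 snapshots.
M, N = 8, 200
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(np.deg2rad([-20, 30]))))
X = A @ (np.random.randn(2, N) + 1j * np.random.randn(2, N)) \
    + 0.1 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
grid = np.linspace(-90, 90, 361)
print(grid[music_spectrum(X, 2, grid).argmax()])  # the sharpest peak lands near one of the true angles
```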
Funding: Supported by the Beijing Municipal Science and Technology Project (No. Z221100007122003).
Abstract: Single Image Super-Resolution (SISR) technology aims to reconstruct a clear, high-resolution image with more information from an input low-resolution image that is blurry and contains less information. This technology has significant research value and is widely used in fields such as medical imaging, satellite image processing, and security surveillance. Despite significant progress in existing research, challenges remain in reconstructing clear and complex texture details, with issues such as edge blurring and artifacts still present, and the visual perception effect still needs further enhancement. Therefore, this study proposes a Pyramid Separable Channel Attention Network (PSCAN) for the SISR task. This method designs a convolutional backbone network composed of Pyramid Separable Channel Attention blocks to effectively extract and fuse multi-scale features. This expands the model's receptive field, reduces resolution loss, and enhances the model's ability to reconstruct texture details. Additionally, an innovative artifact loss function is designed to better distinguish between artifacts and real edge details, reducing artifacts in the reconstructed images. We conducted comprehensive ablation and comparative experiments on the Arabidopsis root image dataset and several public datasets. The experimental results show that the proposed PSCAN method achieves the best-known performance in both subjective visual effects and objective evaluation metrics, with improvements of 0.84 in Peak Signal-to-Noise Ratio (PSNR) and 0.017 in Structural Similarity Index (SSIM). This demonstrates that the method effectively preserves high-frequency texture details, reduces artifacts, and has good generalization performance.
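For context on the reported gains, PSNR for images scaled to [0, 1] can be computed as below; a 0.84 dB improvement corresponds to roughly an 18% reduction in mean squared error (since 10^(-0.084) ≈ 0.82). The toy images here are placeholders.

```python
# PSNR in dB for images in [0, 1].
import numpy as np

def psnr(x, y, peak=1.0):
    mse = np.mean((x - y) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.random.rand(64, 64)
print(psnr(ref, np.clip(ref + 0.01 * np.random.randn(64, 64), 0, 1)))
```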
Abstract: Digital in-line holographic microscopy (DIHM) is a widely used interference technique for real-time reconstruction of living cells' morphological information with a large space-bandwidth product and a compact setup. However, the need for a larger detector pixel size to improve imaging photosensitivity, field of view, and signal-to-noise ratio often leads to the loss of sub-pixel information and limited pixel resolution. Additionally, the twin image appearing in the reconstruction severely degrades the quality of the reconstructed image. The deep learning (DL) approach has emerged as a powerful tool for phase retrieval in DIHM, effectively addressing these challenges. However, most DL-based strategies are data-driven or end-to-end network approaches, suffering from excessive data dependency and limited generalization ability. Herein, a novel multi-prior physics-enhanced neural network with pixel super-resolution (MPPN-PSR) for phase retrieval in DIHM is proposed. It encapsulates the physical model prior, sparsity prior, and deep image prior in an untrained deep neural network. The effectiveness and feasibility of MPPN-PSR are demonstrated by comparing it with other traditional and learning-based phase retrieval methods. With the capabilities of pixel super-resolution, twin-image elimination, and high throughput jointly from a single-shot intensity measurement, the proposed DIHM approach is expected to be widely adopted in biomedical workflows and industrial measurement.
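The untrained-network idea can be illustrated with a generic deep-image-prior-style fitting loop; the forward operator below is a deliberately simplified stand-in (it is not a holographic propagation model), and the network, priors, and weights are assumptions.

```python
# Sketch of an untrained-network reconstruction loop (illustrative; not MPPN-PSR).
import torch
import torch.nn as nn

def forward_model(obj):
    """Stand-in forward operator mapping an object estimate to a predicted intensity image."""
    return obj ** 2  # placeholder; real DIHM would propagate a complex field to the detector plane

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
z = torch.randn(1, 1, 64, 64)              # fixed random input; the network itself acts as the prior
measured = torch.rand(1, 1, 64, 64)        # single-shot intensity measurement (toy data here)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):
    obj = net(z)                                              # current object estimate
    loss = torch.mean((forward_model(obj) - measured) ** 2)  # data-consistency (physics) term
    loss = loss + 1e-4 * torch.mean(torch.abs(obj))          # simple sparsity prior
    opt.zero_grad()
    loss.backward()
    opt.step()
```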
Funding: This work was supported by the Sichuan Science and Technology Program (2023YFG0262).
Abstract: Transformer-based stereo image super-resolution reconstruction (Stereo SR) methods have significantly improved image quality. However, existing methods pay insufficient attention to detailed features and do not consider the offset of pixels along the epipolar lines in complementary views when integrating stereo information. To address these challenges, this paper introduces a novel epipolar line window attention stereo image super-resolution network (EWASSR). For detail feature restoration, we design a feature extractor based on a Transformer and a convolutional neural network (CNN), which consists of (shifted) window-based self-attention ((S)W-MSA) and feature distillation and enhancement blocks (FDEB). This combination effectively solves the problem of global image perception and local feature attention and captures more discriminative high-frequency features of the image. Furthermore, to address the offset of complementary pixels in stereo images, we propose an epipolar line window attention (EWA) mechanism, which divides windows along the epipolar direction to promote efficient matching of shifted pixels, even in smooth pixel areas. More accurate pixel matching can be achieved by using adjacent pixels in the window as a reference. Extensive experiments demonstrate that our EWASSR can reconstruct more realistic detailed features. Comparative quantitative results show that, on the Middlebury and Flickr1024 datasets for 2× SR, the peak signal-to-noise ratio (PSNR) of our EWASSR increased by 0.37 dB and 0.34 dB, respectively, compared with recent networks.
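The window division along the epipolar direction can be pictured with the partitioning sketch below, which groups columns of a rectified feature map into 1×win windows along the width axis (taken here as the epipolar direction); the window size and tensor layout are assumptions.

```python
# Partition (B, C, H, W) features into non-overlapping windows along the width (epipolar) axis.
import torch

def epipolar_windows(feat, win=8):
    B, C, H, W = feat.shape
    assert W % win == 0
    x = feat.view(B, C, H, W // win, win)                                # group columns into windows
    return x.permute(0, 2, 3, 1, 4).reshape(B * H * (W // win), C, win)  # one sequence per window

tokens = epipolar_windows(torch.randn(2, 64, 32, 64), win=8)  # attention is then applied per window
print(tokens.shape)                                           # torch.Size([512, 64, 8])
```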
Funding: This work was funded by the Deanship of Scientific Research at King Khalid University through the Large Group Research Project under Grant Number RGP2/80/44.
Abstract: Hyperspectral images can easily discriminate different materials due to their fine spectral resolution. However, obtaining a hyperspectral image (HSI) with a high spatial resolution is still a challenge, as we are limited by high computing requirements. The spatial resolution of an HSI can be enhanced by utilizing Deep Learning (DL) based super-resolution (SR). A 3D-CNNHSR model is developed in the present investigation for 3D spatial super-resolution of HSI without losing the spectral content. The 3D-CNNHSR model was tested on the Hyperion HSI. Pre-processing of the HSI was performed before applying the SR model so that the full advantage of the hyperspectral data could be utilized while minimizing errors. The key innovation of the present investigation is the use of 3D convolution, which simultaneously applies convolution in both the spatial and spectral dimensions and captures spatial-spectral features. By clustering contiguous spectral content together, a cube is formed, and by convolving the cube with a 3D kernel, a 3D convolution is realized. The 3D-CNNHSR model was compared with a 2D-CNN model; additionally, the assessment was based on higher-resolution data from the Sentinel-2 satellite. Based on the evaluation metrics, it was observed that the 3D-CNNHSR model yields better results for the SR of HSI with efficient computation, the cost of which is significantly lower than in previous studies.
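The spatial-spectral 3D convolution described above corresponds directly to a standard Conv3d applied to a band-stacked cube, as in the minimal example below (the cube size and kernel shape are illustrative assumptions).

```python
# A 3D kernel slides over the spectral dimension and both spatial dimensions at once.
import torch
import torch.nn as nn

cube = torch.randn(1, 1, 32, 64, 64)                         # (batch, channel, bands, height, width)
conv3d = nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1))
features = conv3d(cube)                                      # spatial-spectral feature maps
print(features.shape)                                        # torch.Size([1, 8, 32, 64, 64])
```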
Funding: Supported by the Open Project of the Ministry of Industry and Information Technology Key Laboratory of Performance and Reliability Testing and Evaluation for Basic Software and Hardware.
Abstract: Background: Recurrent recovery is a common method for video super-resolution (VSR) that models the correlation between frames via hidden states. However, applying this structure in real-world scenarios can lead to unsatisfactory artifacts. We found that in real-world VSR training, the use of unknown and complex degradation can better simulate the degradation process in the real world. Methods: Based on this, we propose the RealFuVSR model, which simulates real-world degradation and mitigates the artifacts caused by VSR. Specifically, we propose a multiscale feature extraction (MSF) module that extracts and fuses features from multiple scales, thereby facilitating the elimination of hidden-state artifacts. To improve the accuracy of the hidden-state alignment information, RealFuVSR uses advanced optical-flow-guided deformable convolution. Moreover, a cascaded residual upsampling module is used to eliminate the noise caused by the upsampling process. Results: The experiments demonstrate that the RealFuVSR model can not only recover high-quality videos but also outperform the state-of-the-art RealBasicVSR and RealESRGAN models.
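The "unknown and complex degradation" used for real-world VSR training can be approximated by randomly chained degradations, as in the hedged sketch below; the specific blur, resampling, and noise choices are assumptions and not RealFuVSR's actual pipeline.

```python
# Randomly chained blur, downsampling, and noise for synthesizing real-world-like LR frames.
import random
import torch
import torch.nn.functional as F

def random_degrade(hr, scale=4):
    k = random.choice([3, 5, 7])                               # random blur extent
    blurred = F.avg_pool2d(hr, k, stride=1, padding=k // 2)    # cheap stand-in for a random blur kernel
    mode = random.choice(["bilinear", "bicubic", "nearest"])   # random resampling operator
    lr = F.interpolate(blurred, scale_factor=1 / scale, mode=mode)
    noise = 0.01 * random.random() * torch.randn_like(lr)      # random Gaussian noise level
    return (lr + noise).clamp(0, 1)

lr_frame = random_degrade(torch.rand(1, 3, 256, 256))
```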
基金supported by the National Natural Science Foundation of China(61971165)the Key Research and Development Program of Hubei Province(2020BAB113)。
Abstract: Previous deep learning-based super-resolution (SR) methods rely on the assumption that the degradation process is predefined (e.g., bicubic downsampling). Thus, their performance would deteriorate if the real degradation is not consistent with this assumption. To deal with real-world scenarios, existing blind SR methods are committed to estimating both the degradation and the super-resolved image with an extra loss or an iterative scheme. However, degradation estimation, which requires more computation, results in limited SR performance due to accumulated estimation errors. In this paper, we propose a contrastive regularization built upon contrastive learning to exploit the information of blurry images and clear images as negative and positive samples, respectively. The contrastive regularization ensures that the restored image is pulled closer to the clear image and pushed far away from the blurry image in the representation space. Furthermore, instead of estimating the degradation, we extract global statistical prior information to capture the character of the distortion. Considering the coupling between the degradation and the low-resolution image, we embed the global prior into the distortion-specific SR network to make our method adaptive to changes in distortion. We term our distortion-specific network with contrastive regularization CRDNet. Extensive experiments on synthetic and real-world scenes demonstrate that our lightweight CRDNet surpasses state-of-the-art blind super-resolution approaches.
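As a hedged illustration of embedding a "global statistical prior" into an SR network, the sketch below conditions a feature map on the per-channel mean and standard deviation of the low-resolution input via feature-wise scaling and shifting; this specific choice of statistics and modulation is an assumption, not CRDNet's actual design.

```python
# Feature-wise modulation conditioned on global statistics of the degraded input (illustrative).
import torch
import torch.nn as nn

class GlobalPriorModulation(nn.Module):
    def __init__(self, channels, prior_dim=6):
        super().__init__()
        self.to_scale_shift = nn.Linear(prior_dim, channels * 2)

    def forward(self, feat, lr_img):
        # Global statistics of the degraded input act as a compact distortion descriptor.
        stats = torch.cat([lr_img.mean(dim=(2, 3)), lr_img.std(dim=(2, 3))], dim=1)
        scale, shift = self.to_scale_shift(stats).chunk(2, dim=1)
        return feat * (1 + scale[..., None, None]) + shift[..., None, None]

feat = torch.randn(2, 64, 32, 32)
lr = torch.rand(2, 3, 128, 128)
out = GlobalPriorModulation(64, prior_dim=6)(feat, lr)   # prior_dim = 2 stats * 3 RGB channels
```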
Funding: Supported in part by the National Natural Science Foundation of China (62276192).
Abstract: Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from an input low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, which is beneficial for subsequent applications. The development of deep learning has promoted significant progress in hyperspectral image super-resolution, and the powerful expression capabilities of deep neural networks make the predicted results more reliable. Recently, several of the latest deep learning technologies have led to an explosion of hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective is absent. To this end, in this survey, we first introduce the concept of hyperspectral image super-resolution and classify the methods according to whether or not they use auxiliary information. Then, we review the learning-based methods in three categories, including single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, we summarize the commonly used hyperspectral datasets, and the evaluations of some representative methods in the three categories are performed qualitatively and quantitatively. Moreover, we briefly introduce several typical applications of hyperspectral image super-resolution, including ground object classification, urban change detection, and ecosystem monitoring. Finally, we provide the conclusion and the challenges of existing learning-based methods, looking forward to potential future research directions.
Funding: Support from the CAS West Light Grant (xbzgzdsys-202206) and the National Key Research and Development Program of China (2021YFA1401003).
Abstract: Super-resolution (SR) microscopy has dramatically enhanced our understanding of biological processes. However, scattering media in thick specimens severely limit the spatial resolution, often rendering the images unclear or indistinguishable. Additionally, live-cell imaging faces challenges in achieving high temporal resolution for fast-moving subcellular structures. Here, we present the principles of synthetic wave microscopy (SWM) to extract three-dimensional information from thick unlabeled specimens, where photobleaching and phototoxicity are avoided. SWM exploits multiple-wave interferometry to reveal the specimen's phase information in the area of interest, which is not affected by the scattering media in the optical path. SWM achieves a resolution of ~0.42λ/NA at an imaging speed of up to 10^6 pixels/s. SWM provides better temporal resolution and sensitivity than most conventional microscopes currently available while maintaining exceptional SR and anti-scattering capabilities. Penetrating through scattering media is challenging for conventional imaging techniques; remarkably, SWM retains its efficacy even under low signal-to-noise ratios. It facilitates the visualization of dynamic subcellular structures in live cells, encompassing the tubular endoplasmic reticulum (ER), lipid droplets, mitochondria, and lysosomes.
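To put the quoted resolution in perspective, the stated ~0.42λ/NA can be evaluated for illustrative optical parameters; the wavelength and numerical aperture below are assumptions, not values reported in the work.

```python
# Quick numeric reading of the ~0.42*lambda/NA resolution figure (assumed example parameters).
wavelength_nm, numerical_aperture = 532.0, 1.4
print(f"resolution ≈ {0.42 * wavelength_nm / numerical_aperture:.0f} nm")  # ≈ 160 nm for these values
```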