Abstract: In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, and then compensating for the resulting defects through deep learning models trained on a large amount of ideal, superior, or alternative data. This strategic approach has gained increasing popularity owing to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, which is critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recover them through the application of deep learning networks, but also to enhance, in return, other crucial parameters such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
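The following is a minimal, generic sketch of the "compromise-then-compensate" training paradigm this abstract describes: a network is trained on paired degraded/ideal images so it can later restore deliberately compromised measurements (e.g., lowered SNR). It is not taken from any specific method in the review; the small residual CNN, the synthetic noise model, and all hyperparameters are illustrative assumptions.

# Illustrative sketch of paired supervised training for restoring compromised measurements.
# The network, data, and settings below are assumptions, not a published method.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class RestorationCNN(nn.Module):
    """Small convolutional network mapping degraded images to restored images."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )
    def forward(self, x):
        # Residual learning: predict a correction to add to the degraded input.
        return x + self.net(x)

# Synthetic stand-in for paired (degraded, ideal) training data.
ideal = torch.rand(64, 1, 128, 128)                # "superior" ground-truth images
degraded = ideal + 0.2 * torch.randn_like(ideal)   # e.g., deliberately lowered SNR
loader = DataLoader(TensorDataset(degraded, ideal), batch_size=8, shuffle=True)

model = RestorationCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for epoch in range(5):                             # short run, for illustration only
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

Once trained, the same model is applied at inference time to new compromised measurements, trading a cheaper or faster acquisition for a learned computational restoration step.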
Abstract: Deep learning-based image reconstruction methods have achieved remarkable success in phase recovery and holographic imaging. However, the generalization of their image reconstruction performance to new types of samples never seen by the network remains a challenge. Here we introduce a deep learning framework, termed Fourier Imager Network (FIN), that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting unprecedented success in external generalization. The FIN architecture is based on spatial Fourier transform modules that process the spatial frequencies of its inputs using learnable filters and a global receptive field. Compared with existing convolutional deep neural networks used for hologram reconstruction, FIN exhibits superior generalization to new types of samples while also being much faster in its image inference, completing the hologram reconstruction task in ~0.04 s per 1 mm² of sample area. We experimentally validated the performance of FIN by training it using human lung tissue samples and blindly testing it on human prostate, salivary gland tissue, and Pap smear samples, demonstrating its superior external generalization and image reconstruction speed. Beyond holographic microscopy and quantitative phase imaging, FIN and the underlying neural network architecture might open up various new opportunities to design broadly generalizable deep learning models in computational imaging and machine vision.
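To make the "spatial Fourier transform modules" idea concrete, below is a minimal sketch of a spectral layer in that spirit: the input is taken to the frequency domain, its spatial frequencies are modulated by learnable filters (which gives every output pixel a global receptive field), and the result is transformed back. This is an illustrative approximation, not the published FIN code; the number of retained frequency modes and the layer sizes are assumptions.

# Sketch of a learnable spectral-filtering layer (assumed design, not the FIN implementation).
import torch
import torch.nn as nn

class SpectralFilterLayer(nn.Module):
    def __init__(self, channels, modes=32):
        super().__init__()
        self.modes = modes
        # Learnable complex-valued filter applied to the lowest `modes` spatial frequencies.
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, modes, modes, dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, channels, H, W), real-valued hologram or image patch
        x_ft = torch.fft.rfft2(x, norm="ortho")        # spatial-frequency representation
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        # Elementwise learnable filtering of the retained low-frequency block;
        # since filtering acts in Fourier space, the receptive field is global.
        out_ft[:, :, :m, :m] = x_ft[:, :, :m, :m] * self.weight
        return torch.fft.irfft2(out_ft, s=x.shape[-2:], norm="ortho")

# Usage: filter a batch of two single-channel 256x256 inputs.
layer = SpectralFilterLayer(channels=1, modes=32)
y = layer(torch.randn(2, 1, 256, 256))
print(y.shape)   # torch.Size([2, 1, 256, 256])

In a full reconstruction network, several such spectral blocks would be interleaved with pointwise nonlinearities and trained end-to-end on hologram/ground-truth pairs.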
Abstract: Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields, including the physical, medical, and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth of field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth of field of a 63×/1.4 NA objective lens, while also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrated the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images covering various axial permutations and unknown axial positioning errors. We also demonstrated wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework, performing 3D image reconstruction of a sample from a few wide-field 2D fluorescence images as input and matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks to microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
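The sketch below illustrates the recurrent-convolutional idea behind this kind of volumetric inference: a handful of 2D planes captured at arbitrary axial positions are fed sequentially into a convolutional recurrent cell, and the accumulated hidden state is decoded into an extended image stack. This is a simplified illustration rather than the Recurrent-MZ implementation; the ConvGRU cell design, the way axial position is encoded, and the 16-plane output depth are assumptions.

# Simplified sketch of recurrent fusion of sparse axial planes into a volume (assumed design).
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.update = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)
        self.reset = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)

    def forward(self, x, h):
        z = torch.sigmoid(self.update(torch.cat([x, h], dim=1)))
        r = torch.sigmoid(self.reset(torch.cat([x, h], dim=1)))
        n = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * n

class RecurrentVolumeNet(nn.Module):
    def __init__(self, hid_ch=16, out_planes=16):
        super().__init__()
        # Input channels: 1 image channel + 1 channel encoding the plane's axial position.
        self.cell = ConvGRUCell(in_ch=2, hid_ch=hid_ch)
        self.decoder = nn.Conv2d(hid_ch, out_planes, 3, padding=1)
        self.hid_ch = hid_ch

    def forward(self, planes, z_positions):
        # planes: (batch, n_planes, H, W); z_positions: (batch, n_planes), normalized depths
        b, n, H, W = planes.shape
        h = planes.new_zeros(b, self.hid_ch, H, W)
        for i in range(n):  # input planes may arrive in any axial order
            z_map = z_positions[:, i].view(b, 1, 1, 1).expand(b, 1, H, W)
            x = torch.cat([planes[:, i : i + 1], z_map], dim=1)
            h = self.cell(x, h)
        return self.decoder(h)  # (batch, out_planes, H, W) reconstructed stack

# Usage: infer a 16-plane stack from 3 sparsely sampled wide-field planes.
net = RecurrentVolumeNet()
vol = net(torch.rand(1, 3, 64, 64), torch.tensor([[0.1, 0.5, 0.9]]))
print(vol.shape)   # torch.Size([1, 16, 64, 64])

Because the planes are folded into the hidden state one at a time, the same network can accept different numbers and orderings of input planes, which is what makes this style of recurrence attractive for sparse, arbitrarily positioned axial scans.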