Abstract. Objective and Impact Statement. Segmentation of blood vessels from two-photon microscopy (2PM) angiograms of brains has important applications in hemodynamic analysis and disease diagnosis. Here, we develop a generalizable deep learning technique for accurate 2PM vascular segmentation of sizable regions in mouse brains acquired from multiple 2PM setups. The technique is computationally efficient and thus ideal for large-scale neurovascular analysis. Introduction. Vascular segmentation from 2PM angiograms is an important first step in hemodynamic modeling of brain vasculature. Existing deep-learning-based segmentation methods either lack the ability to generalize to data from different imaging systems or are computationally infeasible for large-scale angiograms. In this work, we overcome both limitations with a method that generalizes across imaging systems and can segment large-scale angiograms. Methods. We employ a computationally efficient deep learning framework with a loss function that combines a balanced binary cross-entropy loss and total variation regularization on the network's output. Its effectiveness is demonstrated on experimentally acquired in vivo angiograms from mouse brains of dimensions up to 808 × 808 × 702 μm. Results. To demonstrate the superior generalizability of our framework, we train on data from only one 2PM microscope and demonstrate high-quality segmentation on data from a different microscope without any network tuning. Overall, our method achieves 10× faster computation in terms of voxels segmented per second and 3× larger depth compared to the state of the art. Conclusion. Our work provides a generalizable and computationally efficient anatomical modeling framework for brain vasculature, consisting of deep-learning-based vascular segmentation followed by graphing. It paves the way for future modeling and analysis of hemodynamic response at much greater scales than were accessible before.
Funding: National Science Foundation (1813848 and 1846784).
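The loss described in the Methods above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the class-balancing scheme, the TV weight `tv_weight`, and the function name are assumptions, and the real training would operate on a differentiable framework's tensors rather than NumPy arrays.

```python
import numpy as np

def balanced_bce_tv_loss(pred, target, tv_weight=0.01, eps=1e-7):
    """Balanced binary cross-entropy plus total-variation regularization.

    Hypothetical sketch: the foreground/background BCE terms are weighted
    by inverse class frequency (vessels occupy few voxels), and an
    anisotropic TV penalty on the predicted probability volume encourages
    smooth, connected vessel masks.
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    beta = 1.0 - target.mean()  # fraction of background voxels
    bce = -(beta * target * np.log(pred)
            + (1.0 - beta) * (1.0 - target) * np.log(1.0 - pred)).mean()
    # Anisotropic TV: mean absolute finite difference along each axis.
    tv = sum(np.abs(np.diff(pred, axis=a)).mean() for a in range(pred.ndim))
    return bce + tv_weight * tv
```

Weighting by `beta` counteracts the severe class imbalance of vascular volumes, where background voxels dominate; the TV term penalizes isolated noisy voxels in the output.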
Abstract. Deep learning has been broadly applied to imaging in scattering applications. A common framework is to train a descattering network for image recovery by removing scattering artifacts. To achieve the best results on a broad spectrum of scattering conditions, individual "expert" networks need to be trained for each condition. However, an expert's performance degrades sharply when the testing condition differs from the training condition. An alternative brute-force approach is to train a "generalist" network using data from diverse scattering conditions. This generally requires a larger network to encapsulate the diversity in the data and a sufficiently large training set to avoid overfitting. Here, we propose an adaptive learning framework, termed dynamic synthesis network (DSN), which dynamically adjusts the model weights to adapt to different scattering conditions. The adaptability is achieved by a novel "mixture of experts" architecture that dynamically synthesizes a network by blending multiple experts using a gating network. We demonstrate the DSN in holographic 3D particle imaging for a variety of scattering conditions. We show in simulation that our DSN generalizes across a continuum of scattering conditions. In addition, we show that by training the DSN entirely on simulated data, the network can generalize to experiments and achieve robust 3D descattering. We expect the same concept can find many other applications, such as denoising and imaging in scattering media. Broadly, our dynamic synthesis framework opens up a new paradigm for designing highly adaptive deep learning and computational imaging techniques.
Funding: National Science Foundation (NSF) (Grant Nos. 1813848 and 1813910).
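The core "dynamic synthesis" idea above, blending expert networks with a gating network, can be illustrated with a toy sketch. This is not the DSN architecture itself: in the paper the gate is a learned network conditioned on the input, whereas here `gate_logits` are simply given numbers, and the experts are bare weight tensors rather than full networks.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of gate scores."""
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_synthesis(expert_weights, gate_logits):
    """Blend expert weight tensors into one synthesized tensor.

    Conceptual mixture-of-experts sketch: the gate converts per-expert
    scores into a convex combination, so the synthesized weights
    interpolate continuously between the experts as conditions change.
    """
    g = softmax(np.asarray(gate_logits, dtype=float))
    return sum(gi * wi for gi, wi in zip(g, expert_weights))
```

Because the gate output is a convex combination, an input matching one training condition recovers that expert almost exactly, while intermediate conditions receive a smoothly interpolated network, which is what gives generalization across a continuum of scattering conditions.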
Abstract. We introduce a computational framework that incorporates multiple scattering for large-scale three-dimensional (3-D) particle localization using single-shot in-line holography. Traditional holographic techniques rely on single-scattering models that become inaccurate under high particle densities and large refractive index contrasts. Existing multiple scattering solvers become computationally prohibitive for large-scale problems, which comprise millions of voxels within the scattering volume. Our approach overcomes this computational bottleneck by slice-wise computation of multiple scattering under an efficient recursive framework. In the forward model, each recursion estimates the next higher-order multiply scattered field among the object slices. In the inverse model, each order of scattering is recursively estimated by a nonlinear optimization procedure. This nonlinear inverse model is further supplemented by a sparsity-promoting procedure that is particularly effective in localizing 3-D distributed particles. We show that our multiple-scattering model leads to significant improvement in the quality of 3-D localization compared to traditional methods based on the single-scattering approximation. Our experiments demonstrate robust inverse multiple scattering, allowing reconstruction of 100 million voxels from a single 1-megapixel hologram with a sparsity prior. The performance bound of our approach is quantified in simulation and validated experimentally. Our work promises utilization of multiple scattering for versatile large-scale applications.
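The slice-wise forward model above can be sketched with a standard multi-slice (beam-propagation-style) scheme: each slice applies a thin-object scattering factor to the incident field, and the total field is re-propagated to the next slice, so higher scattering orders accumulate recursively. This is an illustrative sketch under simplifying assumptions (unit plane-wave illumination, a weak-phase thin-slice factor, evanescent components suppressed); the paper's actual recursion, which estimates each scattering order separately, is more elaborate.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Propagate a 2-D complex field by dz via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def multislice_forward(slices, dz, wavelength, dx):
    """Slice-wise multiple-scattering forward model (illustrative sketch).

    Each slice multiplies the incident field by a thin-slice scattering
    factor; propagating the *total* field to the next slice lets
    higher-order inter-slice scattering build up recursively.
    """
    n = slices[0].shape[0]
    field = np.ones((n, n), dtype=complex)   # unit plane-wave illumination
    for s in slices:
        field = field * (1.0 + 1j * s)       # assumed weak-phase slice factor
        field = angular_spectrum(field, dz, wavelength, dx)
    return field
```

The single-scattering (first Born) approximation corresponds to scattering each slice from the unperturbed illumination only; propagating the already-scattered field instead is what captures the inter-slice multiple scattering that matters at high particle density.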