The rapid development of artificial intelligence (AI) facilitates various applications from all areas but also poses great challenges in its hardware implementation in terms of speed and energy because of the explosive growth of data. Optical computing provides a distinctive perspective to address this bottleneck by harnessing the unique properties of photons, including broad bandwidth, low latency, and high energy efficiency. In this review, we introduce the latest developments of optical computing for different AI models, including feedforward neural networks, reservoir computing, and spiking neural networks (SNNs). Recent progress in integrated photonic devices, combined with the rise of AI, provides a great opportunity for the renaissance of optical computing in practical applications. This endeavor requires multidisciplinary efforts from a broad community. This review provides an overview of the state-of-the-art accomplishments in recent years, discusses the availability of current technologies, and points out various remaining challenges in different aspects to push the frontier. We anticipate that the era of large-scale integrated photonic processors will soon arrive for practical AI applications in the form of hybrid optoelectronic frameworks.
Coded exposure photography is a promising computational imaging technique capable of addressing motion blur much better than a conventional camera, via tailoring invertible blur kernels. However, existing methods suffer from restrictive assumptions, complicated preprocessing, and inferior performance. To address these issues, we propose an end-to-end framework that handles general motion blurs with a unified deep neural network and optimizes the shutter's encoding pattern together with the deblurring processing to achieve high-quality sharp images. The framework incorporates a learnable flutter shutter sequence to capture coded exposure snapshots and a learning-based deblurring network to restore the sharp images from the blurry inputs. By jointly co-optimizing the encoding and deblurring modules, our approach avoids exhaustively searching for encoding sequences and achieves optimal overall deblurring performance. Compared with existing coded-exposure-based motion deblurring methods, the proposed framework eliminates tedious preprocessing steps such as foreground segmentation and blur kernel estimation, and extends coded exposure deblurring to more general blind and nonuniform cases. Both simulation and real-data experiments demonstrate the superior performance and flexibility of the proposed method.
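The key idea, that a fluttered shutter keeps the blur kernel invertible, can be illustrated with a minimal numerical sketch (the code length and random pattern here are illustrative assumptions, not the learned sequence from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 32                                   # number of exposure sub-intervals

box_code = np.ones(T)                    # conventional shutter: open the whole time
flutter_code = rng.integers(0, 2, T).astype(float)   # pseudo-random open/close code

def min_spectrum(code, n=256):
    """Smallest magnitude of the blur kernel's frequency response.
    Near-zero values mark frequencies that no deblurring step can recover."""
    kernel = code / code.sum()           # motion-blur kernel induced by the code
    return np.abs(np.fft.fft(kernel, n)).min()

box_min = min_spectrum(box_code)
flutter_min = min_spectrum(flutter_code)
```

An always-open shutter yields a box kernel whose frequency response has exact nulls, so those frequencies are lost no matter what deblurring network follows; a fluttered code keeps the spectrum bounded away from zero, which is what makes joint learning of the code and the deblurring network worthwhile.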
Scalable, high-capacity, and low-power computing architecture is the primary assurance for increasingly manifold and large-scale machine learning tasks. Traditional electronic artificial agents built on conventional power-hungry processors have faced the issues of energy and scaling walls, hindering sustainable performance improvement and iterative multi-task learning. Turning to light as another modality, photonic computing has been progressively applied in highly efficient neuromorphic systems. Here, we propose a reconfigurable lifelong-learning optical neural network (L2ONN) for highly integrated tens-of-task machine intelligence with elaborate algorithm-hardware co-design. Benefiting from the inherent sparsity and parallelism in massive photonic connections, L2ONN learns each single task by adaptively activating sparse photonic neuron connections in the coherent light field, while incrementally acquiring expertise on various tasks by gradually enlarging the activation. The multi-task optical features are processed in parallel by multi-spectrum representations allocated with different wavelengths. Extensive evaluations on free-space and on-chip architectures confirm that, for the first time, L2ONN avoids the catastrophic forgetting issue of photonic computing, exhibiting versatile skills on challenging tens of tasks (vision classification, voice recognition, medical diagnosis, etc.) with a single model. In particular, L2ONN achieves more than an order of magnitude higher efficiency than representative electronic artificial neural networks, and 14× larger capacity than existing optical neural networks, while maintaining competitive performance on each individual task. The proposed photonic neuromorphic architecture points to a new form of lifelong-learning scheme, permitting terminal/edge AI systems with light-speed efficiency and unprecedented scalability.
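The sparse-activation principle behind this kind of lifelong learning can be sketched in a few lines. This is a conceptual toy, not the photonic implementation: the neuron count, mask allocation, and linear forward model are illustrative assumptions. Each task claims a disjoint sparse subset of a shared connection matrix, so learning a new task cannot disturb an old one:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 64
weights = rng.normal(size=(n_neurons, n_neurons))   # shared "photonic" connections
task_masks = {}                                     # task id -> binary activation mask
used = np.zeros(n_neurons, dtype=bool)              # neurons already claimed

def allocate_task(task_id, k):
    """Activate k currently unused neurons for a new task (sparse activation)."""
    free = np.flatnonzero(~used)
    chosen = rng.choice(free, size=k, replace=False)
    mask = np.zeros(n_neurons, dtype=bool)
    mask[chosen] = True
    used[chosen] = True
    task_masks[task_id] = mask
    return mask

def forward(task_id, x):
    """Inference for one task uses only its own sparse sub-network."""
    m = task_masks[task_id].astype(float)
    return (weights * np.outer(m, m)) @ x

allocate_task("task_A", 8)
y_before = forward("task_A", np.ones(n_neurons))
allocate_task("task_B", 8)          # capacity grows incrementally with each task
y_after = forward("task_A", np.ones(n_neurons))
# Disjoint masks mean adding task_B leaves task_A's output untouched: no forgetting.
```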
Optical aberrations degrade the performance of fluorescence microscopy. Conventional adaptive optics (AO) leverages specific devices, such as the Shack–Hartmann wavefront sensor and deformable mirror, to measure and correct optical aberrations. However, conventional AO requires either additional hardware or a more complicated imaging procedure, resulting in higher cost or a lower acquisition speed. In this study, we propose a novel space-frequency encoding network (SFE-Net) that can directly estimate the aberrated point spread functions (PSFs) from biological images, enabling fast optical aberration estimation with high accuracy without extra optics or additional image acquisition. We show that with the estimated PSFs, the optical aberration can be computationally removed by a deconvolution algorithm. Furthermore, to fully exploit the benefits of SFE-Net, we incorporate the estimated PSF into the neural network architecture design to devise an aberration-aware deep-learning super-resolution model, dubbed SFT-DFCAN. We demonstrate that the combination of SFE-Net and SFT-DFCAN enables instant digital AO and optical aberration-aware super-resolution reconstruction for live-cell imaging.
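Once the PSF has been estimated, aberration removal reduces to classical deconvolution. Below is a minimal sketch using standard Richardson–Lucy iterations (1D, circular boundaries; the deconvolution algorithm actually used in the paper may differ):

```python
import numpy as np

def conv(x, psf):
    # Circular convolution via FFT; the PSF is assumed centered at index 0.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))

def richardson_lucy(blurred, psf, n_iter=50):
    """Classic Richardson-Lucy deconvolution given a (known or estimated) PSF."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_flipped = np.roll(psf[::-1], 1)        # adjoint of circular convolution
    for _ in range(n_iter):
        ratio = blurred / np.maximum(conv(estimate, psf), 1e-12)
        estimate *= conv(ratio, psf_flipped)
    return estimate

# Toy example: two point sources blurred by a Gaussian "aberrated" PSF.
n = 128
x = np.zeros(n); x[40] = 1.0; x[70] = 0.6
grid = np.arange(n); grid = np.minimum(grid, n - grid)   # circular distance
psf = np.exp(-0.5 * (grid / 3.0) ** 2); psf /= psf.sum()
blurred = conv(x, psf)
restored = richardson_lucy(blurred, psf)
```

The multiplicative update preserves non-negativity, which suits fluorescence intensities; in practice the quality of the result hinges on how accurate the estimated PSF is, which is exactly the problem SFE-Net addresses.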
Endowed with superior computing speed and energy efficiency, optical neural networks (ONNs) have attracted ever-growing attention in recent years. Existing optical computing architectures are mainly single-channel due to the lack of advanced optical connection and interaction operators, solving simple tasks such as hand-written digit classification, saliency detection, etc. The limited computing capacity and scalability of single-channel ONNs restrict the optical implementation of advanced machine vision. Herein, we develop Monet: a multichannel optical neural network architecture for universal multiple-input multiple-channel optical computing, based on a novel projection-interference-prediction framework in which the inter- and intra-channel connections are mapped to optical interference and diffraction. In Monet, optical interference patterns are generated by projecting and interfering the multichannel inputs in a shared domain. These patterns, encoding the correspondences together with feature embeddings, are iteratively produced through the projection-interference process to predict the final output optically. For the first time, Monet validates that multichannel processing properties can be implemented optically with high efficiency, enabling real-world intelligent multichannel-processing tasks, including 3D/motion detection, to be solved via optical computing. Extensive experiments on different scenarios demonstrate the effectiveness of Monet in handling advanced machine vision tasks with accuracy comparable to its electronic counterparts, yet achieving a ten-fold improvement in computing efficiency. For intelligent computing, the trend toward dealing with real-world advanced tasks is irreversible. By breaking the capacity and scalability limitations of single-channel ONNs and further exploring the multichannel processing potential of wave optics, we anticipate that the proposed technique will accelerate the development of more powerful optical AI as critical support for modern advanced machine vision.
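The role of interference as the inter-channel connection can be seen from the intensity of two superposed fields: a detector measures |A+B|², whose cross term 2·Re(A·B*) carries the correspondence between the channels. A minimal sketch (the field shapes and statistics are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (32, 32)
# Two channel inputs encoded as complex optical fields (amplitude + phase).
a = rng.normal(size=shape) * np.exp(1j * rng.uniform(0, 2 * np.pi, shape))
b = rng.normal(size=shape) * np.exp(1j * rng.uniform(0, 2 * np.pi, shape))

# A detector only measures intensity; interfering the projected channels in a
# shared domain makes their cross-correlation appear as a measurable term.
interference = np.abs(a + b) ** 2
cross_term = interference - np.abs(a) ** 2 - np.abs(b) ** 2   # = 2*Re(a * conj(b))
```

Subtracting the two single-channel intensities isolates the cross term exactly, which is how an intensity-only measurement can still encode inter-channel correspondences.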
Various biological behaviors can only be observed in 3D, at high speed, over the long term, and with low phototoxicity. Light-field microscopy (LFM) provides an elegant, compact solution to record 3D information simultaneously in a tomographic manner, which facilitates high photon efficiency. However, LFM still suffers from the missing-cone problem, leading to degraded axial resolution and ringing effects after deconvolution. Here, we propose a mirror-enhanced scanning LFM (MiSLFM) to achieve long-term high-speed 3D imaging at super-resolved axial resolution with a single objective, by fully exploiting the extended depth of field of LFM with a tilted mirror placed below the samples. To establish the unique capabilities of MiSLFM, we performed extensive experiments: we observed various organelle interactions and intercellular interactions in different types of photosensitive cells under extremely low-light conditions. Moreover, we demonstrated that the superior axial resolution facilitates more robust blood cell tracking in zebrafish larvae at high speed.
The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation approach to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
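The two ingredients, unpaired-domain mapping and the saliency constraint, can be sketched as loss terms. This is a conceptual toy with a crude mean-threshold saliency proxy; UTOM's actual saliency constraint and network losses are more elaborate:

```python
import numpy as np

def cycle_loss(x, x_reconstructed):
    """Cycle consistency: mapping A->B->A should return the original image,
    which is what removes the need for paired training data."""
    return np.mean(np.abs(x - x_reconstructed))

def saliency_mask(img):
    # Crude saliency proxy: pixels brighter than the image mean.
    return img > img.mean()

def saliency_loss(src, translated):
    """Penalize translations that move content: the salient regions of input
    and output should overlap (the saliency constraint, sketched)."""
    m_src, m_out = saliency_mask(src), saliency_mask(translated)
    return np.mean(m_src ^ m_out)          # fraction of disagreeing pixels

rng = np.random.default_rng(3)
img = rng.random((64, 64))
same_content = img * 1.8 + 0.1             # style change, content preserved
shifted = np.roll(img, 16, axis=1)         # content moved: should be penalized
```

A pure style change leaves the saliency mask unchanged, while a spatial rearrangement of content is penalized even though both might fool a domain discriminator; that is the intuition behind content preservation without paired data.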
Structured illumination microscopy (SIM) has become the standard for next-generation wide-field microscopy, offering ultrahigh imaging speed, superresolution, a large field of view, and long-term imaging. Over the past decade, SIM hardware and software have flourished, leading to successful applications to various biological questions. However, unlocking the full potential of SIM system hardware requires the development of advanced reconstruction algorithms. Here, we introduce the basic theory of two SIM algorithms, namely, optical-sectioning SIM (OS-SIM) and superresolution SIM (SR-SIM), and summarize their implementation modalities. We then provide a brief overview of existing OS-SIM processing algorithms and review the development of SR-SIM reconstruction algorithms, focusing primarily on 2D-SIM, 3D-SIM, and blind-SIM. To showcase the state-of-the-art development of SIM systems and assist users in selecting a commercial SIM system for a specific application, we compare the features of representative off-the-shelf SIM systems. Finally, we provide perspectives on potential future developments of SIM.
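As a concrete taste of the OS-SIM theory reviewed here, the classic three-phase square-law reconstruction removes unmodulated out-of-focus background. A 1D toy with an assumed cosine illumination pattern (values are illustrative):

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 256)
in_focus = np.ones_like(x)                # in-focus sample responds to the pattern
background = 2.0                          # out-of-focus light is not modulated

def raw_image(phase):
    """One SIM acquisition: in-focus signal modulated by a phase-shifted
    illumination pattern, plus unmodulated out-of-focus background."""
    return in_focus * (1 + np.cos(x + phase)) + background

i1, i2, i3 = raw_image(0), raw_image(2 * np.pi / 3), raw_image(4 * np.pi / 3)

# Classic OS-SIM (square-law) reconstruction: the three-phase differences
# cancel everything that does not follow the illumination pattern.
sectioned = np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
widefield = (i1 + i2 + i3) / 3            # averaging keeps the background
```

The pairwise differences eliminate the constant background, and summing the three squared differences of 120°-shifted cosines yields a flat response proportional to the in-focus modulation depth, which is the optical-sectioning effect.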
Training an artificial neural network with backpropagation algorithms to perform advanced machine learning tasks requires an extensive computational process. This paper proposes to implement the backpropagation algorithm optically for in situ training of both linear and nonlinear diffractive optical neural networks, which enables the acceleration of training speed and improvement in energy efficiency on core computing modules. We demonstrate that the gradient of a loss function with respect to the weights of diffractive layers can be accurately calculated by measuring the forward- and backward-propagated optical fields based on light reciprocity and phase conjugation principles. The diffractive modulation weights are updated by programming a high-speed spatial light modulator to minimize the error between prediction and target output and perform inference tasks at the speed of light. We numerically validate the effectiveness of our approach on simulated networks for various applications. The proposed in situ optical learning architecture achieves accuracy comparable to in silico training with an electronic computer on the tasks of object classification and matrix-vector multiplication, which further allows the diffractive optical neural network to adapt to system imperfections. Also, the self-adaptive property of our approach facilitates the novel application of the network to all-optical imaging through scattering media. The proposed approach paves the way for robust implementation of large-scale diffractive neural networks to perform distinctive tasks all-optically.
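The structure of such a measurement-based gradient can be sketched numerically for a single phase layer: combining the field just after the modulator with the error field propagated backward through the same optics (the transpose, by reciprocity) yields the exact gradient, which we can verify against finite differences. The one-layer linear model below is an illustrative simplification, not the paper's diffractive system:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16
# Propagation operator, input field, target field, and SLM phase weights.
W = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(n)
x = rng.normal(size=n) + 1j * rng.normal(size=n)
t = rng.normal(size=n) + 1j * rng.normal(size=n)
phi = rng.uniform(0, 2 * np.pi, n)

def loss(phi):
    y = W @ (np.exp(1j * phi) * x)
    return np.sum(np.abs(y - t) ** 2)

def optical_gradient(phi):
    """Gradient from two field 'measurements': the forward field at the phase
    layer and the error field sent back through the system (W.T by reciprocity)."""
    forward = np.exp(1j * phi) * x
    err = W @ forward - t
    backward = W.T @ np.conj(err)
    return -2 * np.imag(forward * backward)

# Sanity check against central finite differences.
eps = 1e-6
g_fd = np.array([(loss(phi + eps * np.eye(n)[k]) - loss(phi - eps * np.eye(n)[k]))
                 / (2 * eps) for k in range(n)])
g = optical_gradient(phi)
```

The point is that no explicit Jacobian is ever formed: both fields are quantities an optical system could measure, which is what makes in situ training plausible.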
Micro-endoscopes are widely used for detecting and visualizing hard-to-reach areas of the human body and for in vivo observation of animals. A micro-endoscope that can realize 3D imaging at the camera frame rate could benefit various clinical and biological applications. In this work, we report the development of a compact light-field micro-endoscope (LFME) that can obtain snapshot 3D fluorescence imaging by jointly using a single-mode fiber bundle and a small-size light-field configuration. To demonstrate the real imaging performance of our method, we put a resolution chart at different z positions and capture the z-stack images successively for reconstruction, achieving a 333-μm-diameter field of view, a 24-μm optimal depth of field, and up to 3.91-μm spatial resolution near the focal plane. We also test our method on a human skin tissue section and HeLa cells. Our LFME prototype provides epi-fluorescence imaging with a relatively small (2-mm-diameter) imaging probe, making it suitable for in vivo detection of brain activity and gastrointestinal diseases in animals.
High-resolution images are widely used in our everyday life; however, high-speed video capture is more challenging due to the low frame rate of cameras working in high-resolution mode. The main bottleneck lies in the low throughput of existing imaging systems. Toward this end, snapshot compressive imaging (SCI) was proposed as a promising solution to improve the throughput of imaging systems by compressive sampling and computational reconstruction. During acquisition, multiple high-speed images are encoded and collapsed into a single measurement. Then, algorithms are employed to retrieve the video frames from the coded snapshot. Recently developed plug-and-play algorithms have made SCI reconstruction possible in large-scale problems. However, the lack of high-resolution encoding systems still precludes SCI's wide application. Thus, in this paper, we build, to the best of our knowledge, a novel hybrid coded aperture snapshot compressive imaging (HCA-SCI) system by incorporating a dynamic liquid-crystal-on-silicon device and a high-resolution lithography mask. We further implement a PnP reconstruction algorithm with cascaded denoisers for high-quality reconstruction. Based on the proposed HCA-SCI system and algorithm, we obtain a 10-megapixel SCI system to capture high-speed scenes, leading to a high throughput of 4.6×10^9 voxels per second. Both simulation and real-data experiments verify the feasibility and performance of our proposed HCA-SCI scheme.
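The SCI acquisition model compresses T high-speed frames into one coded snapshot, and a simple exposure-normalized re-masking serves as an initialization for iterative (e.g., plug-and-play) solvers. A minimal sketch with random binary masks (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
T, H, W = 8, 32, 32
frames = rng.random((T, H, W))                        # high-speed scene
masks = rng.integers(0, 2, (T, H, W)).astype(float)   # per-frame coded apertures

# Acquisition: T modulated frames collapse into ONE coded snapshot,
# so the camera only needs to read out a single measurement.
snapshot = np.sum(masks * frames, axis=0)

# A common initialization for iterative solvers: normalize the snapshot by
# how often each pixel was exposed, then re-apply the per-frame masks.
exposure = np.sum(masks, axis=0)
est = masks * (snapshot / np.maximum(exposure, 1.0))[None, ...]
```

Because the masks are binary, re-encoding this initialization reproduces the snapshot exactly; the iterative denoising steps then refine `est` toward the true frames.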
Seeing through dense occlusions and reconstructing scene images is an important but challenging task. Traditional frame-based image de-occlusion methods may lead to fatal errors when facing extremely dense occlusions, due to the lack of valid information available from the limited input occluded frames. Event cameras are bio-inspired vision sensors that record the brightness changes at each pixel asynchronously with high temporal resolution. However, synthesizing images solely from event streams is ill-posed, since only the brightness changes are recorded in the event stream and the initial brightness is unknown. In this paper, we propose an event-enhanced multi-modal fusion hybrid network for image de-occlusion, which uses event streams to provide complete scene information and frames to provide color and texture information. An event stream encoder based on the spiking neural network (SNN) is proposed to encode and denoise the event stream efficiently. A comparison loss is proposed to generate clearer results. Experimental results on a large-scale event-based and frame-based image de-occlusion dataset demonstrate that our proposed method achieves state-of-the-art performance.
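Why is synthesis from events alone ill-posed? An event camera emits only quantized log-brightness changes, so integrating the event polarities recovers the signal only up to the unknown initial level, which is exactly what the fused intensity frames supply. A single-pixel sketch (the threshold and signal are illustrative assumptions):

```python
import numpy as np

# Synthetic single-pixel log-brightness signal over time.
t = np.linspace(0, 1, 1000)
log_brightness = np.sin(2 * np.pi * t)

# Event generation model: emit +1/-1 whenever log brightness changes by c
# relative to the last reference level (asynchronous, per-pixel).
c = 0.05
events = []
ref = log_brightness[0]
for ti, L in zip(t, log_brightness):
    while L - ref >= c:
        ref += c; events.append((ti, +1))
    while ref - L >= c:
        ref -= c; events.append((ti, -1))

# Integrating polarities reconstructs the signal ONLY because we know the
# initial level; without it, the whole curve floats by an unknown offset.
recon = log_brightness[0] + c * np.cumsum([p for _, p in events])
```

The reconstruction tracks the true signal to within one contrast step c; drop the initial level and the same event stream is consistent with infinitely many scenes, which is the ambiguity frame-event fusion resolves.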
The choice of therapeutic agents remains an unsolved issue in the repair of spinal cord injury. In this work, various agents and configurations were investigated and compared for their performance in promoting nerve regeneration, including bead assemblies and bulk gels of collagen and Matrigel, under acellular and cell-laden conditions, and cerebral organoids (COs) as the in vitro pre-organized agent. First, with the Matrigel-based agents and the CO transplantations, the recipient animals gained more axon regeneration and higher Basso, Beattie, and Bresnahan (BBB) scores than with the grafted collagen gels. Second, new nerves infiltrated the transplants more uniformly in the bead-assembly form than in the molded chunks. Third, the materials loaded with neural progenitor cells (NPCs) and the CO implantation groups received more regenerated nerve fibers than their acellular counterparts, suggesting the necessity of transplanting exogenous cells for large trauma (e.g., a 5-mm-long spinal cord transection). In addition, the activated microglial cells might benefit neural regeneration in the recipient animals after CO transplantation. The organoid results suggest that in vitro maturation of a microtissue complex may be necessary before transplantation and propose organoids as premium therapeutic agents for nerve regeneration.
Funding (optical computing review): National Natural Science Foundation of China (61927802, 61722209, and 61805145); Beijing Municipal Science and Technology Commission (Z181100003118014); National Key Research and Development Program of China (2020AAA0130000); National Postdoctoral Program for Innovative Talents; Shuimu Tsinghua Scholar Program; Hong Kong Research Grants Council (16306220).
Funding (coded exposure photography): Shuimu Tsinghua Scholar Program; National Natural Science Foundation of China (62125106, 61860206003, 62088102, and 61931012); Shenzhen Science and Technology Research and Development Funds (JCYJ20180507183706645); Ministry of Science and Technology of China (2021ZD0109901, 2020AAA0108202); Beijing National Research Center for Information Science and Technology (BNR2020RC01002); China Postdoctoral Science Foundation (2020TQ0172, 2020M670338, and YJ20200109); Postdoctoral International Exchange Program (YJ20210124).
Funding (L2ONN): Natural Science Foundation of China (NSFC) (62205176, 62125106, 61860206003, 62088102, and 62271283); Ministry of Science and Technology of China (2021ZD0109901); China Postdoctoral Science Foundation (2022M721889).
Funding (SFE-Net): National Natural Science Foundation of China (31970659, 32125024); National Key Research and Development Program of China (2021YFA1300303); Chinese Academy of Sciences (YSBR-076, ZDBS-LY-SM004); China Postdoctoral Science Foundation (2022M721842, 2023T160365); Tsinghua University (2022SM035); New Cornerstone Science Foundation.
Funding (Monet): Ministry of Science and Technology of China (2021ZD0109901); Natural Science Foundation of China (NSFC) (62125106, 61860206003, and 62088102); Beijing National Research Center for Information Science and Technology (BNRist) (BNR2020RC01002); Young Elite Scientists Sponsorship Program by CAST (2021QNRC001); Shuimu Tsinghua Scholar Program; China Postdoctoral Science Foundation (2022M711874); Postdoctoral International Exchange Program (YJ20210124).
基金We would like to acknowledge Weigert et al.for making their source code and data related to image restoration openly available to the comm unity.We thank the Rubin Lab at Harvard,the Finkbeiner Lab at Gladstone,and Google Accelerated Science for releasing their datasets on virtual cell staining.We thank Jingjing Wang,affiliated with the apparatus sharing platform of Tsinghua University,for assistance with the imaging of histopathology slides.This work was supported by the National Natural Science Foundation of China(62088102,61831014,62071271,and 62071272)Projects of MOST(2020AA0105500 and 2020AAA0130000)+1 种基金Shenzhen Science and Technology Projects(ZDYBH201900000002 and JCYJ20180508152042002)the National Postdoctoral Program for Innovative Talents(BX20190173).
Abstract: The development of deep learning, together with open access to substantial collections of imaging data, provides a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation approach to facilitate the use of deep learning in optical microscopy, even in cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
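The saliency-constraint idea above can be sketched as follows: threshold both the input and the transformed image into foreground masks and penalize mask disagreement, so the content layout is preserved across domains. The threshold choice and loss form here are assumptions for illustration, not UTOM's exact recipe.

```python
import numpy as np

# Hypothetical saliency constraint: content is preserved if the
# foreground/background layout agrees before and after transformation.

def saliency_mask(img, thresh=0.5):
    """Binary foreground mask from a simple intensity threshold."""
    return (img > thresh).astype(np.float32)

def saliency_constraint_loss(src, out, thresh=0.5):
    """Mean absolute disagreement between the two saliency masks."""
    return float(np.mean(np.abs(saliency_mask(src, thresh)
                                - saliency_mask(out, thresh))))

src = np.array([[0.9, 0.1], [0.8, 0.2]])
good = np.array([[0.7, 0.0], [1.0, 0.3]])   # same foreground layout
bad = np.array([[0.1, 0.9], [0.2, 0.8]])    # foreground flipped
print(saliency_constraint_loss(src, good),   # → 0.0 (content preserved)
      saliency_constraint_loss(src, bad))    # → 1.0 (content distorted)
```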
Funding: This work was supported by the Ministry of Science and Technology (2022YFC3401100), the National Natural Science Foundation of China (62025501, 31971376, and 92150301), and the fellowship of the China Postdoctoral Science Foundation (2021M700243).
Abstract: Structured illumination microscopy (SIM) has become the standard for next-generation wide-field microscopy, offering ultrahigh imaging speed, superresolution, a large field of view, and long-term imaging. Over the past decade, SIM hardware and software have flourished, leading to successful applications in various biological questions. However, unlocking the full potential of SIM system hardware requires the development of advanced reconstruction algorithms. Here, we introduce the basic theory of two classes of SIM algorithms, namely optical sectioning SIM (OS-SIM) and superresolution SIM (SR-SIM), and summarize their implementation modalities. We then provide a brief overview of existing OS-SIM processing algorithms and review the development of SR-SIM reconstruction algorithms, focusing primarily on 2D-SIM, 3D-SIM, and blind-SIM. To showcase the state-of-the-art development of SIM systems and assist users in selecting a commercial SIM system for a specific application, we compare the features of representative off-the-shelf SIM systems. Finally, we provide perspectives on potential future developments of SIM.
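As a concrete instance of the OS-SIM reconstruction mentioned above, the classic three-phase root-mean-square formula (Neil et al.) recovers the in-focus component from three images taken with the illumination grid shifted by 0, 2π/3, and 4π/3, cancelling the unmodulated out-of-focus background. The single-pixel example below is a minimal numerical sketch, not a full reconstruction pipeline.

```python
import numpy as np

def os_sim(i1, i2, i3):
    """Three-phase OS-SIM sectioning: sqrt((I1-I2)^2 + (I2-I3)^2 + (I3-I1)^2)."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

b, m = 5.0, 2.0  # out-of-focus background, in-focus modulation amplitude
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
i1, i2, i3 = (b + m * np.cos(p) for p in phases)
sectioned = os_sim(i1, i2, i3)
# The background b cancels; the result is proportional to m alone
# (here m * 3 / sqrt(2) = sqrt(18)).
print(round(float(sectioned), 6))  # → 4.242641
```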
Funding: Beijing Municipal Science and Technology Commission (No. Z181100003118014), National Natural Science Foundation of China (No. 61722209), and the Tsinghua University Initiative Scientific Research Program.
Abstract: Training an artificial neural network with backpropagation algorithms to perform advanced machine learning tasks requires an extensive computational process. This paper proposes to implement the backpropagation algorithm optically for in situ training of both linear and nonlinear diffractive optical neural networks, which enables acceleration of training speed and improvement in energy efficiency on core computing modules. We demonstrate that the gradient of a loss function with respect to the weights of the diffractive layers can be accurately calculated by measuring the forward- and backward-propagated optical fields, based on light reciprocity and phase conjugation principles. The diffractive modulation weights are updated by programming a high-speed spatial light modulator to minimize the error between the prediction and the target output, and inference tasks are performed at the speed of light. We numerically validate the effectiveness of our approach on simulated networks for various applications. The proposed in situ optical learning architecture achieves accuracy comparable to in silico training with an electronic computer on the tasks of object classification and matrix-vector multiplication, and further allows the diffractive optical neural network to adapt to system imperfections. The self-adaptive property of our approach also facilitates a novel application of the network: all-optical imaging through scattering media. The proposed approach paves the way for robust implementation of large-scale diffractive neural networks that perform distinctive tasks all-optically.
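The forward/backward-field gradient idea can be illustrated with a toy adjoint-method model (an assumption for this sketch, not the paper's optics): a single phase layer u = x·exp(iφ) followed by a fixed propagation matrix P, with a loss on the output intensity |Pu|². The gradient with respect to φ comes from combining the forward field u with a backward-propagated adjoint field, and a finite-difference check confirms it is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
P = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(n)
x = rng.normal(size=n) + 1j * rng.normal(size=n)  # input field
T = rng.random(n)                                 # target intensity
phi = rng.random(n)                               # phase-layer weights

def loss(phi):
    y = P @ (x * np.exp(1j * phi))
    return float(np.sum((np.abs(y) ** 2 - T) ** 2))

def adjoint_grad(phi):
    u = x * np.exp(1j * phi)                # field right after the phase layer
    y = P @ u                               # forward-propagated output field
    err = np.abs(y) ** 2 - T
    g = P.T @ (2.0 * err * np.conj(y))      # backward-propagated adjoint field
    return -2.0 * np.imag(u * g)            # dL/dphi from the two fields

# Central finite differences to verify the adjoint gradient.
eps = 1e-6
fd = np.array([(loss(phi + eps * np.eye(n)[k]) - loss(phi - eps * np.eye(n)[k]))
               / (2 * eps) for k in range(n)])
print(np.max(np.abs(adjoint_grad(phi) - fd)) < 1e-4)  # → True
```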
Funding: National Natural Science Foundation of China (62071219, 62025108) and the Natural Science Foundation of Jiangsu Province (BK20190292).
Abstract: Micro-endoscopes are widely used for detecting and visualizing hard-to-reach areas of the human body and for in vivo observation of animals. A micro-endoscope that can realize 3D imaging at the camera frame rate could benefit various clinical and biological applications. In this work, we report the development of a compact light-field micro-endoscope (LFME) that can obtain snapshot 3D fluorescence images by jointly using a single-mode fiber bundle and a small-size light-field configuration. To demonstrate the real imaging performance of our method, we placed a resolution chart at different z positions and captured the z-stack images successively for reconstruction, achieving a 333-μm-diameter field of view, a 24-μm optimal depth of field, and up to 3.91-μm spatial resolution near the focal plane. We also tested our method on a human skin tissue section and HeLa cells. Our LFME prototype provides epifluorescence imaging with a relatively small (2-mm-diameter) imaging probe, making it suitable for in vivo detection of brain activity and gastrointestinal diseases in animals.
Funding: Ministry of Science and Technology of the People's Republic of China (2020AAA0108202) and the National Natural Science Foundation of China (62088102, 61931012).
Abstract: High-resolution images are widely used in our everyday life; however, high-speed video capture is more challenging due to the low frame rate of cameras working in high-resolution mode. The main bottleneck lies in the low throughput of existing imaging systems. Toward this end, snapshot compressive imaging (SCI) was proposed as a promising solution to improve the throughput of imaging systems via compressive sampling and computational reconstruction. During acquisition, multiple high-speed images are encoded and collapsed into a single measurement. Algorithms are then employed to retrieve the video frames from the coded snapshot. Recently developed plug-and-play algorithms have made SCI reconstruction possible for large-scale problems. However, the lack of high-resolution encoding systems still precludes SCI's wide application. Thus, in this paper, we build, to the best of our knowledge, a novel hybrid coded aperture snapshot compressive imaging (HCA-SCI) system by incorporating a dynamic liquid-crystal-on-silicon device and a high-resolution lithography mask. We further implement a PnP reconstruction algorithm with cascaded denoisers for high-quality reconstruction. Based on the proposed HCA-SCI system and algorithm, we obtain a 10-megapixel SCI system for capturing high-speed scenes, leading to a high throughput of 4.6×10^9 voxels per second. Both simulation and real-data experiments verify the feasibility and performance of our proposed HCA-SCI scheme.
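The encode-and-collapse acquisition described above follows the generic SCI forward model y = Σ_t m_t ⊙ x_t: each high-speed frame is multiplied by its own binary coding mask and the products are summed into a single 2-D snapshot. The sketch below shows that forward model only (the sizes and random masks are illustrative, not the HCA-SCI hardware patterns).

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 8, 4, 4
frames = rng.random((T, H, W))            # T high-speed video frames x_t
masks = rng.integers(0, 2, (T, H, W))     # per-frame binary coding masks m_t

# One coded measurement holds all T frames: y = sum_t m_t * x_t.
snapshot = np.sum(masks * frames, axis=0)
print(snapshot.shape)  # → (4, 4): T frames compressed into one 2-D snapshot
```

Reconstruction then inverts this many-to-one mapping, e.g. with the plug-and-play algorithms the abstract mentions.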
Funding: Supported by the National Natural Science Funds of China (Nos. 62088102 and 62021002) and the Beijing Natural Science Foundation, China (No. 4222025).
Abstract: Seeing through dense occlusions and reconstructing the occluded scene is an important but challenging task. Traditional frame-based image de-occlusion methods may fail when facing extremely dense occlusions, owing to the lack of valid information available in the limited occluded input frames. Event cameras are bio-inspired vision sensors that asynchronously record brightness changes at each pixel with high temporal resolution. However, synthesizing images solely from event streams is ill-posed, since only the brightness changes are recorded and the initial brightness is unknown. In this paper, we propose an event-enhanced multi-modal fusion hybrid network for image de-occlusion, which uses event streams to provide complete scene information and frames to provide color and texture information. An event stream encoder based on a spiking neural network (SNN) is proposed to encode and denoise the event stream efficiently. A comparison loss is proposed to generate clearer results. Experimental results on a large-scale event-based and frame-based image de-occlusion dataset demonstrate that our proposed method achieves state-of-the-art performance.
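Before an asynchronous event stream can be fed to a network encoder like the one above, it is commonly discretized into temporal bins. The sketch below shows one standard binning scheme (a voxel grid of signed polarities); the bin count, normalization, and event format are assumptions for illustration, not the paper's SNN encoder.

```python
import numpy as np

def events_to_voxel_grid(events, bins, h, w):
    """Accumulate events (rows of x, y, t, p with p in {-1, +1}) into
    `bins` temporal slices of an (bins, h, w) grid."""
    grid = np.zeros((bins, h, w))
    t = events[:, 2]
    span = t.max() - t.min() + 1e-9
    # Normalize timestamps into [0, bins) and accumulate signed polarity.
    idx = np.minimum(((t - t.min()) / span * bins).astype(int), bins - 1)
    for (x, y, _, p), b in zip(events, idx):
        grid[b, int(y), int(x)] += p
    return grid

events = np.array([[0, 0, 0.00, +1],
                   [1, 1, 0.05, -1],
                   [0, 0, 0.09, +1]])
grid = events_to_voxel_grid(events, bins=2, h=2, w=2)
print(grid[0, 0, 0], grid[1, 1, 1])  # → 1.0 -1.0
```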
Funding: This work was supported by the National Natural Science Foundation of China (grant numbers 61971255 and 82111530212), the Natural Science Foundation of Guangdong Province (grant number 2021B1515020092), the Shenzhen Science and Technology Innovation Commission (grant numbers KCXFZ20200201101050887, RCYX20200714114736146, RCBS20200714114911104, and WDZC20200821141349001), and the Shenzhen Bay Laboratory Fund (grant number SZBL2020090501014).
Abstract: The choice of therapeutic agents remains an unsolved issue in the repair of spinal cord injury. In this work, various agents and configurations were investigated and compared for their performance in promoting nerve regeneration, including bead assemblies and bulk gels of collagen and Matrigel, under acellular and cell-laden conditions, and cerebral organoids (COs) as in vitro pre-organized agents. First, with the Matrigel-based agents and the CO transplantations, the recipient animals gained more axon regeneration and higher Basso, Beattie, and Bresnahan (BBB) scores than with the grafted collagen gels. Second, new nerves infiltrated the transplants more uniformly in the bead assemblies than in the molded chunks. Third, the groups implanted with materials loaded with neural progenitor cells (NPCs) or with COs received more regenerated nerve fibers than their acellular counterparts, suggesting the necessity of transplanting exogenous cells for large trauma (e.g., a 5-mm-long spinal cord transect). In addition, activated microglial cells might benefit neural regeneration in recipient animals after CO transplantation. The benefit of organoid augmentation suggests that in vitro maturation of a microtissue complex before transplantation is necessary, and positions organoids as premium therapeutic agents for nerve regeneration.