Funding: Research Group at UCLA acknowledges the support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Image denoising, one of the essential inverse problems, aims to remove noise and artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250×λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field-of-view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz part of the spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
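To make the idea above concrete, the following is a minimal numerical sketch (not the authors' implementation) of how passive, phase-only diffractive layers can be optimized with angular-spectrum propagation so that the intensity arriving inside a cropped output field-of-view matches a clean target while noise energy is steered outside it. The layer count, wavelength, sampling, propagation distances, and the plain mean-squared-error objective are all illustrative assumptions.

```python
import torch
import torch.fft as fft

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field over distance z using the angular-spectrum method."""
    n = field.shape[-1]
    fx = fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * torch.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    mask = (arg > 0).to(kz.dtype)                    # drop evanescent components
    return fft.ifft2(fft.fft2(field) * (mask * torch.exp(1j * kz * z)))

class DiffractiveDenoiser(torch.nn.Module):
    """A stack of learnable, passive phase-only layers separated by free space."""
    def __init__(self, n=64, num_layers=3, wavelength=0.75e-3, dx=0.4e-3, z=20e-3):
        super().__init__()
        self.phases = torch.nn.Parameter(torch.zeros(num_layers, n, n))
        self.wavelength, self.dx, self.z = wavelength, dx, z

    def forward(self, field):                        # field: (n, n), complex
        for phase in self.phases:
            field = angular_spectrum(field, self.wavelength, self.dx, self.z)
            field = field * torch.exp(1j * phase)    # passive phase modulation
        field = angular_spectrum(field, self.wavelength, self.dx, self.z)
        return field.abs() ** 2                      # intensity at the output plane

# Training idea: phase-encode a noisy image as the input field and penalize the
# mismatch between the output intensity inside the FoV and the clean target,
# so that noise-related modes are scattered outside the output FoV.
model = DiffractiveDenoiser()
noisy = torch.rand(64, 64)                           # placeholder noisy image
clean = torch.rand(64, 64)                           # placeholder clean target
out = model(torch.exp(1j * 2 * torch.pi * noisy))    # phase-encoded input
fov = slice(16, 48)                                  # central output field-of-view
loss = torch.nn.functional.mse_loss(out[fov, fov], clean[fov, fov])
loss.backward()
```

The reported ~30-40% output power efficiency suggests that the actual training objective also balances diffraction efficiency and fabrication constraints, which this sketch omits.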
Abstract: Deep learning-based image reconstruction methods have achieved remarkable success in phase recovery and holographic imaging. However, the generalization of their image reconstruction performance to new types of samples never seen by the network remains a challenge. Here we introduce a deep learning framework, termed Fourier Imager Network (FIN), that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting unprecedented success in external generalization. The FIN architecture is based on spatial Fourier transform modules that process the spatial frequencies of its inputs using learnable filters and a global receptive field. Compared with existing convolutional deep neural networks used for hologram reconstruction, FIN exhibits superior generalization to new types of samples while also being much faster in its image inference, completing the hologram reconstruction task in ~0.04 s per 1 mm² of sample area. We experimentally validated the performance of FIN by training it on human lung tissue samples and blindly testing it on human prostate, salivary gland tissue, and Pap smear samples, demonstrating its superior external generalization and image reconstruction speed. Beyond holographic microscopy and quantitative phase imaging, FIN and the underlying neural network architecture might open up various new opportunities for designing broadly generalizable deep learning models in computational imaging and machine vision.
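The abstract does not specify the internal design of the spatial Fourier transform modules; the sketch below is an assumed, FNO-style interpretation in which a block applies learnable complex-valued filters to the 2D spectrum of its input, giving every output pixel a global receptive field.

```python
import torch
import torch.fft as fft

class SpectralFilterBlock(torch.nn.Module):
    """Learnable filtering in the spatial-frequency domain (global receptive field)."""
    def __init__(self, channels, height, width):
        super().__init__()
        # one complex filter per channel over the rFFT grid, stored as real pairs
        self.filt = torch.nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, 2) * 0.02)

    def forward(self, x):                            # x: (B, C, H, W), real-valued
        X = fft.rfft2(x, norm="ortho")               # complex spectrum
        W = torch.view_as_complex(self.filt)         # (C, H, W//2 + 1)
        return fft.irfft2(X * W, s=x.shape[-2:], norm="ortho")

block = SpectralFilterBlock(channels=1, height=256, width=256)
y = block(torch.randn(4, 1, 256, 256))               # -> (4, 1, 256, 256)
```

A full hologram-reconstruction network would stack such blocks with nonlinearities and channel-mixing layers and train on hologram/ground-truth pairs; those details are not given in the abstract and are omitted here.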
Funding: The National Natural Science Foundation of China (Grant No. 51375035) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20121102110021).
Abstract: Bionic jumping robots can cross obstacles by jumping and have good application prospects in unstructured, complex environments. A jumping leg with few degrees of freedom (DOF), which offers simple control and high rigidity, is very important in this research. Based on experimental observation of the leg's physiological structure and the take-off process of the locust, two 1-DOF jumping leg models, a four-bar jumping leg model and a slider-crank jumping leg model, are established, and multi-objective optimization is conducted so that the motion laws of the two 1-DOF jumping leg models approach that of the locust's jumping leg. Jumping performance evaluation indices are then proposed, including mechanical property, body attitude, jumping distance, and environmental effect. According to these evaluation indices, the jumping performances of the two jumping leg models are analyzed and compared, and simulations are conducted for further explanation. The analysis results show that the four-bar jumping leg has a smaller structural size and its motion law is closer to that of the locust hindleg, whereas the slider-crank jumping leg has better mechanical properties, stronger energy storage capacity, and is less affected by rough ground. This study offers a quantitative analysis and comparison of different jumping leg models for a bionic locust-inspired jumping robot and establishes a theoretical basis for future research and engineering applications.
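As a small illustration of one of the two mechanisms above, the sketch below computes the slider displacement (leg extension) of a slider-crank leg as a function of crank angle; the link lengths are placeholders, not the paper's optimized dimensions.

```python
import numpy as np

def slider_crank_extension(theta, r, l):
    """Slider position for crank angle theta (rad), crank length r, rod length l (l > r)."""
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta))**2)

# Sweep the crank through one take-off stroke (hypothetical dimensions in metres)
theta = np.linspace(0.0, np.pi, 100)
x = slider_crank_extension(theta, r=0.02, l=0.06)
stroke = x.max() - x.min()                           # usable extension of the leg
print(f"stroke length: {stroke * 1e3:.1f} mm")
```

A multi-objective optimization of this kind of displacement law, e.g., over the link lengths r and l, is what would bring the model's motion closer to the measured extension profile of the locust hindleg.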
Abstract: Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields, including the physical, medical, and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth of field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth of field of a 63×/1.4 NA objective lens, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrated the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. We also demonstrated wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework and performed 3D image reconstruction of a sample from a few wide-field 2D fluorescence images as input, matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks to microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework that overcomes the limitations of current 3D scanning microscopy tools.
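Recurrent-MZ's exact architecture is not detailed in the abstract; the sketch below only illustrates the general idea of a recurrent convolutional model that sequentially fuses a variable number of 2D input planes into one feature state, which a separate decoder (not shown) could map to a 3D image stack. All names and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU cell used to fuse 2D planes one at a time."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.zr = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
        self.hh = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, state):
        z, r = torch.sigmoid(self.zr(torch.cat([x, state], dim=1))).chunk(2, dim=1)
        h_new = torch.tanh(self.hh(torch.cat([x, r * state], dim=1)))
        return (1 - z) * state + z * h_new

def fuse_planes(planes, cell, hid_ch):
    """planes: list of (B, 1, H, W) wide-field images taken at arbitrary axial positions."""
    b, _, h, w = planes[0].shape
    state = planes[0].new_zeros(b, hid_ch, h, w)
    for p in planes:          # robustness to plane ordering would be learned, not enforced
        state = cell(p, state)
    return state              # fused features for a downstream 3D decoder

cell = ConvGRUCell(in_ch=1, hid_ch=16)
planes = [torch.randn(2, 1, 64, 64) for _ in range(3)]   # three sparse axial scans
features = fuse_planes(planes, cell, hid_ch=16)           # -> (2, 16, 64, 64)
```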