In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high-throughput digital scanning microscopes have ushered in the era of digital pathology, which, along with recent advances in machine vision, has opened up new possibilities for computer-aided diagnosis. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional and not directly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.
In their recently published paper in Opto-Electronic Advances, Pietro Ferraro and his colleagues report on a new high-throughput tomographic phase instrument that precisely quantifies intracellular lipid droplets (LDs) [1]. LDs are lipid storage organelles found in most cell types and play an active role in critical biological processes, including energy metabolism and membrane homeostasis.
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as those of their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflows.
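The conditional GAN objective behind such virtual staining typically combines an adversarial term with a pixel-wise fidelity term that keeps the generated stain close to the histochemical ground truth. Below is a minimal sketch of one plausible loss formulation in the style of pix2pix; the function names and the L1 weight of 100 are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def generator_loss(disc_fake, fake_img, real_img, l1_weight=100.0):
    # Adversarial term pushes the discriminator's output on generated images
    # toward 1, while the L1 term keeps the virtual stain pixel-wise close
    # to the chemically stained ground-truth image.
    eps = 1e-12
    adv = -np.mean(np.log(disc_fake + eps))
    l1 = np.mean(np.abs(fake_img - real_img))
    return adv + l1_weight * l1

def discriminator_loss(disc_real, disc_fake):
    # Standard binary cross-entropy: real stain pairs -> 1, generated -> 0.
    eps = 1e-12
    return -np.mean(np.log(disc_real + eps)) - np.mean(np.log(1.0 - disc_fake + eps))
```

In practice both networks are convolutional and trained jointly; the sketch only shows how the two loss terms are balanced.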
Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing. Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field, forming two independent imaging channels that perform amplitude-to-amplitude and phase-to-intensity transformations between the input and output planes within a compact optical design, axially spanning ~100 wavelengths. The intensity distributions of the output fields at these two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field, eliminating the need for any digital image reconstruction algorithms. We experimentally validated the efficacy of our complex field diffractive imager designs through 3D-printed prototypes operating in the terahertz spectrum, with the output amplitude and phase channel images closely aligning with our numerical simulations. We envision that this complex field imager will have various applications in security, biomedical imaging, sensing, and materials science, among others.
Image denoising, one of the essential inverse problems, aims to remove noise/artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to the several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250×λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field-of-view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
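For reference, the standard digital remedy for salt-and-pepper noise that such an all-optical denoiser replaces is a median filter. A minimal NumPy sketch, for illustration only and not the paper's benchmark implementation:

```python
import numpy as np

def add_salt_pepper(img, p=0.1, rng=None):
    # Corrupt a fraction p of pixels with extreme values (0 or 1).
    rng = np.random.default_rng(0) if rng is None else rng
    out = img.copy()
    mask = rng.random(img.shape) < p
    out[mask] = rng.integers(0, 2, mask.sum()).astype(img.dtype)
    return out

def median_denoise(img):
    # 3x3 median filter, the classic digital fix for salt-and-pepper noise;
    # edge pixels are handled with reflective padding.
    padded = np.pad(img, 1, mode="reflect")
    stacked = np.stack([padded[i:i + img.shape[0], j:j + img.shape[1]]
                        for i in range(3) for j in range(3)])
    return np.median(stacked, axis=0)
```

The diffractive processor performs the analogous rejection of noise modes passively, during light propagation, with no such per-pixel computation.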
In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, followed by compensating for the resulting defects through the utilization of deep learning models trained on a large amount of ideal, superior, or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recuperate them through the application of deep learning networks, but also to bolster in return other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
Diffractive deep neural networks (D2NNs) are composed of successive transmissive layers optimized using supervised deep learning to all-optically implement various computational tasks between an input and output field-of-view. Here, we present a pyramid-structured diffractive optical network design (which we term P-D2NN), optimized specifically for unidirectional image magnification and demagnification. In this design, the diffractive layers are pyramidally scaled in alignment with the direction of the image magnification or demagnification. This P-D2NN design creates high-fidelity magnified or demagnified images in only one direction, while inhibiting the image formation in the opposite direction, achieving the desired unidirectional imaging operation using a much smaller number of diffractive degrees of freedom within the optical processor volume. Furthermore, the P-D2NN design maintains its unidirectional image magnification/demagnification functionality across a large band of illumination wavelengths despite being trained with a single wavelength. We also designed a wavelength-multiplexed P-D2NN, where a unidirectional magnifier and a unidirectional demagnifier operate simultaneously in opposite directions at two distinct illumination wavelengths. Furthermore, we demonstrate that by cascading multiple unidirectional P-D2NN modules, we can achieve higher magnification factors. The efficacy of the P-D2NN architecture was also validated experimentally using terahertz illumination, successfully matching our numerical simulations. P-D2NN offers a physics-inspired strategy for designing task-specific visual processors.
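The optical forward model underlying a D2NN can be sketched numerically: free-space diffraction between layers (here via the angular spectrum method) alternating with multiplicative phase modulation at each trainable transmissive surface. This is a generic illustration of a diffractive forward pass, with arbitrary units and layer spacings, not the P-D2NN training code:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    # Free-space propagation of a complex field over distance z
    # via the angular spectrum method.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # evanescent modes are dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

def d2nn_forward(field, phase_layers, wavelength, dx, dz):
    # One D2NN forward pass: alternate free-space diffraction with
    # phase-only modulation at each trainable diffractive layer.
    for phi in phase_layers:
        field = angular_spectrum_propagate(field, wavelength, dx, dz)
        field = field * np.exp(1j * phi)
    return angular_spectrum_propagate(field, wavelength, dx, dz)
```

Training then optimizes the phase values of each layer by backpropagating a task loss through this differentiable wave model.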
Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-consuming, and their throughput is limited. The existing von Neumann digital computing paradigm is also less suited for the implementation of highly parallel neural network architectures [1].
Nonlinear encoding of optical information can be achieved using various forms of data representation. Here, we analyze the performance of different nonlinear information encoding strategies that can be employed in diffractive optical processors based on linear materials, and shed light on their utility and performance gaps compared to state-of-the-art digital deep neural networks. For a comprehensive evaluation, we used different datasets to compare the statistical inference performance of simpler-to-implement nonlinear encoding strategies that involve, e.g., phase encoding, against data repetition-based nonlinear encoding strategies. We show that data repetition within a diffractive volume (e.g., through an optical cavity or cascaded introduction of the input data) causes the loss of the universal linear transformation capability of a diffractive optical processor. Therefore, data repetition-based diffractive blocks cannot provide optical analogs to the fully connected or convolutional layers commonly employed in digital neural networks. However, they can still be effectively trained for specific inference tasks and achieve enhanced accuracy, benefiting from the nonlinear encoding of the input information. Our results also reveal that phase encoding of the input information without data repetition provides a simpler nonlinear encoding strategy with statistical inference accuracy comparable to that of data repetition-based diffractive processors. Our analyses and conclusions would be of broad interest for exploring the push-pull relationship between linear material-based diffractive optical systems and nonlinear encoding strategies in visual information processors.
As an optical processor, a diffractive deep neural network (D2NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing, completing its tasks at the speed of light propagation through thin optical layers. With sufficient degrees of freedom, D2NNs can perform arbitrary complex-valued linear transformations using spatially coherent light. Similarly, D2NNs can also perform arbitrary linear intensity transformations with spatially incoherent illumination; however, under spatially incoherent light, these transformations are non-negative, acting on diffraction-limited optical intensity patterns at the input field-of-view. Here, we expand the use of spatially incoherent D2NNs to complex-valued information processing for executing arbitrary complex-valued linear transformations using spatially incoherent light. Through simulations, we show that as the number of optimized diffractive features increases beyond a threshold dictated by the product of the input and output space-bandwidth products, a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and can be used for all-optical image encryption under incoherent illumination. These findings are important for the all-optical processing of information under natural light using various forms of diffractive surface-based optical processors.
Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method is fast to compute and reconstructs the phase and amplitude images of the objects using only one hologram, requiring fewer measurements in addition to being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood and Pap smears and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
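The classical iterative baseline that such a trained network sidesteps is Gerchberg-Saxton-style alternating projection between two measurement planes. Below is a minimal sketch of a generic Fourier-plane variant for illustration; lensfree holographic reconstruction would use free-space propagation between planes rather than a single FFT:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=200):
    # Iterative phase retrieval between two amplitude measurements related
    # by a Fourier transform: alternately enforce the measured amplitude in
    # each plane while keeping the evolving phase estimate.
    rng = np.random.default_rng(0)
    field = source_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, source_amp.shape))
    for _ in range(iterations):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # enforce far-field amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # enforce source amplitude
    return np.angle(field)
```

Because each hologram requires many such iterations, a single feed-forward network pass offers a substantial speed advantage at reconstruction time.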
Recent advances in deep learning have given rise to a new paradigm of holographic image reconstruction and phase recovery techniques with real-time performance. Through data-driven approaches, these emerging techniques have overcome some of the challenges associated with existing holographic image reconstruction methods while also minimizing the hardware requirements of holography. These recent advances open up a myriad of new opportunities for the use of coherent imaging systems in biomedical and engineering research and related applications.
Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the bright-field microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding bright-field images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the bright-field microscopy images of the same samples stained with Hematoxylin and Eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general by eliminating the need for histological staining, reducing sample preparation-related costs, and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. The diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize handwritten digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating, and testing seven different multi-layer diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tunable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep learning-based design strategy, broadband diffractive neural networks help us engineer light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
Wide field-of-view (FOV) and high-resolution imaging requires microscopy modalities with large space-bandwidth products. Lensfree on-chip microscopy decouples resolution from FOV and can achieve a space-bandwidth product greater than one billion under unit magnification using state-of-the-art optoelectronic sensor chips and pixel super-resolution techniques. However, using vertical illumination, the effective numerical aperture (NA) that can be achieved with an on-chip microscope is limited by a poor signal-to-noise ratio (SNR) at high spatial frequencies and by imaging artifacts that arise as a result of the relatively narrow acceptance angles of the sensor's pixels. Here, we report, for the first time, a synthetic aperture-based on-chip microscope in which the illumination angle is scanned across the surface of a dome to increase the effective NA of the reconstructed lensfree image to 1.4, achieving, e.g., 250-nm resolution at a 700-nm wavelength under unit magnification. This synthetic aperture approach not only represents the largest NA achieved to date using an on-chip microscope but also enables color imaging of connected tissue samples, such as pathology slides, by achieving robust phase recovery without the need for multi-height scanning or any prior information about the sample. To validate the effectiveness of this synthetic aperture-based, partially coherent, holographic on-chip microscope, we have successfully imaged color-stained cancer tissue slides as well as unstained Papanicolaou smears across a very large FOV of 20.5 mm². This compact on-chip microscope based on a synthetic aperture approach could be useful for various applications in medicine, physical sciences, and engineering that demand high-resolution wide-field imaging.
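The quoted numbers are self-consistent under common conventions: an effective NA of 1.4 at a 700-nm wavelength gives a half-pitch resolution limit λ/(2·NA) of 250 nm, and sampling the 20.5 mm² FOV at the corresponding Nyquist pixel size yields a space-bandwidth product above one billion. The conventions assumed here (half-pitch resolution, two samples per resolvable half-pitch) are our own back-of-the-envelope choices, not stated in the abstract:

```python
wavelength_um = 0.7   # 700-nm illumination
na_eff = 1.4          # synthetic-aperture effective NA
fov_mm2 = 20.5        # reconstructed field of view

half_pitch_um = wavelength_um / (2 * na_eff)   # resolution limit ~ lambda / (2 NA)
nyquist_pixel_um = half_pitch_um / 2           # two samples per resolvable half-pitch
sbp = fov_mm2 * 1e6 / nyquist_pixel_um**2      # effective pixel count across the FOV

print(round(half_pitch_um * 1000))  # -> 250 (nm), matching the reported resolution
print(sbp > 1e9)                    # -> True, consistent with "greater than one billion"
```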
We report a deep learning-enabled, field-portable, and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with pulsed red, green, and blue light-emitting diodes. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and, compared to standard imaging flow cytometers, provides extreme reductions in cost, size, and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples along the Los Angeles coastline and obtaining images of their micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) at six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.
Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes that result from twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this "bright-field holography" method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven, deep learning-based imaging method bridges the contrast gap between coherent and incoherent imaging and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.
We demonstrate a handheld on-chip biosensing technology that employs plasmonic microarrays coupled with a lens-free computational imaging system for multiplexed and high-throughput screening of biomolecular interactions, targeting point-of-care applications and resource-limited settings. This lightweight and field-portable biosensing device, weighing 60 g and standing 7.5 cm tall, utilizes a compact optoelectronic sensor array to record the diffraction patterns of plasmonic nanostructures under uniform illumination by a single light-emitting diode tuned to the plasmonic mode of the nanoapertures. Employing a sensitive plasmonic array design combined with lens-free computational imaging, we demonstrate label-free and quantitative detection of biomolecules with a protein layer thickness down to 3 nm. Integrating large-scale plasmonic microarrays, our on-chip imaging platform enables the simultaneous detection of protein mono- and bilayers on the same platform over a wide range of biomolecule concentrations. In this handheld device, we also employ an iterative phase retrieval-based image reconstruction method, which offers the ability to digitally image a highly multiplexed array of sensors on the same plasmonic chip, making this approach especially suitable for high-throughput diagnostic applications in field settings.
The precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics. These advances related to the engineering of materials with new functionalities have also opened up exciting avenues for designing trainable surfaces that can perform computation and machine learning tasks through light-matter interactions and diffraction. Here, we analyze the information-processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field-of-view. We show that the dimensionality of the all-optical solution space covering the complex-valued transformations between the input and output fields-of-view is linearly proportional to the number of diffractive surfaces within the optical network, up to a limit dictated by the extent of the input and output fields-of-view. Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher-dimensional subspace of the complex-valued linear transformations between a larger input field-of-view and a larger output field-of-view, and they exhibit depth advantages in terms of their statistical inference, learning, and generalization capabilities for different image classification tasks when compared with a single trainable diffractive surface. These analyses and conclusions are broadly applicable to various forms of diffractive surfaces, including, e.g., plasmonic and/or dielectric-based metasurfaces and flat optics, which can be used to form all-optical processors.
A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency, and computation speed. Diffractive deep neural networks (D2NNs) form such an optical computing framework that benefits from the deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D2NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping, and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D2NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D2NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N=14 and N=30 D2NNs achieve blind testing accuracies of 61.14±0.23% and 62.13±0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D2NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems.
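The ensemble-pruning idea can be illustrated with a greedy forward-selection sketch: starting from an empty ensemble, repeatedly add the candidate model that most improves the validation accuracy of the averaged class probabilities, and stop when no candidate helps. This is a generic illustration of one plausible pruning strategy, not the paper's exact algorithm:

```python
import numpy as np

def greedy_ensemble(member_probs, labels, max_size):
    # member_probs: list of (num_samples, num_classes) class-probability arrays.
    # Greedily grow the ensemble, at each step adding the member whose
    # inclusion most improves the probability-averaged prediction accuracy.
    selected, best_acc = [], -1.0
    remaining = list(range(len(member_probs)))
    while remaining and len(selected) < max_size:
        scored = []
        for idx in remaining:
            avg = np.mean([member_probs[i] for i in selected + [idx]], axis=0)
            acc = float(np.mean(np.argmax(avg, axis=1) == labels))
            scored.append((acc, idx))
        acc, idx = max(scored)
        if selected and acc <= best_acc:
            break  # no remaining member improves the ensemble further
        best_acc = acc
        selected.append(idx)
        remaining.remove(idx)
    return selected, best_acc
```

By construction, the first pick is the best individual model, so the returned ensemble accuracy is never below the best single-model accuracy on the validation set.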
Funding: This study was financially supported by the NSF Biophotonics Program (USA).
Abstract: In their recently published paper in Opto-Electronic Advances, Pietro Ferraro and his colleagues report on a new high-throughput tomographic phase instrument that precisely quantifies intracellular lipid droplets (LDs)^(1). LDs are lipid storage organelles found in most cell types and play an active role in critical biological processes, including energy metabolism and membrane homeostasis.
Funding: Support of the NSF Biophotonics Program and the NIH/National Center for Advancing Translational Science UCLA CTSI Grant UL1TR001881.
Abstract: The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality in terms of nuclear detail, membrane clarity, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflows.
Funding: The Ozcan Research Group at UCLA acknowledges the support of ONR (Grant #N00014-22-1-2016).
Abstract: Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing. Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field, forming two independent imaging channels that perform amplitude-to-amplitude and phase-to-intensity transformations between the input and output planes within a compact optical design, axially spanning ~100 wavelengths. The intensity distributions of the output fields at these two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field, eliminating the need for any digital image reconstruction algorithms. We experimentally validated the efficacy of our complex field diffractive imager designs through 3D-printed prototypes operating at the terahertz spectrum, with the output amplitude and phase channel images closely aligning with our numerical simulations. We envision that this complex field imager will have various applications in security, biomedical imaging, sensing and material science, among others.
Funding: The Ozcan Research Group at UCLA acknowledges the support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Image denoising, one of the essential inverse problems, aims to remove noise/artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250×λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field of view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30–40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating at the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
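For context on the digital algorithms being compared against, a classic non-iterative remedy for salt-and-pepper noise is the median filter; a minimal pure-Python sketch (a conventional digital baseline, not the diffractive processor itself):

```python
def median_filter(img, k=3):
    """Simple 2D median filter over a k-by-k window.

    img is a list of lists of gray values; edge pixels use the available
    (truncated) neighborhood rather than padding.
    """
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Collect the in-bounds neighborhood and take its median.
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

Isolated salt or pepper pixels are outliers within their neighborhood, so the median suppresses them while preserving edges better than simple averaging.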
Abstract: In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, followed by compensating for the resulting defects through the utilization of deep learning models trained on a large amount of ideal, superior or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recuperate them through the application of deep learning networks, but also to bolster in return other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
Funding: Support of ONR (Grant #N00014-22-1-2016). The Jarrahi Research Group at UCLA acknowledges the support of NSF (Grant #2141223).
Abstract: Diffractive deep neural networks (D2NNs) are composed of successive transmissive layers optimized using supervised deep learning to all-optically implement various computational tasks between an input and output field of view. Here, we present a pyramid-structured diffractive optical network design (which we term P-D2NN), optimized specifically for unidirectional image magnification and demagnification. In this design, the diffractive layers are pyramidally scaled in alignment with the direction of the image magnification or demagnification. This P-D2NN design creates high-fidelity magnified or demagnified images in only one direction, while inhibiting the image formation in the opposite direction, achieving the desired unidirectional imaging operation using a much smaller number of diffractive degrees of freedom within the optical processor volume. Furthermore, the P-D2NN design maintains its unidirectional image magnification/demagnification functionality across a large band of illumination wavelengths despite being trained with a single wavelength. We also designed a wavelength-multiplexed P-D2NN, where a unidirectional magnifier and a unidirectional demagnifier operate simultaneously in opposite directions, at two distinct illumination wavelengths. Furthermore, we demonstrate that by cascading multiple unidirectional P-D2NN modules, we can achieve higher magnification factors. The efficacy of the P-D2NN architecture was also validated experimentally using terahertz illumination, successfully matching our numerical simulations. P-D2NN offers a physics-inspired strategy for designing task-specific visual processors.
Abstract: Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-consuming, and the throughput is limited. The existing von Neumann digital computing paradigm is also less suited for the implementation of highly parallel neural network architectures.^(1)
Funding: Supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Nonlinear encoding of optical information can be achieved using various forms of data representation. Here, we analyze the performance of different nonlinear information encoding strategies that can be employed in diffractive optical processors based on linear materials and shed light on their utility and performance gaps compared to the state-of-the-art digital deep neural networks. For a comprehensive evaluation, we used different datasets to compare the statistical inference performance of simpler-to-implement nonlinear encoding strategies that involve, e.g., phase encoding, against data repetition-based nonlinear encoding strategies. We show that data repetition within a diffractive volume (e.g., through an optical cavity or cascaded introduction of the input data) causes the loss of the universal linear transformation capability of a diffractive optical processor. Therefore, data repetition-based diffractive blocks cannot provide optical analogs to fully connected or convolutional layers commonly employed in digital neural networks. However, they can still be effectively trained for specific inference tasks and achieve enhanced accuracy, benefiting from the nonlinear encoding of the input information. Our results also reveal that phase encoding of input information without data repetition provides a simpler nonlinear encoding strategy with comparable statistical inference accuracy to data repetition-based diffractive processors. Our analyses and conclusions are of broad interest for exploring the push-pull relationship between linear material-based diffractive optical systems and nonlinear encoding strategies in visual information processors.
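The phase-encoding strategy mentioned above is nonlinear in the input data even though the diffractive processor itself acts linearly on the optical field; a small sketch (with an assumed, illustrative normalization of the input to [0, 1]) makes this explicit:

```python
import cmath

def phase_encode(x):
    """Encode a normalized input value x in [0, 1] as a unit-amplitude phasor."""
    return cmath.exp(1j * cmath.pi * x)

# Linearity in the data would require
#   phase_encode(a + b) == phase_encode(a) + phase_encode(b),
# but the exponential map violates this, so the encoded optical field is a
# nonlinear function of the input even before any (linear) diffraction occurs.
a, b = 0.25, 0.5
nonlinear = abs(phase_encode(a + b) - (phase_encode(a) + phase_encode(b))) > 1e-6
```

The same observation underlies why a purely linear diffractive system can still benefit from nonlinearly encoded inputs.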
Funding: Support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: As an optical processor, a diffractive deep neural network (D2NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing, completing its tasks at the speed of light propagation through thin optical layers. With sufficient degrees of freedom, D2NNs can perform arbitrary complex-valued linear transformations using spatially coherent light. Similarly, D2NNs can also perform arbitrary linear intensity transformations with spatially incoherent illumination; however, under spatially incoherent light, these transformations are nonnegative, acting on diffraction-limited optical intensity patterns at the input field of view. Here, we expand the use of spatially incoherent D2NNs to complex-valued information processing for executing arbitrary complex-valued linear transformations using spatially incoherent light. Through simulations, we show that as the number of optimized diffractive features increases beyond a threshold dictated by the multiplication of the input and output space-bandwidth products, a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and be used for all-optical image encryption using incoherent illumination. These findings are important for the all-optical processing of information under natural light using various forms of diffractive surface-based optical processors.
Abstract: Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method reconstructs the phase and amplitude images of objects from only one hologram, requiring fewer measurements while also being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood and Pap smears and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
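Holographic reconstruction of the kind described above builds on free-space propagation of the recorded field; a minimal 1D angular spectrum sketch (using a naive DFT for brevity; this is the underlying wave-propagation step, not the paper's network-based method) illustrates the idea:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (sufficient for a small demo)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[m] * cmath.exp(sign * 2j * cmath.pi * k * m / n) for m in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def propagate_1d(field, wavelength, dx, z):
    """1D angular spectrum propagation over distance z.

    Transform to the frequency domain, multiply each plane-wave component by
    the free-space transfer function exp(i*kz*z), transform back. Evanescent
    components (fx > 1/wavelength) are dropped. Units are arbitrary but
    consistent (e.g., micrometers).
    """
    n = len(field)
    spectrum = dft(field)
    out_spec = []
    for k, s in enumerate(spectrum):
        fx = (k if k <= n // 2 else k - n) / (n * dx)  # sampled spatial frequency
        arg = 1.0 / wavelength**2 - fx**2
        if arg > 0:
            kz = 2 * cmath.pi * (arg ** 0.5)
            out_spec.append(s * cmath.exp(1j * kz * z))
        else:
            out_spec.append(0.0)
    return dft(out_spec, inverse=True)
```

Back-propagating a recorded hologram with this transfer function produces the twin-image-contaminated field that the trained network then cleans up; propagating forward by z and back by -z recovers the original field for band-limited inputs.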
Abstract: Recent advances in deep learning have given rise to a new paradigm of holographic image reconstruction and phase recovery techniques with real-time performance. Through data-driven approaches, these emerging techniques have overcome some of the challenges associated with existing holographic image reconstruction methods while also minimizing the hardware requirements of holography. These recent advances open up a myriad of new opportunities for the use of coherent imaging systems in biomedical and engineering research and related applications.
Funding: The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (ERC, PATHS-UP), the Army Research Office (ARO, W911NF-13-1-0419 and W911NF-13-1-0197), the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF INSPIRE Award, the NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, the National Institutes of Health (NIH, R21EB023115), the Howard Hughes Medical Institute (HHMI), the Vodafone Americas Foundation, the Mary Kay Foundation, and the Steven & Alexandra Cohen Foundation.
Abstract: Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with Hematoxylin and Eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample preparation-related costs and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
Abstract: Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. Diffractive optical networks are a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize handwritten digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating and testing seven different multi-layer diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light–matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
Funding: The Ozcan Research Group at UCLA gratefully acknowledges the support of the Presidential Early Career Award for Scientists and Engineers (PECASE), the Army Research Office (ARO, W911NF-13-1-0419 and W911NF-13-1-0197), the ARO Life Sciences Division, the ARO Young Investigator Award, the National Science Foundation (NSF) CAREER Award, the NSF CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF EAGER Award, the Office of Naval Research (ONR), the Howard Hughes Medical Institute (HHMI), and the National Institutes of Health (NIH) Director's New Innovator Award DP2OD006427 from the Office of the Director, National Institutes of Health. This work is based on research performed in a laboratory renovated by the National Science Foundation under Grant No. 0963183, an award funded under the American Recovery and Reinvestment Act of 2009 (ARRA).
Abstract: Wide field-of-view (FOV) and high-resolution imaging requires microscopy modalities with large space-bandwidth products. Lensfree on-chip microscopy decouples resolution from FOV and can achieve a space-bandwidth product greater than one billion under unit magnification using state-of-the-art optoelectronic sensor chips and pixel super-resolution techniques. However, using vertical illumination, the effective numerical aperture (NA) that can be achieved with an on-chip microscope is limited by a poor signal-to-noise ratio (SNR) at high spatial frequencies and imaging artifacts that arise as a result of the relatively narrow acceptance angles of the sensor's pixels. Here, we report, for the first time, a synthetic aperture-based on-chip microscope in which the illumination angle is scanned across the surface of a dome to increase the effective NA of the reconstructed lensfree image to 1.4, achieving, e.g., 250-nm resolution at a 700-nm wavelength under unit magnification. This synthetic aperture approach not only represents the largest NA achieved to date using an on-chip microscope but also enables color imaging of connected tissue samples, such as pathology slides, by achieving robust phase recovery without the need for multi-height scanning or any prior information about the sample. To validate the effectiveness of this synthetic aperture-based, partially coherent, holographic on-chip microscope, we have successfully imaged color-stained cancer tissue slides as well as unstained Papanicolaou smears across a very large FOV of 20.5 mm^(2). This compact on-chip microscope based on a synthetic aperture approach could be useful for various applications in medicine, the physical sciences and engineering that demand high-resolution wide-field imaging.
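The reported numbers are consistent with the standard half-pitch resolution estimate res ≈ λ/(2·NA); a quick illustrative check of that arithmetic:

```python
def half_pitch_resolution(wavelength_nm, effective_na):
    """Abbe-type two-point resolution estimate: res = wavelength / (2 * NA)."""
    return wavelength_nm / (2 * effective_na)

# With a 700-nm wavelength and an effective synthetic NA of 1.4,
# the estimate gives the 250-nm resolution quoted in the abstract.
res_nm = half_pitch_resolution(700, 1.4)
```

This is only the textbook diffraction-limit relation; the paper's measured resolution depends on the full reconstruction pipeline.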
Funding: Funded by the Army Research Office (ARO, W56HZV-16-C-0122). The Ozcan Research Group at UCLA also acknowledges the support of the NSF Engineering Research Center (ERC, PATHS-UP), the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, an NSF Emerging Frontiers in Research and Innovation (EFRI) Award, an NSF INSPIRE Award, the NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, the National Institutes of Health, the Howard Hughes Medical Institute (HHMI), the Vodafone Americas Foundation, the Mary Kay Foundation, and the Steven & Alexandra Cohen Foundation.
Abstract: We report a deep learning-enabled, field-portable and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with pulsed red, green, and blue light-emitting diodes. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and, compared to standard imaging flow cytometers, provides extreme reductions in cost, size and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of their micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) at six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.
Funding: The Ozcan Group at UCLA acknowledges the support of the Koç Group, the National Science Foundation (PATHS-UP ERC), and the Howard Hughes Medical Institute. Y.W. also acknowledges the support of the SPIE John Kiel Scholarship.
Abstract: Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes caused by twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this "bright-field holography" method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven, deep learning-based imaging method bridges the contrast gap between coherent and incoherent imaging and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.
Funding: The Altug Research Group acknowledges the National Science Foundation (NSF) CAREER Award, the Presidential Early Career Award for Scientists and Engineers (PECASE) ECCS-0954790, the Office of Naval Research Young Investigator Award 11PR00755-00-P00001, the NSF Engineering Research Center on Smart Lighting EEC-0812056, the Massachusetts Life Sciences Center Young Investigator Award, and Ecole Polytechnique Federale de Lausanne. The Ozcan Research Group acknowledges the support of PECASE, the Army Research Office (ARO) Life Sciences Division, the ARO Young Investigator Award, the NSF CAREER Award, the ONR Young Investigator Award, the National Institutes of Health (NIH) Director's New Innovator Award DP2OD006427 from the Office of the Director, NIH, and the NSF EFRI Award.
Abstract: We demonstrate a handheld on-chip biosensing technology that employs plasmonic microarrays coupled with a lens-free computational imaging system toward multiplexed and high-throughput screening of biomolecular interactions for point-of-care applications and resource-limited settings. This lightweight and field-portable biosensing device, weighing 60 g and standing 7.5 cm tall, utilizes a compact optoelectronic sensor array to record the diffraction patterns of plasmonic nanostructures under uniform illumination by a single light-emitting diode tuned to the plasmonic mode of the nanoapertures. Employing a sensitive plasmonic array design combined with lens-free computational imaging, we demonstrate label-free and quantitative detection of biomolecules with a protein layer thickness down to 3 nm. Integrating large-scale plasmonic microarrays, our on-chip imaging platform enables simultaneous detection of protein mono- and bilayers on the same platform over a wide range of biomolecule concentrations. In this handheld device, we also employ an iterative phase retrieval-based image reconstruction method, which offers the ability to digitally image a highly multiplexed array of sensors on the same plasmonic chip, making this approach especially suitable for high-throughput diagnostic applications in field settings.
Funding: The Ozcan Lab at UCLA acknowledges the support of Fujikura (Japan). O.K. acknowledges the support of the Fulbright Commission of Turkey.
Abstract: The precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics. These advances related to the engineering of materials with new functionalities have also opened up exciting avenues for designing trainable surfaces that can perform computation and machine-learning tasks through light-matter interactions and diffraction. Here, we analyze the information-processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field of view. We show that the dimensionality of the all-optical solution space covering the complex-valued transformations between the input and output fields of view is linearly proportional to the number of diffractive surfaces within the optical network, up to a limit that is dictated by the extent of the input and output fields of view. Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher-dimensional subspace of the complex-valued linear transformations between a larger input field of view and a larger output field of view, and exhibit depth advantages in terms of their statistical inference, learning, and generalization capabilities for different image classification tasks when compared with a single trainable diffractive surface. These analyses and conclusions are broadly applicable to various forms of diffractive surfaces, including, e.g., plasmonic and/or dielectric-based metasurfaces and flat optics, which can be used to form all-optical processors.
Funding: The Ozcan Research Group at UCLA acknowledges the support of Fujikura (Japan).