In their recently published paper in Opto-Electronic Advances, Pietro Ferraro and his colleagues report on a new high-throughput tomographic phase instrument that precisely quantifies intracellular lipid droplets (LDs)^(1). LDs are lipid storage organelles found in most cell types and play an active role in critical biological processes, including energy metabolism and membrane homeostasis.
In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high-throughput digital scanning microscopes have ushered in the era of digital pathology, which, along with recent advances in machine vision, has opened up new possibilities for computer-aided diagnosis. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional and not directly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as those of their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit staining quality comparable to that of their immunohistochemically stained counterparts in terms of nuclear detail, membrane clearness, and absence of staining artifacts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflows.
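The abstract names a conditional generative adversarial network as the image-to-image engine. A minimal NumPy sketch of the loss structure that pix2pix-style conditional GANs typically combine (a least-squares adversarial term plus a pixel-wise L1 fidelity term) is shown below; the function names and the `l1_weight` value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cgan_generator_loss(disc_fake, gen_out, target, l1_weight=100.0):
    """Generator objective of a pix2pix-style conditional GAN:
    a least-squares adversarial term (push D(fake) toward 1)
    plus an L1 pixel term tying the virtual stain to the chemically
    stained ground truth. l1_weight is an illustrative value."""
    adv = np.mean((disc_fake - 1.0) ** 2)       # fool the discriminator
    pix = np.mean(np.abs(gen_out - target))     # pixel-level fidelity
    return adv + l1_weight * pix

def cgan_discriminator_loss(disc_real, disc_fake):
    """Least-squares discriminator objective: real -> 1, fake -> 0."""
    return 0.5 * (np.mean((disc_real - 1.0) ** 2) + np.mean(disc_fake ** 2))
```

Both losses reach zero exactly when the discriminator is fully fooled on fakes, fully confident on reals, and the generated image matches the target pixel for pixel.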
Image denoising, one of the essential inverse problems, aims to remove noise and artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to the several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250×λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field-of-view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30–40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
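As a point of reference for the digital baseline the abstract contrasts against, here is a minimal NumPy sketch of salt-and-pepper corruption and a classic 3×3 median-filter denoiser; function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def add_salt_pepper(img, p=0.05, rng=None):
    """Corrupt a [0, 1] grayscale image with salt (1) and pepper (0)
    noise, each occurring with probability p."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = img.copy()
    r = rng.random(img.shape)
    out[r < p] = 0.0          # pepper
    out[r > 1.0 - p] = 1.0    # salt
    return out

def median_denoise(img):
    """3x3 median filter: the classic non-iterative digital baseline
    that an all-optical diffractive denoiser aims to replace."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    # nine shifted views of the padded image, one per 3x3 neighbor
    stack = np.stack([pad[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)
```

For sparse impulse noise the median restores almost every pixel, since a 3×3 median only fails where five or more of the nine neighbors are corrupted.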
Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-consuming, and the throughput is limited. The existing von Neumann digital computing paradigm is also less suited for the implementation of highly parallel neural network architectures.^(1)
Large-scale linear operations are the cornerstone of complex computational tasks. Using optical computing to perform linear transformations offers potential advantages in terms of speed, parallelism, and scalability. Previously, the design of successive spatially engineered diffractive surfaces forming an optical network was demonstrated to perform statistical inference and compute an arbitrary complex-valued linear transformation using narrowband illumination. We report the deep-learning-based design of a massively parallel broadband diffractive neural network for all-optically performing a large group of arbitrarily selected, complex-valued linear transformations between an input and an output field of view with Ni and No pixels, respectively. This broadband diffractive processor is composed of Nw wavelength channels, each of which is uniquely assigned to a distinct target transformation; a large set of arbitrarily selected linear transformations can be individually performed through the same diffractive network at different illumination wavelengths, either simultaneously or sequentially (wavelength scanning). We demonstrate that such a broadband diffractive network, regardless of its material dispersion, can successfully approximate Nw unique complex-valued linear transforms with negligible error when the number of diffractive neurons (N) in its design is ≥ 2NwNiNo. We further report that the spectral multiplexing capability can be increased by increasing N; our numerical analyses confirm these conclusions for Nw > 180 and indicate that it can further increase to Nw ~ 2000, depending on the upper bound of the approximation error. Massively parallel, wavelength-multiplexed diffractive networks will be useful for designing high-throughput intelligent machine-vision systems and hyperspectral processors that can perform statistical inference and analyze objects/scenes with unique spectral properties.
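The scaling law quoted in the abstract, N ≥ 2NwNiNo, can be restated as a one-line helper for sizing a design; this is simply the stated bound, not code from the paper.

```python
def min_diffractive_neurons(n_w, n_i, n_o):
    """Lower bound on the number of optimizable diffractive features N
    needed for a broadband diffractive network to approximate n_w
    independent complex-valued linear transforms between an n_i-pixel
    input and an n_o-pixel output FOV: N >= 2 * n_w * n_i * n_o."""
    return 2 * n_w * n_i * n_o

# e.g., 4 wavelength channels between 8x8 input and 8x8 output FOVs
assert min_diffractive_neurons(4, 8 * 8, 8 * 8) == 32768
```

The factor of 2 reflects that each complex-valued matrix entry carries two real degrees of freedom, which phase-only diffractive features must jointly encode.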
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research; it visualizes tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques have created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, have been extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches have also been used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is ≥ ~2N_iN_o, where N_i and N_o refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function (H) of a diffractive network, corresponding to a given, arbitrarily selected linear intensity transformation, can be written as H(m,n; m′,n′) = |h(m,n; m′,n′)|^2, where h is the spatially coherent point spread function of the same diffractive network, and (m,n) and (m′,n′) define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N ≥ ~2N_iN_o. We also report the design of spatially incoherent diffractive networks for linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
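The relation H = |h|^2 means that under spatially incoherent light the system is linear in intensity rather than in field. For a shift-invariant slice of such a system, the time-averaged output intensity is the input intensity convolved with the intensity PSF. A toy NumPy sketch follows; it is deliberately simplified to a shift-invariant circular convolution, whereas the paper's H is spatially varying.

```python
import numpy as np

def incoherent_output(intensity_in, h_coherent):
    """Time-averaged output intensity of a shift-invariant optical
    system under spatially incoherent illumination: the input
    intensity convolved (circularly, via FFT, for brevity) with the
    intensity PSF H = |h|^2, where h is the coherent complex-amplitude
    PSF of the same system."""
    H = np.abs(h_coherent) ** 2  # intensity PSF from the coherent PSF
    return np.real(np.fft.ifft2(np.fft.fft2(intensity_in) * np.fft.fft2(H)))
```

A point source at the origin reproduces the intensity PSF itself, which is the defining property of a PSF.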
Free-space optical information transfer through diffusive media is critical in many applications, such as biomedical devices and optical communication, but remains challenging due to random, unknown perturbations in the optical path. We demonstrate an optical diffractive decoder with electronic encoding to accurately transfer the optical information of interest, corresponding to, e.g., any arbitrary input object or message, through unknown random phase diffusers along the optical path. This hybrid electronic-optical model, trained using supervised learning, comprises a convolutional neural network-based electronic encoder and successive passive diffractive layers that are jointly optimized. After their joint training using deep learning, our hybrid model can transfer optical information through unknown phase diffusers, demonstrating generalization to new random diffusers never seen before. The resulting electronic-encoder and optical-decoder model was experimentally validated using a 3D-printed diffractive network that axially spans <70λ, where λ = 0.75 mm is the illumination wavelength in the terahertz spectrum, carrying the desired optical information through random unknown diffusers. The presented framework can be physically scaled to operate at different parts of the electromagnetic spectrum, without retraining its components, and would offer low-power and compact solutions for optical information transfer in free space through unknown random diffusive media.
Many exciting terahertz imaging applications, such as non-destructive evaluation, biomedical diagnosis, and security screening, have historically been limited in practical usage due to the raster-scanning requirement of imaging systems, which imposes very low imaging speeds. However, recent advancements in terahertz imaging systems have greatly increased the imaging throughput and brought the promising potential of terahertz radiation from research laboratories closer to real-world applications. Here, we review the development of terahertz imaging technologies from both hardware and computational imaging perspectives. We introduce and compare different types of hardware enabling frequency-domain and time-domain imaging using various thermal, photon, and field image sensor arrays. We discuss how different imaging hardware and computational imaging algorithms provide opportunities for capturing time-of-flight, spectroscopic, phase, and intensity image data at high throughputs. Furthermore, the new prospects and challenges for the development of future high-throughput terahertz imaging systems are briefly introduced.
Multispectral imaging has been used for numerous applications in, e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field-of-view. This diffractive multispectral imager performs spatially coherent imaging over a large spectrum and, at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal-plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9, and 16 unique spectral bands within the visible spectrum, based on passive spatially structured diffractive surfaces, with a compact design that axially spans ~72λ_m, where λ_m is the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially repeating virtual spectral filter array with 2×2 = 4 unique bands in the terahertz spectrum. Due to their compact form factor and computation-free, power-efficient, and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and can be used at different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
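In the 2×2 = 4-band demonstration, each spectral band is routed to one pixel of a repeating mosaic on the monochrome sensor. The sensor-side effect can be mimicked digitally with a toy stand-in; the band-to-pixel assignment below is an assumption for illustration, not the paper's actual mapping.

```python
import numpy as np

def virtual_filter_array(channels):
    """Assemble a 2x2 repeating virtual spectral filter array from four
    equally sized spectral images (a Bayer-like mosaic): each sensor
    pixel records exactly one band, mimicking what the diffractive
    imager routes optically. Assumes even image dimensions."""
    c = np.stack(channels)              # shape (4, H, W)
    _, h, w = c.shape
    mosaic = np.empty((h, w))
    mosaic[0::2, 0::2] = c[0, 0::2, 0::2]   # band 0 -> top-left pixels
    mosaic[0::2, 1::2] = c[1, 0::2, 1::2]   # band 1 -> top-right
    mosaic[1::2, 0::2] = c[2, 1::2, 0::2]   # band 2 -> bottom-left
    mosaic[1::2, 1::2] = c[3, 1::2, 1::2]   # band 3 -> bottom-right
    return mosaic
```

Reading the mosaic back out band by band (the inverse slicing) recovers a quarter-resolution image per spectral channel, which is exactly the snapshot trade-off of any filter-array design.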
Classification of an object behind a random and unknown scattering medium poses a challenging task for the computational imaging and machine vision fields. Recent deep learning-based approaches demonstrated the classification of objects using diffuser-distorted patterns collected by an image sensor. These methods demand relatively large-scale computing using deep neural networks running on digital computers. Here, we present an all-optical processor to directly classify unknown objects through unknown, random phase diffusers using broadband illumination detected with a single pixel. A set of transmissive diffractive layers, optimized using deep learning, forms a physical network that all-optically maps the spatial information of an input object behind a random diffuser into the power spectrum of the output light detected through a single pixel at the output plane of the diffractive network. We numerically demonstrated the accuracy of this framework using broadband radiation to classify unknown handwritten digits through random new diffusers never used during the training phase, achieving a blind testing accuracy of 87.74±1.12%. We also experimentally validated our single-pixel broadband diffractive network by classifying the handwritten digits "0" and "1" through a random diffuser using terahertz waves and a 3D-printed diffractive network. This single-pixel all-optical object classification system through random diffusers is based on passive diffractive layers that process broadband input light and can operate at any part of the electromagnetic spectrum by simply scaling the diffractive features proportional to the wavelength range of interest. These results have various potential applications in, e.g., biomedical imaging, security, robotics, and autonomous driving.
Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with Hematoxylin and Eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample preparation-related costs, and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
Recent advances in deep learning have given rise to a new paradigm of holographic image reconstruction and phase recovery techniques with real-time performance. Through data-driven approaches, these emerging techniques have overcome some of the challenges associated with existing holographic image reconstruction methods while also minimizing the hardware requirements of holography. These recent advances open up a myriad of new opportunities for the use of coherent imaging systems in biomedical and engineering research and related applications.
We report a deep learning-enabled, field-portable, and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with pulsed red, green, and blue light-emitting diodes. Operated by a laptop computer, this portable device measures 15.5 cm×15 cm×12.5 cm, weighs 1 kg, and, compared to standard imaging flow cytometers, provides extreme reductions in cost, size, and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of their micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) at six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.
Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes resulting from twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this "bright-field holography" method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven, deep learning-based imaging method bridges the contrast gap between coherent and incoherent imaging and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.
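The wave-propagation framework that holographic reconstruction builds on is commonly implemented with the angular spectrum method: transform the field, apply a free-space transfer function, and transform back. A minimal NumPy version is sketched below; the function name and unit conventions are our own, not the authors' code.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a 2D complex field by distance dz in free space using
    the angular spectrum method. dx is the pixel pitch; dz, wavelength,
    and dx must share the same length units."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)                  # spatial frequencies
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    kernel = np.exp(1j * kz * dz) * (arg > 0)     # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * kernel)
```

Propagating by zero distance returns the input field, and for propagating (non-evanescent) components the transfer function is a pure phase factor, so the total optical power is conserved.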
Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. The diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize handwritten digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating, and testing seven different multi-layer diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light–matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
We demonstrate a handheld on-chip biosensing technology that employs plasmonic microarrays coupled with a lens-free computational imaging system for multiplexed and high-throughput screening of biomolecular interactions, targeting point-of-care applications and resource-limited settings. This lightweight and field-portable biosensing device, weighing 60 g and standing 7.5 cm tall, utilizes a compact optoelectronic sensor array to record the diffraction patterns of plasmonic nanostructures under uniform illumination by a single light-emitting diode tuned to the plasmonic mode of the nanoapertures. Employing a sensitive plasmonic array design combined with lens-free computational imaging, we demonstrate label-free and quantitative detection of biomolecules with a protein layer thickness down to 3 nm. Integrating large-scale plasmonic microarrays, our on-chip imaging platform enables simultaneous detection of protein mono- and bilayers on the same platform over a wide range of biomolecule concentrations. In this handheld device, we also employ an iterative phase retrieval-based image reconstruction method, which offers the ability to digitally image a highly multiplexed array of sensors on the same plasmonic chip, making this approach especially suitable for high-throughput diagnostic applications in field settings.
Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method reconstructs the phase and amplitude images of objects using only one hologram, requiring fewer measurements while also being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood smears, Pap smears, and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time-consuming, labour-intensive, expensive, and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a "digital staining matrix", which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones' silver stain, and Masson's trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross-section, which is currently not feasible with standard histochemical staining methods.
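The per-pixel role of the digital staining matrix, selecting or blending stains at every tissue location, can be illustrated with a toy weighting function; this is a stand-in for the network's conditioning input, not the authors' implementation, and the array shapes are assumptions for illustration.

```python
import numpy as np

def blend_virtual_stains(stain_images, staining_matrix):
    """Blend per-stain renderings according to a user-defined digital
    staining matrix. stain_images: shape (S, H, W, 3), one RGB image
    per stain; staining_matrix: shape (S, H, W), per-pixel weights
    that sum to 1 over the stain axis. Returns a micro-structured
    multi-stain RGB image of shape (H, W, 3)."""
    w = staining_matrix[..., None]          # (S, H, W, 1) for broadcast
    return (w * stain_images).sum(axis=0)   # weighted per-pixel mix
```

With one-hot weights the matrix acts as a hard per-pixel stain selector (e.g., H&E on one tissue region, trichrome on another); fractional weights digitally synthesize blended, non-standard stains, mirroring the abstract's stain-blending use of the matrix.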
文摘In their recently published paper in Opto-Electronic Ad-vances,Pietro Ferraro and his colleagues report on a new high-throughput tomographic phase instrument that precisely quantifies intracellular lipid droplets(LDs)1.LDs are lipid storage organelles found in most cell types and play an active role in critical biological pro-cesses,including energy metabolism,membrane homeo-stasis.
Funding: This study was financially supported by the NSF Biophotonics Program (USA).
Abstract: In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high-throughput digital scanning microscopes have ushered in the era of digital pathology, which, along with recent advances in machine vision, has opened up new possibilities for computer-aided diagnosis. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional and not directly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.
Funding: Supported by the NSF Biophotonics Program and the NIH/National Center for Advancing Translational Science UCLA CTSI Grant UL1TR001881.
Abstract: The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflows.
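The abstract above describes a conditional GAN trained on paired autofluorescence and IHC-stained images. The paper's exact objective is not reproduced here; the sketch below illustrates a common pix2pix-style generator loss for such image-to-image translation (an adversarial term plus an L1 fidelity term with a hypothetical weight `lam`), written in plain NumPy for clarity rather than as the authors' implementation.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between generated and chemically stained images."""
    return np.mean(np.abs(pred - target))

def bce_loss(p, label):
    """Binary cross-entropy on the discriminator's per-patch scores p in (0,1)."""
    eps = 1e-12
    return -np.mean(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

def generator_objective(d_fake, fake_img, real_img, lam=100.0):
    """Adversarial term (fool the discriminator) plus lambda-weighted L1 fidelity."""
    return bce_loss(d_fake, 1.0) + lam * l1_loss(fake_img, real_img)

# toy example: a generator output close to the ground-truth stained image,
# scored patch-wise by a (hypothetical) discriminator
rng = np.random.default_rng(0)
real = rng.random((8, 8))          # stand-in for the IHC-stained target
fake = real + 0.01                 # stand-in for the network's virtual stain
d_fake = np.full((4, 4), 0.9)      # patch-wise discriminator outputs on `fake`
loss = generator_objective(d_fake, fake, real)
```

In pix2pix-style training the L1 term pins the virtual stain to the paired ground truth while the adversarial term sharpens texture; the weight `lam` trades the two off and is a tuning choice, not a value from the paper.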
Funding: Research Group at UCLA acknowledges the support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Image denoising, one of the essential inverse problems, aims to remove noise/artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250×λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field-of-view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30–40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
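For context on what the diffractive denoiser replaces: conventional digital pipelines remove salt-and-pepper noise computationally, e.g., with a median filter. The NumPy sketch below shows the noise model and such a digital baseline; it is illustrative only (the all-optical denoiser in the abstract performs no such computation), and the noise probability and image here are arbitrary choices.

```python
import numpy as np

def add_salt_pepper(img, prob, rng):
    """Corrupt a fraction `prob` of pixels with salt (1.0) or pepper (0.0)."""
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < prob / 2] = 0.0          # pepper pixels
    noisy[mask > 1 - prob / 2] = 1.0      # salt pixels
    return noisy

def median_denoise(img, k=3):
    """Digital baseline: k x k median filter (border pixels left unchanged)."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out

rng = np.random.default_rng(1)
clean = np.full((32, 32), 0.5)             # toy flat intensity image
noisy = add_salt_pepper(clean, 0.05, rng)  # 5% of pixels corrupted
denoised = median_denoise(noisy)
```

The median filter removes isolated salt/pepper outliers in the interior of the image, which is the same class of noise the diffractive processor is reported to scatter away optically.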
Abstract: Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-consuming, and the throughput is limited. The existing von Neumann digital computing paradigm is also less suited for the implementation of highly parallel neural network architectures.^(1)
Funding: US Air Force Office of Scientific Research funding (Grant No. FA9550-21-1-0324).
Abstract: Large-scale linear operations are the cornerstone for performing complex computational tasks. Using optical computing to perform linear transformations offers potential advantages in terms of speed, parallelism, and scalability. Previously, the design of successive spatially engineered diffractive surfaces forming an optical network was demonstrated to perform statistical inference and compute an arbitrary complex-valued linear transformation using narrowband illumination. We report the deep-learning-based design of a massively parallel broadband diffractive neural network for all-optically performing a large group of arbitrarily selected, complex-valued linear transformations between an input and output field of view, with Ni and No pixels, respectively. This broadband diffractive processor is composed of Nw wavelength channels, each of which is uniquely assigned to a distinct target transformation; a large set of arbitrarily selected linear transformations can be individually performed through the same diffractive network at different illumination wavelengths, either simultaneously or sequentially (wavelength scanning). We demonstrate that such a broadband diffractive network, regardless of its material dispersion, can successfully approximate Nw unique complex-valued linear transforms with a negligible error when the number of diffractive neurons (N) in its design is ≥ 2NwNiNo. We further report that the spectral multiplexing capability can be increased by increasing N; our numerical analyses confirm these conclusions for Nw > 180 and indicate that it can further increase to Nw ∼ 2000, depending on the upper bound of the approximation error. Massively parallel, wavelength-multiplexed diffractive networks will be useful for designing high-throughput intelligent machine-vision systems and hyperspectral processors that can perform statistical inference and analyze objects/scenes with unique spectral properties.
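The capacity condition stated above, N ≥ 2NwNiNo, can be evaluated directly. The helper below simply computes this lower bound on the number of diffractive neurons; the example field-of-view sizes are arbitrary illustrations, not values from the paper.

```python
def min_diffractive_neurons(n_wavelengths, n_in, n_out):
    """Lower bound N >= 2 * Nw * Ni * No on the number of diffractive neurons
    needed to approximate Nw independent complex-valued linear transforms
    between an Ni-pixel input and an No-pixel output field of view."""
    return 2 * n_wavelengths * n_in * n_out

# e.g., 4 wavelength channels between an 8x8 input and an 8x8 output FOV
n = min_diffractive_neurons(4, 8 * 8, 8 * 8)
assert n == 32768
```

The bound scales linearly in the number of multiplexed wavelength channels, which is why increasing N raises the spectral multiplexing capability Nw.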
Funding: The Ozcan Research Group at UCLA acknowledges the support of the NSF Biophotonics Program.
Abstract: Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, were extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches were also used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Funding: The Ozcan Research Group at UCLA acknowledges the support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields of view (FOVs) if the total number (N) of optimizable phase-only diffractive features is ≥ ~2NiNo, where Ni and No refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function (H) of a diffractive network, corresponding to a given, arbitrarily selected linear intensity transformation, can be written as H(m,n;m′,n′) = |h(m,n;m′,n′)|^2, where h is the spatially coherent point spread function of the same diffractive network, and (m,n) and (m′,n′) define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N ≥ ~2NiNo. We also report the design of spatially incoherent diffractive networks for linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
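The key relation above, H = |h|^2, means that under spatially incoherent light the system acts as a linear operator on *intensities*. The toy NumPy sketch below (with an arbitrary random h standing in for a trained diffractive network's coherent point spread function) checks this intensity-domain linearity numerically.

```python
import numpy as np

rng = np.random.default_rng(2)
# coherent (complex-valued) point spread function of a toy system, sampled
# for every input pixel: shape (output_pixels, input_pixels)
h = rng.standard_normal((16, 9)) + 1j * rng.standard_normal((16, 9))

# under spatially incoherent illumination the time-averaged intensity response
# is governed by the intensity point spread function H = |h|^2
H = np.abs(h) ** 2

x1 = rng.random(9)   # non-negative input intensity patterns
x2 = rng.random(9)
y = H @ (x1 + x2)    # superposition holds in the intensity domain
assert np.allclose(y, H @ x1 + H @ x2)
```

Because H has only non-negative entries, an incoherent diffractive processor realizes non-negative linear intensity transformations, which is exactly the class of mappings the abstract says such a network can be trained to approximate.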
Funding: Supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award No. DE-SC0023088.
Abstract: Free-space optical information transfer through diffusive media is critical in many applications, such as biomedical devices and optical communication, but remains challenging due to random, unknown perturbations in the optical path. We demonstrate an optical diffractive decoder with electronic encoding to accurately transfer the optical information of interest, corresponding to, e.g., any arbitrary input object or message, through unknown random phase diffusers along the optical path. This hybrid electronic-optical model, trained using supervised learning, comprises a convolutional neural network-based electronic encoder and successive passive diffractive layers that are jointly optimized. After their joint training using deep learning, our hybrid model can transfer optical information through unknown phase diffusers, demonstrating generalization to new random diffusers never seen before. The resulting electronic-encoder and optical-decoder model was experimentally validated using a 3D-printed diffractive network that axially spans <70λ, where λ = 0.75 mm is the illumination wavelength in the terahertz spectrum, carrying the desired optical information through random unknown diffusers. The presented framework can be physically scaled to operate at different parts of the electromagnetic spectrum, without retraining its components, and would offer low-power and compact solutions for optical information transfer in free space through unknown random diffusive media.
Funding: Supported by the Department of Energy (grant # DE-SC0016925).
Abstract: Many exciting terahertz imaging applications, such as non-destructive evaluation, biomedical diagnosis, and security screening, have historically been limited in practical usage due to the raster-scanning requirement of imaging systems, which imposes very low imaging speeds. However, recent advancements in terahertz imaging systems have greatly increased the imaging throughput and brought the promising potential of terahertz radiation from research laboratories closer to real-world applications. Here, we review the development of terahertz imaging technologies from both hardware and computational imaging perspectives. We introduce and compare different types of hardware enabling frequency-domain and time-domain imaging using various thermal, photon, and field image sensor arrays. We discuss how different imaging hardware and computational imaging algorithms provide opportunities for capturing time-of-flight, spectroscopic, phase, and intensity image data at high throughputs. Furthermore, the new prospects and challenges for the development of future high-throughput terahertz imaging systems are briefly introduced.
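The throughput penalty of raster scanning mentioned above is easy to quantify: acquisition time grows with the pixel count times the per-pixel dwell time. The sketch below uses purely illustrative numbers (not figures from the review) to show why single-pixel raster scanning is slow compared to a focal-plane array's single exposure.

```python
def raster_scan_time_s(width_px, height_px, dwell_time_ms):
    """Total acquisition time for a raster-scanned image: one dwell per pixel."""
    return width_px * height_px * dwell_time_ms / 1000.0

# hypothetical example: a 256 x 256 image at 1 ms per pixel takes over a minute,
# whereas an image sensor array captures the whole frame in one exposure
t = raster_scan_time_s(256, 256, 1.0)
assert t > 60.0
```

Doubling the lateral resolution quadruples the scan time, which is the scaling that motivates the sensor-array and computational-imaging approaches surveyed in the review.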
Funding: Supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Multispectral imaging has been used for numerous applications in, e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field of view. This diffractive multispectral imager performs spatially coherent imaging over a large spectrum and, at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal-plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9, and 16 unique spectral bands within the visible spectrum, based on passive spatially structured diffractive surfaces, with a compact design that axially spans ~72λm, where λm is the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially repeating virtual spectral filter array with 2×2 = 4 unique bands in the terahertz spectrum. Due to their compact form factor and computation-free, power-efficient, and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and be used at different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
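A spatially repeating 2×2 virtual spectral filter array, as in the experimental demonstration above, means the monochrome sensor frame interleaves four spectral channels on a mosaic, much like a Bayer pattern. The NumPy sketch below shows how such a frame separates into its per-band images (the band labels and intensities here are invented for illustration).

```python
import numpy as np

def split_mosaic_2x2(mosaic):
    """Separate a repeating 2x2 virtual spectral filter array into its four
    spectral channels, each at half the spatial sampling of the sensor."""
    return {
        "band0": mosaic[0::2, 0::2],
        "band1": mosaic[0::2, 1::2],
        "band2": mosaic[1::2, 0::2],
        "band3": mosaic[1::2, 1::2],
    }

# toy sensor frame in which each band was routed a constant intensity
frame = np.zeros((4, 4))
frame[0::2, 0::2] = 0.1
frame[0::2, 1::2] = 0.2
frame[1::2, 0::2] = 0.3
frame[1::2, 1::2] = 0.4
bands = split_mosaic_2x2(frame)
assert np.allclose(bands["band3"], 0.4)
```

The point of the diffractive design is that this spectral routing happens optically, so the readout step above is the *only* computation left: no physical filters and no image recovery algorithm are needed.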
Funding: Supported by the US Office of Naval Research (ONR).
Abstract: Classification of an object behind a random and unknown scattering medium sets a challenging task for computational imaging and machine vision fields. Recent deep learning-based approaches demonstrated the classification of objects using diffuser-distorted patterns collected by an image sensor. These methods demand relatively large-scale computing using deep neural networks running on digital computers. Here, we present an all-optical processor to directly classify unknown objects through unknown, random phase diffusers using broadband illumination detected with a single pixel. A set of transmissive diffractive layers, optimized using deep learning, forms a physical network that all-optically maps the spatial information of an input object behind a random diffuser into the power spectrum of the output light detected through a single pixel at the output plane of the diffractive network. We numerically demonstrated the accuracy of this framework using broadband radiation to classify unknown handwritten digits through random new diffusers, never used during the training phase, and achieved a blind testing accuracy of 87.74 ± 1.12%. We also experimentally validated our single-pixel broadband diffractive network by classifying handwritten digits "0" and "1" through a random diffuser using terahertz waves and a 3D-printed diffractive network. This single-pixel all-optical object classification system through random diffusers is based on passive diffractive layers that process broadband input light and can operate at any part of the electromagnetic spectrum by simply scaling the diffractive features proportional to the wavelength range of interest. These results have various potential applications in, e.g., biomedical imaging, security, robotics, and autonomous driving.
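The readout principle described above, mapping object classes into the power spectrum measured at a single pixel, can be sketched as a winner-take-all comparison of band powers. The band assignment and spectrum below are invented for illustration; the actual class-to-band encoding is learned jointly with the diffractive layers.

```python
import numpy as np

def classify_from_spectrum(power_spectrum, band_slices):
    """Assign the class whose designated spectral band carries the most power,
    mimicking a single-pixel readout of a broadband diffractive classifier."""
    band_power = [power_spectrum[s].sum() for s in band_slices]
    return int(np.argmax(band_power))

# toy 10-bin power spectrum: class 0 -> bins 0..4, class 1 -> bins 5..9
spectrum = np.array([0.1, 0.2, 0.1, 0.1, 0.1, 0.9, 0.8, 0.7, 0.9, 0.8])
predicted = classify_from_spectrum(spectrum, [slice(0, 5), slice(5, 10)])
assert predicted == 1
```

All of the spatial-to-spectral encoding happens in the passive optics; the digital side reduces to summing a few spectral bins and taking an argmax, which is what makes the single-pixel scheme attractive for low-power deployments.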
Funding: The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (ERC, PATHS-UP); the Army Research Office (ARO, W911NF-13-1-0419 and W911NF-13-1-0197); the ARO Life Sciences Division; the National Science Foundation (NSF) CBET Division Biophotonics Program; the NSF Emerging Frontiers in Research and Innovation (EFRI) Award; the NSF INSPIRE Award; the NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program; the National Institutes of Health (NIH, R21EB023115); the Howard Hughes Medical Institute (HHMI); the Vodafone Americas Foundation; the Mary Kay Foundation; and the Steven & Alexandra Cohen Foundation.
Abstract: Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the bright-field microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding bright-field images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the bright-field microscopy images of the same samples stained with hematoxylin and eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general by eliminating the need for histological staining, reducing sample preparation-related costs, and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
Abstract: Recent advances in deep learning have given rise to a new paradigm of holographic image reconstruction and phase recovery techniques with real-time performance. Through data-driven approaches, these emerging techniques have overcome some of the challenges associated with existing holographic image reconstruction methods while also minimizing the hardware requirements of holography. These recent advances open up a myriad of new opportunities for the use of coherent imaging systems in biomedical and engineering research and related applications.
Funding: Funded by the Army Research Office (ARO, W56HZV-16-C-0122). The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (ERC, PATHS-UP); the ARO Life Sciences Division; the National Science Foundation (NSF) CBET Division Biophotonics Program; an NSF Emerging Frontiers in Research and Innovation (EFRI) Award; an NSF INSPIRE Award; the NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program; the National Institutes of Health; the Howard Hughes Medical Institute (HHMI); the Vodafone Americas Foundation; the Mary Kay Foundation; and the Steven & Alexandra Cohen Foundation.
Abstract: We report a deep learning-enabled, field-portable, and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with red, green, and blue light-emitting diodes that are pulsed. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and, compared to standard imaging flow cytometers, provides extreme reductions in cost, size, and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of their micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) at six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.
Funding: The Ozcan Group at UCLA acknowledges the support of the Koç Group, the National Science Foundation (PATHS-UP ERC), and the Howard Hughes Medical Institute. Y.W. also acknowledges the support of the SPIE John Kiel Scholarship.
Abstract: Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes resulting from twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this "bright-field holography" method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven, deep-learning-based imaging method bridges the contrast gap between coherent and incoherent imaging and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.
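The "wave-propagation framework of holography" invoked above typically refers to numerically refocusing a hologram to different depths via free-space propagation. The NumPy sketch below implements the standard angular spectrum method (the sampling interval, wavelength, and distance are illustrative choices, not the paper's parameters).

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex optical field by a distance dz in free space using
    the angular spectrum method (FFT-based transfer-function propagation)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dz) * (arg > 0)  # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# propagating forward then backward by the same distance recovers the field
# (exactly here, since no spatial frequency is evanescent at this sampling)
rng = np.random.default_rng(3)
f0 = np.exp(1j * rng.random((32, 32)))          # unit-amplitude phase object
f1 = angular_spectrum_propagate(f0, 1e-3, 0.5e-6, 1e-6)
f2 = angular_spectrum_propagate(f1, -1e-3, 0.5e-6, 1e-6)
```

This is the operation that lets a single snapshot hologram be refocused to any axial plane in the sample volume, which the GAN then converts to bright-field-like contrast.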
Abstract: Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. The diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize handwritten digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating, and testing seven different multi-layer diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light–matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
Funding: The Altug Research Group acknowledges the National Science Foundation (NSF) CAREER Award; the Presidential Early Career Award for Scientists and Engineers (PECASE) ECCS-0954790; the Office of Naval Research Young Investigator Award 11PR00755-00-P00001; the NSF Engineering Research Center on Smart Lighting EEC-0812056; the Massachusetts Life Sciences Center Young Investigator award; and Ecole Polytechnique Federale de Lausanne. The Ozcan Research Group acknowledges the support of PECASE; the Army Research Office (ARO) Life Sciences Division; an ARO Young Investigator Award; an NSF CAREER Award; an ONR Young Investigator Award; the National Institutes of Health (NIH) Director's New Innovator Award DP2OD006427 from the Office of the Director, NIH; and the NSF EFRI Award.
Abstract: We demonstrate a handheld on-chip biosensing technology that employs plasmonic microarrays coupled with a lens-free computational imaging system toward multiplexed and high-throughput screening of biomolecular interactions for point-of-care applications and resource-limited settings. This lightweight and field-portable biosensing device, weighing 60 g and standing 7.5 cm tall, utilizes a compact optoelectronic sensor array to record the diffraction patterns of plasmonic nanostructures under uniform illumination by a single light-emitting diode tuned to the plasmonic mode of the nanoapertures. Employing a sensitive plasmonic array design combined with lens-free computational imaging, we demonstrate label-free and quantitative detection of biomolecules with a protein layer thickness down to 3 nm. Integrating large-scale plasmonic microarrays, our on-chip imaging platform enables simultaneous detection of protein mono- and bilayers on the same platform over a wide range of biomolecule concentrations. In this handheld device, we also employ an iterative phase retrieval-based image reconstruction method, which offers the ability to digitally image a highly multiplexed array of sensors on the same plasmonic chip, making this approach especially suitable for high-throughput diagnostic applications in field settings.
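The "iterative phase retrieval-based image reconstruction" mentioned above belongs to a well-known family of alternating-projection algorithms. The sketch below implements the classic Gerchberg–Saxton scheme as a representative example; the device's specific algorithm may differ, and the test data here are synthetic.

```python
import numpy as np

def gerchberg_saxton(object_mag, fourier_mag, iters=100, rng=None):
    """Classic iterative phase retrieval: starting from a random phase guess,
    alternately enforce the measured magnitude in the object plane and in the
    Fourier plane, keeping the current phase estimate at each step."""
    if rng is None:
        rng = np.random.default_rng(0)
    phase = rng.uniform(0, 2 * np.pi, object_mag.shape)
    field = object_mag * np.exp(1j * phase)
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = fourier_mag * np.exp(1j * np.angle(F))      # Fourier-plane constraint
        field = np.fft.ifft2(F)
        field = object_mag * np.exp(1j * np.angle(field))  # object-plane constraint
    return field

# synthetic consistency check: build magnitudes from a known complex field
rng = np.random.default_rng(5)
true = rng.random((16, 16)) * np.exp(1j * rng.uniform(0, 2 * np.pi, (16, 16)))
rec = gerchberg_saxton(np.abs(true), np.abs(np.fft.fft2(true)))
```

Each iteration is two FFTs plus two magnitude replacements, so the method maps naturally onto the lens-free diffraction-pattern measurements the device records.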
Abstract: Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework for conducting holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method is fast to compute and reconstructs the phase and amplitude images of objects using only one hologram, requiring fewer measurements in addition to being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood and Pap smears and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
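The twin-image and self-interference artifacts named above come directly from the physics of recording intensity only. The NumPy sketch below (with an arbitrary weak object wave) expands an in-line hologram to show the terms the network must suppress: besides the desired object wave o, the measurement contains its conjugate (the twin image) and a |o|^2 self-interference term.

```python
import numpy as np

# In-line holography: the sensor records intensity, so the measurement mixes
# the object wave o with its conjugate (twin image) and a self-interference term.
rng = np.random.default_rng(4)
o = 0.1 * (rng.random(64) + 1j * rng.random(64))  # weak complex object wave
r = 1.0                                           # unit-amplitude reference wave
hologram = np.abs(r + o) ** 2

# expanding |r + o|^2 = |r|^2 + |o|^2 + r*conj(o) + conj(r)*o term by term:
#   |r|^2        -> constant background
#   |o|^2        -> self-interference
#   r*conj(o)    -> twin image
#   conj(r)*o    -> the desired object wave
expansion = np.abs(r) ** 2 + np.abs(o) ** 2 + r * np.conj(o) + np.conj(r) * o
assert np.allclose(hologram, expansion.real)
```

Classical reconstruction propagates the hologram and then iterates to suppress the twin-image term; the learned approach in the abstract replaces those iterations with a single trained forward pass.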
Abstract: Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time-consuming, labour-intensive, expensive, and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a "digital staining matrix", which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones' silver stain, and Masson's trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross-section, which is currently not feasible with standard histochemical staining methods.
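To make the "digital staining matrix" idea concrete: it is a per-pixel map of stain weights (one-hot in pure regions, fractional when blending stains). In the paper this matrix is an *input to the network*; the NumPy sketch below only illustrates the blending semantics of such a map, using invented stain intensities, not the network itself.

```python
import numpy as np

def apply_staining_matrix(stain_images, staining_matrix):
    """Blend per-stain image outputs with a user-defined per-pixel stain map.
    stain_images: (n_stains, H, W, 3) RGB images, one per stain;
    staining_matrix: (n_stains, H, W) weights summing to 1 at every pixel
    (one-hot for pure regions, fractional for blended 'new' stains)."""
    return np.einsum('shw,shwc->hwc', staining_matrix, stain_images)

# toy example: stain 0 on the left half of the section, stain 1 on the right
h, w = 4, 4
stains = np.stack([np.full((h, w, 3), 0.8),   # stand-in for, e.g., H&E output
                   np.full((h, w, 3), 0.3)])  # stand-in for, e.g., trichrome
m = np.zeros((2, h, w))
m[0, :, :2] = 1.0
m[1, :, 2:] = 1.0
out = apply_staining_matrix(stains, m)
assert np.allclose(out[:, :2], 0.8) and np.allclose(out[:, 2:], 0.3)
```

Setting fractional weights (e.g., 0.5/0.5 at a pixel) is the blending operation the abstract describes for digitally synthesizing new histological stains within one tissue cross-section.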