In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, followed by compensating for the resulting defects through deep learning models trained on a large amount of ideal, superior, or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recuperate them through the application of deep learning networks, but also to bolster in return other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
In their recently published paper in Opto-Electronic Advances, Pietro Ferraro and his colleagues report on a new high-throughput tomographic phase instrument that precisely quantifies intracellular lipid droplets (LDs) [1]. LDs are lipid storage organelles found in most cell types and play an active role in critical biological processes, including energy metabolism and membrane homeostasis.
In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high-throughput digital scanning microscopes have ushered in the era of digital pathology that, along with recent advances in machine vision, has opened up new possibilities for computer-aided diagnosis. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional and not directly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.
Introduction: A rapidly increasing need for organ transplantation and a short list of donated organs have led to the development of new materials and technologies for organ manufacturing. Although some simple organs, such as skin and cartilage, have been successfully fabricated and commercialized, it is still difficult to make tissues and organs with high complexity. Engineered tissues have other biomedical applications as well, such as drug screening models and bio-actuators. Three-dimensional (3D) bioprinting has become a great tool for fabricating tissue constructs on demand for transplantation and other biomedical applications.
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate IHC tissue staining in life sciences and biomedical workflows.
The eye is an important organ: it provides vision and is an important component of our facial identity (1,2). There is an old saying, "The eye is the window of the soul"; clear and bright eyes bring esthetic pleasure to people. The human eye is globular and consists of two main parts, the anterior and posterior segments (1,2). Although the posterior part of the eye is comfortably located in the orbit, the eye is delicate because its anterior segment, including the cornea, is exposed to the outside world and is thus subject to wear and tear. To protect and maintain eye function while reducing disruptions from outside and inside the body, the eyes are equipped with defense mechanisms for both the anterior and posterior segments (1,2).
Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing. Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field, forming two independent imaging channels that perform amplitude-to-amplitude and phase-to-intensity transformations between the input and output planes within a compact optical design, axially spanning ~100 wavelengths. The intensity distributions of the output fields at these two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field, eliminating the need for any digital image reconstruction algorithms. We experimentally validated the efficacy of our complex field diffractive imager designs through 3D-printed prototypes operating at the terahertz spectrum, with the output amplitude and phase channel images closely aligning with our numerical simulations. We envision that this complex field imager will have various applications in security, biomedical imaging, sensing, and material science, among others.
Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-consuming, and their throughput is limited. The existing von Neumann digital computing paradigm is also less suited for the implementation of highly parallel neural network architectures [1].
Image denoising, one of the essential inverse problems, aims to remove noise and artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to the several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans < 250λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field of view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30–40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating at the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
Nonlinear encoding of optical information can be achieved using various forms of data representation. Here, we analyze the performance of different nonlinear information encoding strategies that can be employed in diffractive optical processors based on linear materials and shed light on their utility and performance gaps compared to state-of-the-art digital deep neural networks. For a comprehensive evaluation, we used different datasets to compare the statistical inference performance of simpler-to-implement nonlinear encoding strategies that involve, e.g., phase encoding, against data repetition-based nonlinear encoding strategies. We show that data repetition within a diffractive volume (e.g., through an optical cavity or cascaded introduction of the input data) causes the loss of the universal linear transformation capability of a diffractive optical processor. Therefore, data repetition-based diffractive blocks cannot provide optical analogs to the fully connected or convolutional layers commonly employed in digital neural networks. However, they can still be effectively trained for specific inference tasks and achieve enhanced accuracy, benefiting from the nonlinear encoding of the input information. Our results also reveal that phase encoding of input information without data repetition provides a simpler nonlinear encoding strategy with comparable statistical inference accuracy to data repetition-based diffractive processors. Our analyses and conclusions should be of broad interest for exploring the push-pull relationship between linear material-based diffractive optical systems and nonlinear encoding strategies in visual information processors.
Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields of view (FOVs) if the total number (N) of optimizable phase-only diffractive features is ≥ ~2NiNo, where Ni and No refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function (H) of a diffractive network, corresponding to a given, arbitrarily selected linear intensity transformation, can be written as H(m, n; m′, n′) = |h(m, n; m′, n′)|², where h is the spatially coherent point spread function of the same diffractive network, and (m, n) and (m′, n′) define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N ≥ ~2NiNo. We also report the design of spatially incoherent diffractive networks for linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
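The intensity PSF relation above, H(m, n; m′, n′) = |h(m, n; m′, n′)|², implies that under spatially incoherent light the output intensity is a linear function of the input intensity pattern, with H as the transformation matrix: the intensity contributions of independent point sources add without cross terms. A minimal numerical sketch of this relation (a toy 1D system with a random coherent PSF, not the authors' code):

```python
import numpy as np

# Toy coherent point spread function h(m; m') for a 1D system:
# column m' holds the complex output field produced by a point source at m'.
rng = np.random.default_rng(0)
h = rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))

# Spatially incoherent intensity PSF: H(m; m') = |h(m; m')|^2
H = np.abs(h) ** 2

# Under incoherent illumination, the time-averaged output intensity is a
# linear transformation of the input intensity, with H as the matrix.
I_in = np.array([1.0, 0.5, 0.0, 2.0])
I_out = H @ I_in

# Cross-check against the explicit incoherent sum: intensities produced by
# each independent point source simply add (no interference cross terms).
I_check = sum(I_in[k] * np.abs(h[:, k]) ** 2 for k in range(4))
assert np.allclose(I_out, I_check)
```

Note that H is real and non-negative by construction, which is precisely why an incoherent diffractive network implements intensity-to-intensity (rather than complex-valued) linear transformations.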
Quantitative phase imaging (QPI) is a label-free computational imaging technique used in various fields, including biology and medical research. Modern QPI systems typically rely on digital processing using iterative algorithms for phase retrieval and image reconstruction. Here, we report a diffractive optical network trained to convert the phase information of input objects positioned behind random diffusers into intensity variations at the output plane, all-optically performing phase recovery and quantitative imaging of phase objects completely hidden by unknown, random phase diffusers. This QPI diffractive network is composed of successive diffractive layers, axially spanning ~70λ in total, where λ is the illumination wavelength; unlike existing digital image reconstruction and phase retrieval methods, it forms an all-optical processor that does not require external power beyond the illumination beam to complete its QPI reconstruction at the speed of light propagation. This all-optical diffractive processor can provide a low-power, high-frame-rate, and compact alternative for quantitative imaging of phase objects through random, unknown diffusers and can operate at different parts of the electromagnetic spectrum for various applications in biomedical imaging and sensing. The presented QPI diffractive designs can be integrated onto the active area of standard CCD/CMOS-based image sensors to convert an existing optical microscope into a diffractive QPI microscope, performing phase recovery and image reconstruction on a chip through light diffraction within passive structured layers.
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research; it visualizes tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and inaccessible in resource-limited settings. Deep learning techniques have created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, have been extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches have also been used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Many exciting terahertz imaging applications, such as non-destructive evaluation, biomedical diagnosis, and security screening, have historically been limited in practical usage due to the raster-scanning requirement of imaging systems, which imposes very low imaging speeds. However, recent advancements in terahertz imaging systems have greatly increased imaging throughput and brought the promising potential of terahertz radiation from research laboratories closer to real-world applications. Here, we review the development of terahertz imaging technologies from both hardware and computational imaging perspectives. We introduce and compare different types of hardware enabling frequency-domain and time-domain imaging using various thermal, photon, and field image sensor arrays. We discuss how different imaging hardware and computational imaging algorithms provide opportunities for capturing time-of-flight, spectroscopic, phase, and intensity image data at high throughputs. Furthermore, new prospects and challenges for the development of future high-throughput terahertz imaging systems are briefly introduced.
Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method reconstructs the phase and amplitude images of the objects using only one hologram, requiring fewer measurements while also being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood and Pap smears and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
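For context, the conventional step that such a network builds on is the numerical back-propagation of the recorded hologram to the object plane, commonly performed with the angular spectrum method; without phase retrieval, this step leaves the twin-image artifact that the trained network learns to remove. A minimal sketch of angular spectrum propagation (the wavelength, pixel pitch, and propagation distance below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2D field by distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    # Longitudinal wavenumber; evanescent components (arg < 0) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)  # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# A recorded in-line hologram amplitude can be back-propagated (negative z)
# to the object plane; without phase recovery, the result carries the
# twin-image artifact mentioned in the abstract. Placeholder values:
hologram_amplitude = np.ones((64, 64))
obj_plane = angular_spectrum_propagate(hologram_amplitude, 532e-9, 2e-6, -300e-6)
```

Since the transfer function is unitary over the propagating band, forward propagation followed by backward propagation over the same distance recovers a band-limited field, which is a useful self-check for implementations.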
Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with hematoxylin and eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample-preparation-related costs, and saving time. Our results provide a powerful example of the unique opportunities created by data-driven image transformations enabled by deep learning.
We report a deep learning-enabled, field-portable, and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lensfree holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with pulsed red, green, and blue light-emitting diodes. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and, compared to standard imaging flow cytometers, provides extreme reductions in cost, size, and weight while also providing high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of their micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) at six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.
Wide field-of-view (FOV) and high-resolution imaging requires microscopy modalities with large space-bandwidth products. Lensfree on-chip microscopy decouples resolution from FOV and can achieve a space-bandwidth product greater than one billion under unit magnification using state-of-the-art optoelectronic sensor chips and pixel super-resolution techniques. However, using vertical illumination, the effective numerical aperture (NA) that can be achieved with an on-chip microscope is limited by a poor signal-to-noise ratio (SNR) at high spatial frequencies and imaging artifacts that arise as a result of the relatively narrow acceptance angles of the sensor's pixels. Here, we report, for the first time, a synthetic aperture-based on-chip microscope in which the illumination angle is scanned across the surface of a dome to increase the effective NA of the reconstructed lensfree image to 1.4, achieving, e.g., 250-nm resolution at a 700-nm wavelength under unit magnification. This synthetic aperture approach not only represents the largest NA achieved to date using an on-chip microscope but also enables color imaging of connected tissue samples, such as pathology slides, by achieving robust phase recovery without the need for multi-height scanning or any prior information about the sample. To validate the effectiveness of this synthetic aperture-based, partially coherent, holographic on-chip microscope, we have successfully imaged color-stained cancer tissue slides as well as unstained Papanicolaou smears across a very large FOV of 20.5 mm². This compact on-chip microscope based on a synthetic aperture approach could be useful for various applications in medicine, physical sciences, and engineering that demand high-resolution wide-field imaging.
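As a quick sanity check, the reported 250-nm resolution at a 700-nm wavelength is consistent with the Abbe diffraction limit d ≈ λ/(2 NA) for the synthesized NA of 1.4:

```python
wavelength_nm = 700.0
synthetic_na = 1.4

# Abbe diffraction limit for the synthesized aperture: d = lambda / (2 * NA)
resolution_nm = wavelength_nm / (2.0 * synthetic_na)  # ≈ 250 nm, as reported
```

The same relation explains why scanning the illumination angle (synthesizing a larger aperture) directly translates into finer resolution without changing the magnification.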
A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency, and computation speed. Diffractive deep neural networks (D²NNs) form such an optical computing framework that benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D²NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping, and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D²NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D²NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N = 14 and N = 30 D²NNs achieve blind testing accuracies of 61.14 ± 0.23% and 62.13 ± 0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D²NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems.
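The ensembling principle described above, in which the class scores of independently trained models are combined before taking the argmax, can be illustrated with toy numbers (the score vectors below are illustrative placeholders, not the optical output signals of the paper's D²NNs):

```python
import numpy as np

# Toy class-score vectors from three independently trained classifiers for
# one input sample; in the paper these would come from different D2NNs.
scores = np.array([
    [0.30, 0.45, 0.25],   # model 1 favors class 1
    [0.40, 0.35, 0.25],   # model 2 favors class 0
    [0.20, 0.50, 0.30],   # model 3 favors class 1
])

# Soft-vote ensemble: average the scores across models, then take argmax.
ensemble_scores = scores.mean(axis=0)
prediction = int(np.argmax(ensemble_scores))
print(prediction)  # -> 1
```

Averaging before the argmax lets models that individually disagree still reinforce the majority class; the paper's pruning step additionally selects which models to include so that the averaged decision is most accurate.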
Controlling postprandial glucose levels for diabetic patients is critical to achieving the tight glycemic control that decreases the risk of developing long-term micro- and macrovascular complications. Herein, we report a glucose-responsive oral insulin delivery system based on Fc receptor (FcRn)-targeted liposomes with a glucose-sensitive hyaluronic acid (HA) shell for postprandial glycemic regulation. After oral administration, the HA shell can quickly detach in the presence of increasing intestinal glucose concentration due to the competitive binding of glucose with the phenylboronic acid groups conjugated to HA. The exposed Fc groups on the surface of the liposomes then facilitate enhanced intestinal absorption through an FcRn-mediated transport pathway. In vivo studies on chemically induced type 1 diabetic mice show that this oral glucose-responsive delivery approach can effectively reduce postprandial blood glucose excursions. This work is the first demonstration of an oral insulin delivery system directly triggered by increasing postprandial glucose concentrations in the intestine to provide on-demand insulin release with ease of administration.
Abstract: In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, followed by compensating for the resulting defects through the utilization of deep learning models trained on a large amount of ideal, superior, or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recuperate them through the application of deep learning networks, but also to bolster in return other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
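The degrade-then-compensate strategy described above can be illustrated with a toy numpy sketch (all parameters and the synthetic object are illustrative, not taken from the reviewed works): deliberately widen the PSF and lower the SNR of a clean image, then quantify the gap that a trained network would be asked to close.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ideal" measurement: a smooth object on a 64x64 grid.
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
ideal = np.exp(-(X**2 + Y**2) / 0.1)

# Deliberate compromise 1: a wider PSF (cheaper optics), modeled as a 5x5 box blur.
blurred = np.zeros_like(ideal)
for dx in range(-2, 3):
    for dy in range(-2, 3):
        blurred += np.roll(np.roll(ideal, dx, axis=0), dy, axis=1)
blurred /= 25.0

# Deliberate compromise 2: a lower SNR (shorter exposure), modeled as additive noise.
noisy = blurred + rng.normal(0.0, 0.05, blurred.shape)

def snr_db(reference, measured):
    """Signal-to-noise ratio of `measured` against `reference`, in dB."""
    err = measured - reference
    return 10 * np.log10(np.sum(reference**2) / np.sum(err**2))

# The gap between these numbers is what a deep network would be trained to close,
# using pairs of (compromised, ideal) images.
print(f"blur only : {snr_db(ideal, blurred):.1f} dB")
print(f"blur+noise: {snr_db(ideal, noisy):.1f} dB")
```

In the works reviewed here, the restoration step is of course a trained network rather than a formula, but the figure of merit it optimizes is the same kind of fidelity gap measured above.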
Abstract: In their recently published paper in Opto-Electronic Advances, Pietro Ferraro and his colleagues report on a new high-throughput tomographic phase instrument that precisely quantifies intracellular lipid droplets (LDs)^(1). LDs are lipid storage organelles found in most cell types and play an active role in critical biological processes, including energy metabolism and membrane homeostasis.
Funding: This study was financially supported by the NSF Biophotonics Program (USA).
Abstract: In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high-throughput digital scanning microscopes have ushered in the era of digital pathology that, along with recent advances in machine vision, have opened up new possibilities for computer-aided diagnosis. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional and not directly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.
Abstract: Introduction: A rapidly increasing need for organ transplantation and a short supply of donated organs have led to the development of new materials and technologies for organ manufacturing. Although some simple organs, such as skin and cartilage, have been successfully fabricated and commercialized, it is still difficult to make tissues and organs with high complexity. Engineered tissues have other biomedical applications as well, such as drug-screening models and bio-actuators. Three-dimensional (3D) bioprinting has become a great tool for fabricating tissue constructs on demand for transplantation and other biomedical applications.
Funding: Supported by the NSF Biophotonics Program and the NIH/National Center for Advancing Translational Science UCLA CTSI Grant UL1TR001881.
Abstract: The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and the investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality in terms of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate IHC tissue staining in life sciences and biomedical workflows.
Funding: This work was primarily supported by the National Institute of Environmental Health Sciences (NIEHS) R01 ES016746 and ES022698 as well as U01 ES027237, with leveraged support from the National Science Foundation (NSF) and the Environmental Protection Agency (EPA) under Cooperative Agreement Numbers DBI 0830117 and 1266377. Additional support is from the "Hundred Talents Program" of the Chinese Academy of Sciences and the National Natural Science Foundation of China (31570899).
Abstract: The eye is an important organ: it provides vision and is an important component of our facial identity (1,2). There is an old saying that "the eye is the window of the soul"; clear and bright eyes bring esthetic pleasure to people. The human eye is globular and consists of two main parts, the anterior and posterior segments (1,2). Although the posterior part of the eye is comfortably located in the orbit, the eye is delicate because its anterior segment, including the cornea, is exposed to the outside world and thus subject to wear and tear. To protect and maintain eye function while reducing disruptions from outside and inside the body, the eyes are equipped with defense mechanisms for both the anterior and posterior segments (1,2).
Funding: The Ozcan Research Group at UCLA acknowledges the support of ONR (Grant #N00014-22-1-2016).
Abstract: Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing. Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field, forming two independent imaging channels that perform amplitude-to-amplitude and phase-to-intensity transformations between the input and output planes within a compact optical design, axially spanning ~100 wavelengths. The intensity distributions of the output fields at these two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field, eliminating the need for any digital image reconstruction algorithms. We experimentally validated the efficacy of our complex field diffractive imager designs through 3D-printed prototypes operating at the terahertz spectrum, with the output amplitude and phase channel images closely aligning with our numerical simulations. We envision that this complex field imager will have various applications in security, biomedical imaging, sensing, and material science, among others.
Abstract: Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-consuming, and the throughput is limited. The existing von Neumann digital computing paradigm is also less suited for the implementation of highly parallel neural network architectures.^(1)
Funding: The Ozcan Research Group at UCLA acknowledges the support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Image denoising, one of the essential inverse problems, targets removing noise/artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250×λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field of view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30–40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating at the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
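For context, the digital baseline that such an all-optical denoiser sidesteps can be sketched in a few lines of numpy (a toy example with illustrative parameters, not a reimplementation of the paper): salt-and-pepper corruption followed by a 3×3 median filter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean test image: a bright square on a dark background.
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0

# Salt-and-pepper corruption: re-draw ~10% of pixels as 0 or 1 at random.
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.1
noisy[mask] = rng.integers(0, 2, mask.sum()).astype(float)

def median3x3(img):
    """3x3 median filter with edge replication (a classic digital baseline)."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

denoised = median3x3(noisy)
print("MSE noisy   :", np.mean((noisy - clean) ** 2))
print("MSE denoised:", np.mean((denoised - clean) ** 2))
```

The diffractive denoiser in the abstract performs an analogous noise-rejection operation passively, at the speed of light, rather than by iterating over pixel neighborhoods on a processor.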
Funding: Supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Nonlinear encoding of optical information can be achieved using various forms of data representation. Here, we analyze the performance of different nonlinear information encoding strategies that can be employed in diffractive optical processors based on linear materials and shed light on their utility and performance gaps compared to state-of-the-art digital deep neural networks. For a comprehensive evaluation, we used different datasets to compare the statistical inference performance of simpler-to-implement nonlinear encoding strategies that involve, e.g., phase encoding, against data repetition-based nonlinear encoding strategies. We show that data repetition within a diffractive volume (e.g., through an optical cavity or cascaded introduction of the input data) causes the loss of the universal linear transformation capability of a diffractive optical processor. Therefore, data repetition-based diffractive blocks cannot provide optical analogs to the fully connected or convolutional layers commonly employed in digital neural networks. However, they can still be effectively trained for specific inference tasks and achieve enhanced accuracy, benefiting from the nonlinear encoding of the input information. Our results also reveal that phase encoding of the input information without data repetition provides a simpler nonlinear encoding strategy with comparable statistical inference accuracy to data repetition-based diffractive processors. Our analyses and conclusions would be of broad interest for exploring the push-pull relationship between linear material-based diffractive optical systems and nonlinear encoding strategies in visual information processors.
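The phase-encoding strategy discussed above can be demonstrated numerically (a minimal sketch; the random matrix merely stands in for a trained diffractive volume): encode a real input x as exp(iπx), apply a fixed linear operator, and detect intensity. The end-to-end map is then nonlinear in x even though every optical element in between is linear.

```python
import numpy as np

rng = np.random.default_rng(2)

# A fixed complex linear operator, standing in for a linear diffractive volume.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

def forward(x):
    """Phase-encode a real input, propagate linearly, detect intensity."""
    field_in = np.exp(1j * np.pi * x)   # nonlinear data representation
    field_out = A @ field_in            # linear propagation
    return np.abs(field_out) ** 2       # intensity detection at the sensor

x1 = rng.random(4)
x2 = rng.random(4)

# Superposition fails at the system level: f(x1+x2) != f(x1) + f(x2),
# so the processor computes a nonlinear function of the encoded data.
lhs = forward(x1 + x2)
rhs = forward(x1) + forward(x2)
print("max |f(x1+x2) - (f(x1)+f(x2))| =", np.max(np.abs(lhs - rhs)))
```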
Funding: The Ozcan Research Group at UCLA acknowledges the support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields of view (FOVs) if the total number (N) of optimizable phase-only diffractive features is ≥ ~2N_iN_o, where N_i and N_o refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function H of a diffractive network, corresponding to a given, arbitrarily selected linear intensity transformation, can be written as H(m,n;m′,n′) = |h(m,n;m′,n′)|^2, where h is the spatially coherent point spread function of the same diffractive network, and (m,n) and (m′,n′) define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N ≥ ~2N_iN_o. We also report the design of spatially incoherent diffractive networks for the linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
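The relation H(m,n;m′,n′) = |h(m,n;m′,n′)|^2 and the counting condition N ≥ ~2N_iN_o can be checked with a small numerical sketch (illustrative sizes; the random complex matrix stands in for a diffractive network's coherent point spread function):

```python
import numpy as np

rng = np.random.default_rng(3)

Ni, No = 9, 9                     # input / output pixel counts
# Coherent point spread function of a diffractive network: complex No x Ni matrix h.
h = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))

# Under spatially incoherent illumination, intensities (not fields) add,
# so the intensity point spread function is the elementwise |h|^2.
H = np.abs(h) ** 2

# Time-averaged output intensity for an arbitrary input intensity pattern:
I_in = rng.random(Ni)
I_out = H @ I_in

# Rule of thumb from the text: the number of optimizable phase-only
# diffractive features should satisfy N >= ~2 * Ni * No.
N_required = 2 * Ni * No
print("minimum diffractive features N ~", N_required)
```

Because every entry of H is non-negative, such a processor can only realize non-negative intensity transformations, which is exactly the class of linear maps the abstract addresses.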
Abstract: Quantitative phase imaging (QPI) is a label-free computational imaging technique used in various fields, including biology and medical research. Modern QPI systems typically rely on digital processing using iterative algorithms for phase retrieval and image reconstruction. Here, we report a diffractive optical network trained to convert the phase information of input objects positioned behind random diffusers into intensity variations at the output plane, all-optically performing phase recovery and quantitative imaging of phase objects completely hidden by unknown, random phase diffusers. This QPI diffractive network is composed of successive diffractive layers, axially spanning in total ~70λ, where λ is the illumination wavelength; unlike existing digital image reconstruction and phase retrieval methods, it forms an all-optical processor that does not require external power beyond the illumination beam to complete its QPI reconstruction at the speed of light propagation. This all-optical diffractive processor can provide a low-power, high-frame-rate, and compact alternative for quantitative imaging of phase objects through random, unknown diffusers and can operate at different parts of the electromagnetic spectrum for various applications in biomedical imaging and sensing. The presented QPI diffractive designs can be integrated onto the active area of standard CCD/CMOS-based image sensors to convert an existing optical microscope into a diffractive QPI microscope, performing phase recovery and image reconstruction on a chip through light diffraction within passive structured layers.
Funding: The Ozcan Research Group at UCLA acknowledges the support of the NSF Biophotonics Program.
Abstract: Histological staining is the gold standard for tissue examination in clinical pathology and life-science research; it visualizes tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and inaccessible in resource-limited settings. Deep learning techniques have created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, have been extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches have also been used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Funding: Supported by the Department of Energy (Grant #DE-SC0016925).
Abstract: Many exciting terahertz imaging applications, such as non-destructive evaluation, biomedical diagnosis, and security screening, have historically been limited in practical usage due to the raster-scanning requirement of imaging systems, which imposes very low imaging speeds. However, recent advancements in terahertz imaging systems have greatly increased the imaging throughput and brought the promising potential of terahertz radiation from research laboratories closer to real-world applications. Here, we review the development of terahertz imaging technologies from both hardware and computational imaging perspectives. We introduce and compare different types of hardware enabling frequency-domain and time-domain imaging using various thermal, photon, and field image sensor arrays. We discuss how different imaging hardware and computational imaging algorithms provide opportunities for capturing time-of-flight, spectroscopic, phase, and intensity image data at high throughputs. Furthermore, new prospects and challenges for the development of future high-throughput terahertz imaging systems are briefly introduced.
Abstract: Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method reconstructs the phase and amplitude images of the objects using only one hologram, requiring fewer measurements while also being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood and Pap smears and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
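The forward model behind such in-line holography, which the network effectively learns to invert, is free-space propagation. A minimal angular spectrum implementation (numpy-only; the wavelength, sampling, and object are illustrative, not the paper's experimental parameters) shows how a phase object produces an intensity-only hologram:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a 2D complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Keep propagating waves only; evanescent components are suppressed.
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# A weak phase object illuminated by a unit-amplitude plane wave.
n = 64
x = (np.arange(n) - n // 2) * 1e-6             # 1 um sampling pitch
X, Y = np.meshgrid(x, x)
obj = np.exp(1j * 0.5 * np.exp(-(X**2 + Y**2) / (5e-6) ** 2))

hologram_field = angular_spectrum(obj, wavelength=532e-9, dx=1e-6, z=200e-6)
hologram = np.abs(hologram_field) ** 2          # what the sensor actually records
# Naively back-propagating this intensity-only record produces twin-image
# artifacts -- the ambiguity that the trained network resolves from one hologram.
print("hologram contrast:", hologram.max() - hologram.min())
```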
Funding: The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (ERC, PATHS-UP), the Army Research Office (ARO, W911NF-13-1-0419 and W911NF-13-1-0197), the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF INSPIRE Award, the NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, the National Institutes of Health (NIH, R21EB023115), the Howard Hughes Medical Institute (HHMI), the Vodafone Americas Foundation, the Mary Kay Foundation, and the Steven & Alexandra Cohen Foundation.
Abstract: Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with Hematoxylin and Eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample preparation-related costs, and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
Funding: Funded by the Army Research Office (ARO, W56HZV-16-C-0122). The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (ERC, PATHS-UP), the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, an NSF Emerging Frontiers in Research and Innovation (EFRI) Award, an NSF INSPIRE Award, the NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, the National Institutes of Health, the Howard Hughes Medical Institute (HHMI), the Vodafone Americas Foundation, the Mary Kay Foundation, and the Steven & Alexandra Cohen Foundation.
Abstract: We report a deep learning-enabled, field-portable, and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with pulsed red, green, and blue light-emitting diodes. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and, compared to standard imaging flow cytometers, provides extreme reductions in cost, size, and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of their micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) at six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.
Funding: The Ozcan Research Group at UCLA gratefully acknowledges the support of the Presidential Early Career Award for Scientists and Engineers (PECASE), the Army Research Office (ARO, W911NF-13-1-0419 and W911NF-13-1-0197), the ARO Life Sciences Division, the ARO Young Investigator Award, the National Science Foundation (NSF) CAREER Award, the NSF CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF EAGER Award, the Office of Naval Research (ONR), the Howard Hughes Medical Institute (HHMI), and the National Institutes of Health (NIH) Director's New Innovator Award DP2OD006427 from the Office of the Director, National Institutes of Health. This work is based on research performed in a laboratory renovated by the National Science Foundation under Grant No. 0963183, an award funded under the American Recovery and Reinvestment Act of 2009 (ARRA).
Abstract: Wide field-of-view (FOV) and high-resolution imaging requires microscopy modalities with large space-bandwidth products. Lensfree on-chip microscopy decouples resolution from FOV and can achieve a space-bandwidth product greater than one billion under unit magnification using state-of-the-art optoelectronic sensor chips and pixel super-resolution techniques. However, using vertical illumination, the effective numerical aperture (NA) that can be achieved with an on-chip microscope is limited by a poor signal-to-noise ratio (SNR) at high spatial frequencies and imaging artifacts that arise as a result of the relatively narrow acceptance angles of the sensor's pixels. Here, we report, for the first time, a synthetic aperture-based on-chip microscope in which the illumination angle is scanned across the surface of a dome to increase the effective NA of the reconstructed lensfree image to 1.4, achieving, e.g., 250-nm resolution at 700-nm wavelength under unit magnification. This synthetic aperture approach not only represents the largest NA achieved to date using an on-chip microscope but also enables color imaging of connected tissue samples, such as pathology slides, by achieving robust phase recovery without the need for multi-height scanning or any prior information about the sample. To validate the effectiveness of this synthetic aperture-based, partially coherent, holographic on-chip microscope, we have successfully imaged color-stained cancer tissue slides as well as unstained Papanicolaou smears across a very large FOV of 20.5 mm^(2). This compact on-chip microscope based on a synthetic aperture approach could be useful for various applications in medicine, the physical sciences, and engineering that demand high-resolution wide-field imaging.
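The numbers quoted above are consistent with the Abbe diffraction limit; a quick check with the stated parameters (λ = 700 nm, synthetic NA = 1.4, FOV = 20.5 mm²; the space-bandwidth estimate assumes Nyquist sampling at half the resolution, a rough convention rather than the paper's exact definition):

```python
# Abbe half-pitch resolution for the synthetic-aperture on-chip microscope:
wavelength = 700e-9   # m, illumination wavelength
na = 1.4              # effective numerical aperture after aperture synthesis

resolution = wavelength / (2 * na)
print(f"half-pitch resolution: {resolution * 1e9:.0f} nm")  # -> 250 nm

# Rough space-bandwidth product: FOV divided by the Nyquist pixel area.
fov = 20.5e-6                      # m^2 (20.5 mm^2)
nyquist_pixel = resolution / 2     # sample at twice the resolution
sbp = fov / nyquist_pixel**2
print(f"approx. space-bandwidth product: {sbp:.2e} pixels")
```

The estimate lands above one billion, matching the "space-bandwidth product greater than one billion" claim for pixel super-resolved lensfree on-chip microscopy.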
Funding: The Ozcan Research Group at UCLA acknowledges the support of Fujikura (Japan).
Abstract: A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency, and computation speed. Diffractive deep neural networks (D^(2)NNs) form such an optical computing framework that benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D^(2)NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping, and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D^(2)NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D^(2)NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N=14 and N=30 D^(2)NNs achieve blind-testing accuracies of 61.14±0.23% and 62.13±0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D^(2)NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems.
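The pruning step, selecting a small ensemble from a large pool of trained models, can be sketched with a toy greedy selector (illustrative only; the paper's actual pruning algorithm and D^(2)NN outputs are not reproduced here): repeatedly add whichever candidate most improves majority-vote accuracy on a validation set.

```python
import numpy as np

def vote_accuracy(preds, labels):
    """Majority-vote accuracy of a set of binary classifiers (rows = classifiers)."""
    votes = (preds.mean(axis=0) >= 0.5).astype(int)  # ties resolved toward 1
    return np.mean(votes == labels)

def greedy_prune(preds, labels, max_size):
    """Greedily grow an ensemble, keeping the member that helps accuracy most."""
    chosen = []
    for _ in range(max_size):
        best, best_acc = None, -1.0
        for i in range(preds.shape[0]):
            if i in chosen:
                continue
            acc = vote_accuracy(preds[chosen + [i]], labels)
            if acc > best_acc:
                best, best_acc = i, acc
        chosen.append(best)
    return chosen, vote_accuracy(preds[chosen], labels)

# Toy validation set: three classifiers, each 75% accurate, with disjoint errors,
# so their majority vote corrects every individual mistake.
labels = np.array([1, 0, 1, 0])
preds = np.array([[1, 0, 1, 1],    # wrong on sample 4
                  [1, 0, 0, 0],    # wrong on sample 3
                  [1, 1, 1, 0]])   # wrong on sample 2

chosen, acc = greedy_prune(preds, labels, max_size=3)
print("chosen members:", chosen, "ensemble accuracy:", acc)
```

The toy example illustrates why diversely engineered models (here, via different error patterns; in the paper, via different passive input filters) are the key ingredient: an ensemble of identical models would gain nothing from voting.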
Funding: Supported by grants from NC TraCS, NIH's Clinical and Translational Science Awards (CTSA, NIH grant 1UL1TR001111), and by the use of the Analytical Instrumentation Facility (AIF) at NC State, which is supported by the State of North Carolina and the National Science Foundation (NSF).
Abstract: Controlling postprandial glucose levels in diabetic patients is critical to achieving the tight glycemic control that decreases the risk of developing long-term micro- and macrovascular complications. Herein, we report a glucose-responsive oral insulin delivery system based on Fc receptor (FcRn)-targeted liposomes with a glucose-sensitive hyaluronic acid (HA) shell for postprandial glycemic regulation. After oral administration, the HA shell can quickly detach in the presence of increasing intestinal glucose concentration due to the competitive binding of glucose with the phenylboronic acid groups conjugated to the HA. The exposed Fc groups on the surface of the liposomes then facilitate enhanced intestinal absorption through an FcRn-mediated transport pathway. In vivo studies on chemically induced type 1 diabetic mice show that this oral glucose-responsive delivery approach can effectively reduce postprandial blood glucose excursions. This work is the first demonstration of an oral insulin delivery system directly triggered by increasing postprandial glucose concentrations in the intestine to provide on-demand insulin release with ease of administration.