Journal Articles
41 articles found
1. In-flow holographic tomography boosts lipid droplet quantification
Authors: Michael John Fanous, Aydogan Ozcan. Opto-Electronic Advances (SCIE, EI, CAS, CSCD), 2023, Issue 6, pp. 1-3 (3 pages).
In their recently published paper in Opto-Electronic Advances, Pietro Ferraro and his colleagues report on a new high-throughput tomographic phase instrument that precisely quantifies intracellular lipid droplets (LDs).^(1) LDs are lipid storage organelles found in most cell types and play an active role in critical biological processes, including energy metabolism, membrane homeostasis.
Keywords: HOLOGRAPHIC, FLOW, precisely
2. Emerging Advances to Transform Histopathology Using Virtual Staining (Cited: 4)
Authors: Yair Rivenson, Kevin de Haan, W. Dean Wallace, Aydogan Ozcan. Biomedical Engineering Frontiers, 2020, Issue 1, pp. 13-23 (11 pages).
In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high throughput digital scanning microscopes have ushered in the era of digital pathology that, along with recent advances in machine vision, have opened up new possibilities for Computer-Aided-Diagnoses. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional and nondirectly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.
Keywords: stained, WIDESPREAD, DIGIT
3. Label-Free Virtual HER2 Immunohistochemical Staining of Breast Tissue using Deep Learning
Authors: Bijie Bai, Hongda Wang, Yuzhu Li, Kevin de Haan, Francesco Colonnese, Yujie Wan, Jingyi Zuo, Ngan B. Doan, Xiaoran Zhang, Yijie Zhang, Jingxi Li, Xilin Yang, Wenjie Dong, Morgan Angus Darrow, Elham Kamangar, Han Sung Lee, Yair Rivenson, Aydogan Ozcan. Biomedical Engineering Frontiers, 2022, Issue 1, pp. 422-436 (15 pages).
The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs) to reveal that the HER2 scores determined by inspecting virtual IHC images are as accurate as their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit a comparable staining quality in the level of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflow.
Keywords: HER2, consuming, DEEP
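The entry above describes training a conditional GAN to map autofluorescence images of unstained tissue into bright-field-equivalent HER2 IHC images. The sketch below is a minimal, hypothetical illustration of that kind of adversarial-plus-pixel-wise objective; the network sizes, loss weighting, and random stand-in data are assumptions made here for illustration, not the authors' implementation.

```python
# Minimal sketch (assumption): adversarial + L1 objective for virtual staining,
# mapping autofluorescence inputs to bright-field-like RGB outputs.
# Network sizes, loss weights, and the random stand-in data are illustrative only.
import torch
import torch.nn as nn

generator = nn.Sequential(            # autofluorescence (1 ch) -> virtual stain (3 ch)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
discriminator = nn.Sequential(        # judges (input, stained) pairs patch-wise
    nn.Conv2d(1 + 3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

autofluor = torch.rand(4, 1, 64, 64)  # stand-in: label-free autofluorescence patches
target = torch.rand(4, 3, 64, 64)     # stand-in: co-registered chemically stained patches

for step in range(2):                 # toy loop; real training runs far longer
    fake = generator(autofluor)

    # Discriminator: real pairs -> 1, generated pairs -> 0
    d_real = discriminator(torch.cat([autofluor, target], dim=1))
    d_fake = discriminator(torch.cat([autofluor, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the ground-truth stain
    d_fake = discriminator(torch.cat([autofluor, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, target)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```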
4. All-optical image denoising using a diffractive visual processor
Authors: Çağatay Işıl, Tianyi Gan, Fazil Onuralp Ardic, Koray Mentesoglu, Jagrit Digani, Huseyin Karaca, Hanlong Chen, Jingxi Li, Deniz Mengu, Mona Jarrahi, Kaan Akşit, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CSCD), 2024, Issue 3, pp. 429-445 (17 pages).
Image denoising, one of the essential inverse problems, aims to remove noise/artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250×λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image Field-of-View (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt and pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating at the terahertz spectrum. Owing to their speed, power-efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
Keywords: REMOVE, RENDERING, HOLOGRAPHIC
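The diffractive denoiser entry above relies on a cascade of passive, phase-only layers separated by free-space propagation. The sketch below illustrates such a forward model using the angular spectrum method; the grid size, wavelength, layer spacing, and the random (untrained) phase masks are assumptions for illustration only, standing in for the deep-learning-optimized layers described in the abstract.

```python
# Minimal sketch (assumption): forward model of a phase-only diffractive processor,
# i.e., free-space propagation (angular spectrum method) alternating with phase masks.
# Layer count, pixel pitch, and wavelength below are illustrative values only.
import numpy as np

n, pitch, wavelength = 128, 0.4e-3, 0.75e-3   # grid size, pixel pitch (m), THz-range wavelength (m)
z_layer = 20 * wavelength                     # axial spacing between layers

fx = np.fft.fftfreq(n, d=pitch)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - FX**2 - FY**2))
H = np.exp(1j * kz * z_layer)                 # angular-spectrum transfer function

def propagate(field):
    """Free-space propagation over one inter-layer distance."""
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
phase_masks = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(4)]  # stand-in for trained masks

def diffractive_processor(input_field, masks):
    """Pass a complex input field through successive phase-only diffractive layers."""
    field = input_field
    for phase in masks:
        field = propagate(field) * np.exp(1j * phase)   # modulate at each layer
    return propagate(field)                             # propagate to the output plane

noisy = np.ones((n, n), dtype=complex)
noisy[rng.random((n, n)) < 0.05] = 0.0                  # toy salt-and-pepper-like amplitude noise
output_intensity = np.abs(diffractive_processor(noisy, masks=phase_masks)) ** 2
print(output_intensity.shape)
```

In the published design the phase masks are the trainable parameters; here they are random, so the output is scrambled rather than denoised, but the propagation-plus-modulation structure is the same.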
5. OAM-based diffractive all-optical classification
Authors: Md Sadman Sakib Rahman, Aydogan Ozcan. Advanced Photonics (SCIE, EI, CAS, CSCD), 2024, Issue 1, pp. 6-7 (2 pages).
Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-consuming, and the throughput is limited. The existing von Neumann digital computing paradigm is also less suited for the implementation of highly parallel neural network architectures.^(1)
Keywords: consuming, DIGIT, NEURAL
6. Massively parallel universal linear transformations using a wavelength-multiplexed diffractive optical network (Cited: 2)
Authors: Jingxi Li, Tianyi Gan, Bijie Bai, Yi Luo, Mona Jarrahi, Aydogan Ozcan. Advanced Photonics (SCIE, EI, CAS, CSCD), 2023, Issue 1, pp. 27-49 (23 pages).
Large-scale linear operations are the cornerstone for performing complex computational tasks. Using optical computing to perform linear transformations offers potential advantages in terms of speed, parallelism, and scalability. Previously, the design of successive spatially engineered diffractive surfaces forming an optical network was demonstrated to perform statistical inference and compute an arbitrary complex-valued linear transformation using narrowband illumination. We report deep-learning-based design of a massively parallel broadband diffractive neural network for all-optically performing a large group of arbitrarily selected, complex-valued linear transformations between an input and output field of view, each with N_(i) and N_(o) pixels, respectively. This broadband diffractive processor is composed of N_(w) wavelength channels, each of which is uniquely assigned to a distinct target transformation; a large set of arbitrarily selected linear transformations can be individually performed through the same diffractive network at different illumination wavelengths, either simultaneously or sequentially (wavelength scanning). We demonstrate that such a broadband diffractive network, regardless of its material dispersion, can successfully approximate N_(w) unique complex-valued linear transforms with a negligible error when the number of diffractive neurons (N) in its design is ≥ 2N_(w)N_(i)N_(o). We further report that the spectral multiplexing capability can be increased by increasing N; our numerical analyses confirm these conclusions for N_(w) > 180 and indicate that it can further increase to N_(w) ~ 2000, depending on the upper bound of the approximation error. Massively parallel, wavelength-multiplexed diffractive networks will be useful for designing high-throughput intelligent machine-vision systems and hyperspectral processors that can perform statistical inference and analyze objects/scenes with unique spectral properties.
Keywords: optical neural network, deep learning, diffractive optical network, wavelength multiplexing, optical computing
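The abstract above states that N_(w) distinct complex-valued transforms between N_(i) input and N_(o) output pixels can be approximated once the diffractive neuron count satisfies N ≥ 2N_(w)N_(i)N_(o). Below is a short illustrative calculation of that bound, with example pixel and channel counts chosen here for concreteness.

```python
# Illustrative check of the capacity condition N >= 2 * Nw * Ni * No reported in the abstract.
# The example pixel counts and wavelength-channel counts below are arbitrary choices.
def min_diffractive_neurons(n_wavelengths: int, n_in_pixels: int, n_out_pixels: int) -> int:
    """Smallest neuron count satisfying the reported bound N >= 2 * Nw * Ni * No."""
    return 2 * n_wavelengths * n_in_pixels * n_out_pixels

Ni = No = 8 * 8            # e.g., 8x8 input and output fields of view
for Nw in (1, 4, 16, 180):
    N_min = min_diffractive_neurons(Nw, Ni, No)
    print(f"Nw={Nw:>3}: at least {N_min:,} diffractive neurons")
```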
7. Deep learning-enabled virtual histological staining of biological samples (Cited: 1)
Authors: Bijie Bai, Xilin Yang, Yuzhu Li, Yijie Zhang, Nir Pillar, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2023, Issue 3, pp. 335-354 (20 pages).
Histological staining is the gold standard for tissue examination in clinical pathology and life-science research, which visualizes the tissue and cellular structures using chromatic dyes or fluorescence labels to aid the microscopic assessment of tissue. However, the current histological staining workflow requires tedious sample preparation steps, specialized laboratory infrastructure, and trained histotechnologists, making it expensive, time-consuming, and not accessible in resource-limited settings. Deep learning techniques created new opportunities to revolutionize staining methods by digitally generating histological stains using trained neural networks, providing rapid, cost-effective, and accurate alternatives to standard chemical staining methods. These techniques, broadly referred to as virtual staining, were extensively explored by multiple research groups and demonstrated to be successful in generating various types of histological stains from label-free microscopic images of unstained samples; similar approaches were also used for transforming images of an already stained tissue sample into another type of stain, performing virtual stain-to-stain transformations. In this Review, we provide a comprehensive overview of the recent research advances in deep learning-enabled virtual histological staining techniques. The basic concepts and the typical workflow of virtual staining are introduced, followed by a discussion of representative works and their technical innovations. We also share our perspectives on the future of this emerging field, aiming to inspire readers from diverse scientific fields to further expand the scope of deep learning-enabled virtual histological staining techniques and their applications.
Keywords: DEEP, GENERATING, consuming
8. Universal linear intensity transformations using spatially incoherent diffractive processors (Cited: 1)
Authors: Md Sadman Sakib Rahman, Xilin Yang, Jingxi Li, Bijie Bai, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CSCD), 2023, Issue 9, pp. 1830-1856 (27 pages).
Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is ≥ ~2N_(i)N_(o), where N_(i) and N_(o) refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function (H) of a diffractive network, corresponding to a given, arbitrarily-selected linear intensity transformation, can be written as H(m,n;m′,n′) = |h(m,n;m′,n′)|^(2), where h is the spatially coherent point spread function of the same diffractive network, and (m,n) and (m′,n′) define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N ≥ ~2N_(i)N_(o). We also report the design of spatially incoherent diffractive networks for linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
Keywords: spatially, INTENSITY, ARBITRARY
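The abstract above uses the relation H(m,n;m′,n′) = |h(m,n;m′,n′)|^(2) between the incoherent intensity point spread function and the coherent one. The sketch below numerically illustrates why time-averaging over random input phases yields that relation; the random complex matrix is a stand-in for a diffractive network's coherent response, not a trained design, and the check is statistical.

```python
# Minimal sketch (assumption): under spatially incoherent light, interference cross-terms
# average out, so output intensities from different input pixels add through H = |h|^2.
# The random "coherent system" matrix h below is a stand-in, not a trained diffractive network.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 16, 16
h = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))  # coherent PSF as a matrix

# Excite two input pixels with independent, rapidly varying random phases and time-average
# the detected output intensities.
pixels = [3, 11]
n_realizations = 100_000
phases = rng.uniform(0, 2 * np.pi, size=(n_realizations, len(pixels)))
fields = np.zeros((n_realizations, n_in), dtype=complex)
fields[:, pixels] = np.exp(1j * phases)
intensity_avg = np.mean(np.abs(fields @ h.T) ** 2, axis=0)

# Incoherent prediction: intensities add, weighted by |h|^2 of each excited input pixel.
expected = np.abs(h[:, pixels[0]]) ** 2 + np.abs(h[:, pixels[1]]) ** 2
print(np.max(np.abs(intensity_avg - expected) / expected))   # close to zero for many realizations
```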
9. Optical information transfer through random unknown diffusers using electronic encoding and diffractive decoding
Authors: Yuhang Li, Tianyi Gan, Bijie Bai, Cagatay Isıl, Mona Jarrahi, Aydogan Ozcan. Advanced Photonics (SCIE, EI, CAS, CSCD), 2023, Issue 4, pp. 85-99 (15 pages).
Free-space optical information transfer through diffusive media is critical in many applications, such as biomedical devices and optical communication, but remains challenging due to random, unknown perturbations in the optical path. We demonstrate an optical diffractive decoder with electronic encoding to accurately transfer the optical information of interest, corresponding to, e.g., any arbitrary input object or message, through unknown random phase diffusers along the optical path. This hybrid electronic-optical model, trained using supervised learning, comprises a convolutional neural network-based electronic encoder and successive passive diffractive layers that are jointly optimized. After their joint training using deep learning, our hybrid model can transfer optical information through unknown phase diffusers, demonstrating generalization to new random diffusers never seen before. The resulting electronic-encoder and optical-decoder model was experimentally validated using a 3D-printed diffractive network that axially spans <70λ, where λ = 0.75 mm is the illumination wavelength in the terahertz spectrum, carrying the desired optical information through random unknown diffusers. The presented framework can be physically scaled to operate at different parts of the electromagnetic spectrum, without retraining its components, and would offer low-power and compact solutions for optical information transfer in free space through unknown random diffusive media.
Keywords: optical information transfer, electronic encoding, optical decoder, diffractive neural network, DIFFUSERS
10. High-throughput terahertz imaging: progress and challenges
Authors: Xurong Li, Jingxi Li, Yuhang Li, Aydogan Ozcan, Mona Jarrahi. Light: Science & Applications (SCIE, EI, CSCD), 2023, Issue 10, pp. 2053-2073 (21 pages).
Many exciting terahertz imaging applications, such as non-destructive evaluation, biomedical diagnosis, and security screening, have been historically limited in practical usage due to the raster-scanning requirement of imaging systems, which imposes very low imaging speeds. However, recent advancements in terahertz imaging systems have greatly increased the imaging throughput and brought the promising potential of terahertz radiation from research laboratories closer to real-world applications. Here, we review the development of terahertz imaging technologies from both hardware and computational imaging perspectives. We introduce and compare different types of hardware enabling frequency-domain and time-domain imaging using various thermal, photon, and field image sensor arrays. We discuss how different imaging hardware and computational imaging algorithms provide opportunities for capturing time-of-flight, spectroscopic, phase, and intensity image data at high throughputs. Furthermore, the new prospects and challenges for the development of future high-throughput terahertz imaging systems are briefly introduced.
Keywords: HARDWARE, CLOSER, TERAHERTZ
11. Snapshot multispectral imaging using a diffractive optical network
Authors: Deniz Mengu, Anika Tabassum, Mona Jarrahi, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CSCD), 2023, Issue 5, pp. 789-808 (20 pages).
Multispectral imaging has been used for numerous applications in, e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field-of-view. This diffractive multispectral imager performs spatially-coherent imaging over a large spectrum, and at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal-plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9, and 16 unique spectral bands within the visible spectrum, based on passive spatially-structured diffractive surfaces, with a compact design that axially spans ~72λ_(m), where λ_(m) is the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially repeating virtual spectral filter array with 2×2 = 4 unique bands at the terahertz spectrum. Due to their compact form factor and computation-free, power-efficient and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and be used at different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
Keywords: SPECTRUM, SPECTRAL, spatially
12. All-optical image classification through unknown random diffusers using a single-pixel diffractive network
Authors: Bijie Bai, Yuhang Li, Yi Luo, Xurong Li, Ege Cetintas, Mona Jarrahi, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CSCD), 2023, Issue 4, pp. 570-584 (15 pages).
Classification of an object behind a random and unknown scattering medium sets a challenging task for computational imaging and machine vision fields. Recent deep learning-based approaches demonstrated the classification of objects using diffuser-distorted patterns collected by an image sensor. These methods demand relatively large-scale computing using deep neural networks running on digital computers. Here, we present an all-optical processor to directly classify unknown objects through unknown, random phase diffusers using broadband illumination detected with a single pixel. A set of transmissive diffractive layers, optimized using deep learning, forms a physical network that all-optically maps the spatial information of an input object behind a random diffuser into the power spectrum of the output light detected through a single pixel at the output plane of the diffractive network. We numerically demonstrated the accuracy of this framework using broadband radiation to classify unknown handwritten digits through random new diffusers, never used during the training phase, and achieved a blind testing accuracy of 87.74±1.12%. We also experimentally validated our single-pixel broadband diffractive network by classifying handwritten digits "0" and "1" through a random diffuser using terahertz waves and a 3D-printed diffractive network. This single-pixel all-optical object classification system through random diffusers is based on passive diffractive layers that process broadband input light and can operate at any part of the electromagnetic spectrum by simply scaling the diffractive features proportional to the wavelength range of interest. These results have various potential applications in, e.g., biomedical imaging, security, robotics, and autonomous driving.
Keywords: network, PROCESSOR, RANDOM
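The entry above maps object class onto the power spectrum detected by a single output pixel. The toy sketch below shows only the final readout step under an assumed one-wavelength-bin-per-class encoding; the wavelength values and the simulated spectrum are made up for illustration and are not the paper's actual spectral assignment.

```python
# Minimal sketch (assumption): spectral class readout at a single pixel. Each class is assigned
# one wavelength bin; the predicted class is the bin carrying the most detected power.
# The bin wavelengths and the measured spectrum below are simulated stand-in values.
import numpy as np

class_wavelengths_mm = [0.80, 0.85, 0.90, 0.95]          # assumed: one spectral bin per class

def classify_from_single_pixel_spectrum(power_per_bin):
    """Return the class whose assigned wavelength bin carries the highest detected power."""
    return int(np.argmax(power_per_bin))

rng = np.random.default_rng(2)
measured_power = rng.random(len(class_wavelengths_mm))   # stand-in for the detected power spectrum
measured_power[2] += 1.0                                 # pretend the diffractive network routed
                                                         # most energy into class 2's bin
pred = classify_from_single_pixel_spectrum(measured_power)
print(f"predicted class: {pred} (readout wavelength {class_wavelengths_mm[pred]} mm)")
```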
13. PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning (Cited: 22)
Authors: Yair Rivenson, Tairan Liu, Zhensong Wei, Yibo Zhang, Kevin de Haan, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2019, Issue 1, pp. 983-993 (11 pages).
Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform the quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples that are histologically stained. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with Hematoxylin and Eosin, Jones' stain, and Masson's trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample preparation related costs and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
Keywords: network, IMAGE, PHASE
14. Deep learning in holography and coherent imaging (Cited: 18)
Authors: Yair Rivenson, Yichen Wu, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2019, Issue 1, pp. 437-444 (8 pages).
Recent advances in deep learning have given rise to a new paradigm of holographic image reconstruction and phase recovery techniques with real-time performance. Through data-driven approaches, these emerging techniques have overcome some of the challenges associated with existing holographic image reconstruction methods while also minimizing the hardware requirements of holography. These recent advances open up a myriad of new opportunities for the use of coherent imaging systems in biomedical and engineering research and related applications.
Keywords: COHERENT, HOLOGRAPHIC, OVERCOME
15. A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples (Cited: 15)
Authors: Zoltán Göröcs, Miu Tamamitsu, Vittorio Bianco, Patrick Wolf, Shounak Roy, Koyoshi Shindo, Kyrollos Yanny, Yichen Wu, Hatice Ceylan Koydemir, Yair Rivenson, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2018, Issue 1, pp. 416-427 (12 pages).
We report a deep learning-enabled field-portable and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with red, green, and blue light-emitting diodes that are pulsed. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and compared to standard imaging flow cytometers, it provides extreme reductions of cost, size and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of its micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) in six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.
Keywords: FLOW, HOLOGRAPHIC, SIMPLICITY
16. Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram (Cited: 13)
Authors: Yichen Wu, Yilin Luo, Gunvant Chaudhari, Yair Rivenson, Ayfer Calis, Kevin de Haan, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2019, Issue 1, pp. 936-942 (7 pages).
Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes as a result of twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this "bright-field holography" method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven deep-learning-based imaging method bridges the contrast gap between coherent and incoherent imaging, and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.
Keywords: enable, holographic, bridges
17. Design of task-specific optical systems using broadband diffractive neural networks (Cited: 11)
Authors: Yi Luo, Deniz Mengu, Nezih T. Yardimci, Yair Rivenson, Muhammed Veli, Mona Jarrahi, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2019, Issue 1, pp. 124-137 (14 pages).
Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. Diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize handwritten digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
Keywords: NEURAL, networks, SPECIFIC
18. Handheld high-throughput plasmonic biosensor using computational on-chip imaging (Cited: 6)
Authors: Arif E. Cetin, Ahmet F. Coskun, Betty C. Galarreta, Min Huang, David Herman, Aydogan Ozcan, Hatice Altug. Light: Science & Applications (SCIE, EI, CAS), 2014, Issue 1, pp. 388-397 (10 pages).
We demonstrate a handheld on-chip biosensing technology that employs plasmonic microarrays coupled with a lens-free computational imaging system towards multiplexed and high-throughput screening of biomolecular interactions for point-of-care applications and resource-limited settings. This lightweight and field-portable biosensing device, weighing 60 g and standing 7.5 cm tall, utilizes a compact optoelectronic sensor array to record the diffraction patterns of plasmonic nanostructures under uniform illumination by a single light-emitting diode tuned to the plasmonic mode of the nanoapertures. Employing a sensitive plasmonic array design that is combined with lens-free computational imaging, we demonstrate label-free and quantitative detection of biomolecules with a protein layer thickness down to 3 nm. Integrating large-scale plasmonic microarrays, our on-chip imaging platform enables simultaneous detection of protein mono- and bilayers on the same platform over a wide range of biomolecule concentrations. In this handheld device, we also employ an iterative phase retrieval-based image reconstruction method, which offers the ability to digitally image a highly multiplexed array of sensors on the same plasmonic chip, making this approach especially suitable for high-throughput diagnostic applications in field settings.
Keywords: computational imaging, high-throughput biodetection, lens-free imaging, on-chip sensing, plasmonics, point-of-care diagnostics, TELEMEDICINE
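The handheld biosensor entry above mentions an iterative phase retrieval-based image reconstruction. The sketch below is a generic error-reduction loop for lens-free in-line imaging, alternating between sensor-plane amplitude replacement and an object-plane transmission constraint; the distances, wavelength, and simulated object are illustrative assumptions, and this is not the authors' specific pipeline.

```python
# Minimal sketch (assumption): generic iterative phase retrieval for a lens-free in-line geometry.
# Propagation distance, wavelength, pixel pitch, and the simulated object are illustrative only.
import numpy as np

n, pitch, wavelength, z = 256, 1.12e-6, 530e-9, 400e-6   # pixels, pixel pitch, wavelength, object-to-sensor distance (m)
fx = np.fft.fftfreq(n, d=pitch)
FX, FY = np.meshgrid(fx, fx)
kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - FX**2 - FY**2))

def propagate(field, distance):
    """Angular-spectrum propagation by a signed distance (negative = back-propagation)."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

# Simulate a measured diffraction-pattern amplitude from a simple transmissive object (stand-in data).
obj = np.ones((n, n), dtype=complex)
obj[96:160, 96:160] = 0.6
measured_amplitude = np.abs(propagate(obj, z))

# Iterative reconstruction: enforce the measured amplitude at the sensor plane and a weak
# physical constraint (transmission <= 1) at the object plane.
field = measured_amplitude.astype(complex)               # start with zero phase at the sensor
for _ in range(50):
    obj_est = propagate(field, -z)                       # back-propagate to the object plane
    obj_est = np.clip(np.abs(obj_est), 0.0, 1.0) * np.exp(1j * np.angle(obj_est))
    field = propagate(obj_est, z)                        # forward-propagate to the sensor plane
    field = measured_amplitude * np.exp(1j * np.angle(field))

recovered = np.abs(propagate(field, -z))
print(float(np.mean(np.abs(recovered - np.abs(obj)))))   # residual amplitude error vs. the simulated object
```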
19. Phase recovery and holographic image reconstruction using deep learning in neural networks (Cited: 5)
Authors: Yair Rivenson, Yibo Zhang, Harun Günaydın, Da Teng, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2017, Issue 1, pp. 192-200 (9 pages).
Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method is fast to compute and reconstructs phase and amplitude images of the objects using only one hologram, requiring fewer measurements in addition to being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood and Pap smears and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
Keywords: deep learning, HOLOGRAPHY, machine learning, neural networks, phase recovery
20. Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue (Cited: 7)
Authors: Yijie Zhang, Kevin de Haan, Yair Rivenson, Jingxi Li, Apostolos Delis, Aydogan Ozcan. Light: Science & Applications (SCIE, EI, CAS, CSCD), 2020, Issue 1, pp. 1273-1285 (13 pages).
Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time consuming, labour intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a "digital staining matrix", which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones' silver stain, and Masson's trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
Keywords: network, RENDERING, synthesis
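The entry above feeds a single network two inputs: autofluorescence images of the label-free tissue and a user-defined "digital staining matrix". The sketch below shows one plausible way such an input tensor could be assembled, with per-pixel stain weights that also allow blending; the stain list, region layout, and array shapes are assumptions for illustration, not the authors' data format.

```python
# Minimal sketch (assumption): assembling a label-free autofluorescence image and a pixel-wise
# "digital staining matrix" into a single multi-channel input for one virtual-staining network.
# Stain codes, blend weights, and array sizes below are illustrative choices only.
import numpy as np

H, W = 256, 256
stains = ["H&E", "Jones", "Masson"]                   # one channel per available stain

autofluorescence = np.random.rand(1, H, W).astype(np.float32)   # stand-in label-free image (1 channel)

# Digital staining matrix: per-pixel weights over the stain channels. Here the left half requests
# H&E, the right half requests Jones' stain, and a central band blends the two 50/50.
staining_matrix = np.zeros((len(stains), H, W), dtype=np.float32)
staining_matrix[0, :, : W // 2] = 1.0                 # H&E region
staining_matrix[1, :, W // 2 :] = 1.0                 # Jones' stain region
staining_matrix[:2, :, W // 2 - 16 : W // 2 + 16] = 0.5   # blended transition band

network_input = np.concatenate([autofluorescence, staining_matrix], axis=0)
print(network_input.shape)                            # (1 + number of stains, H, W), fed to one network
```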