Funding: NSF Biophotonics Program and the NIH/National Center for Advancing Translational Science UCLA CTSI Grant UL1TR001881.
Abstract: The immunohistochemical (IHC) staining of the human epidermal growth factor receptor 2 (HER2) biomarker is widely practiced in breast tissue analysis, preclinical studies, and diagnostic decisions, guiding cancer treatment and investigation of pathogenesis. HER2 staining demands laborious tissue treatment and chemical processing performed by a histotechnologist, which typically takes one day to prepare in a laboratory, increasing analysis time and associated costs. Here, we describe a deep learning-based virtual HER2 IHC staining method using a conditional generative adversarial network that is trained to rapidly transform autofluorescence microscopic images of unlabeled/label-free breast tissue sections into bright-field equivalent microscopic images, matching the standard HER2 IHC staining that is chemically performed on the same tissue sections. The efficacy of this virtual HER2 staining framework was demonstrated by quantitative analysis, in which three board-certified breast pathologists blindly graded the HER2 scores of virtually stained and immunohistochemically stained HER2 whole slide images (WSIs), revealing that the HER2 scores determined by inspecting the virtual IHC images are as accurate as those of their immunohistochemically stained counterparts. A second quantitative blinded study performed by the same diagnosticians further revealed that the virtually stained HER2 images exhibit comparable staining quality in terms of nuclear detail, membrane clearness, and absence of staining artifacts with respect to their immunohistochemically stained counterparts. This virtual HER2 staining framework bypasses the costly, laborious, and time-consuming IHC staining procedures in the laboratory and can be extended to other types of biomarkers to accelerate the IHC tissue staining used in life sciences and biomedical workflows.
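Conditional GANs for paired image-to-image translation of this kind are commonly trained pix2pix-style, combining an adversarial term with a per-pixel fidelity term against the chemically stained ground truth acquired from the same tissue section. The sketch below is a minimal, hypothetical NumPy illustration of such a combined generator objective; the L1 pixel penalty and the weight `lambda_pix` are assumptions about this common recipe, not details taken from the paper.

```python
import numpy as np

def generator_loss(d_fake, generated, target, lambda_pix=100.0):
    """Pix2pix-style generator objective (illustrative sketch).

    d_fake:    discriminator outputs in (0, 1) for the generated images
    generated: virtually stained image batch, shape (N, H, W, C)
    target:    chemically stained ground-truth batch, same shape
    """
    eps = 1e-12
    # Adversarial term: reward fooling the discriminator into calling fakes real.
    adv = -np.mean(np.log(d_fake + eps))
    # Per-pixel L1 fidelity to the paired histochemical stain.
    pix = np.mean(np.abs(generated - target))
    return adv + lambda_pix * pix

# Toy example with random "images" standing in for image batches.
rng = np.random.default_rng(0)
gen = rng.random((2, 8, 8, 3))
tgt = rng.random((2, 8, 8, 3))
d_fake = np.full(2, 0.5)  # an undecided discriminator
print(generator_loss(d_fake, gen, tgt))
```

In practice the two terms pull in complementary directions: the L1 term keeps the virtual stain pixel-accurate to the paired ground truth, while the adversarial term discourages the blurry outputs that a pure pixel loss tends to produce.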
Funding: This study was financially supported by the NSF Biophotonics Program (USA).
Abstract: In an age where digitization is widespread in clinical and preclinical workflows, pathology is still predominantly practiced by microscopic evaluation of stained tissue specimens affixed on glass slides. Over the last decade, new high-throughput digital scanning microscopes have ushered in the era of digital pathology that, along with recent advances in machine vision, has opened up new possibilities for computer-aided diagnosis. Despite these advances, the high infrastructural costs related to digital pathology and the perception that the digitization process is an additional and not directly reimbursable step have challenged its widespread adoption. Here, we discuss how emerging virtual staining technologies and machine learning can help to disrupt the standard histopathology workflow and create new avenues for the diagnostic paradigm that will benefit patients and healthcare systems alike via digital pathology.
Funding: The Ozcan Research Group at UCLA acknowledges the support of the NSF Engineering Research Center (ERC, PATHS-UP), the Army Research Office (ARO, W911NF-13-1-0419 and W911NF-13-1-0197), the ARO Life Sciences Division, the National Science Foundation (NSF) CBET Division Biophotonics Program, the NSF Emerging Frontiers in Research and Innovation (EFRI) Award, the NSF INSPIRE Award, the NSF Partnerships for Innovation: Building Innovation Capacity (PFI:BIC) Program, the National Institutes of Health (NIH, R21EB023115), the Howard Hughes Medical Institute (HHMI), the Vodafone Americas Foundation, the Mary Kay Foundation, and the Steven & Alexandra Cohen Foundation.
Abstract: Using a deep neural network, we demonstrate a digital staining technique, which we term PhaseStain, to transform quantitative phase images (QPI) of label-free tissue sections into images that are equivalent to the brightfield microscopy images of the same samples after histological staining. Through pairs of image data (QPI and the corresponding brightfield images, acquired after staining), we train a generative adversarial network and demonstrate the effectiveness of this virtual-staining approach using sections of human skin, kidney, and liver tissue, matching the brightfield microscopy images of the same samples stained with Hematoxylin and Eosin, Jones’ stain, and Masson’s trichrome stain, respectively. This digital-staining framework may further strengthen various uses of label-free QPI techniques in pathology applications and biomedical research in general, by eliminating the need for histological staining, reducing sample-preparation-related costs, and saving time. Our results provide a powerful example of some of the unique opportunities created by data-driven image transformations enabled by deep learning.
Funding: The Ozcan Group at UCLA acknowledges the support of the Koç Group, the National Science Foundation (PATHS-UP ERC), and the Howard Hughes Medical Institute. Y.W. also acknowledges the support of the SPIE John Kiel Scholarship.
Abstract: Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes arising from twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this “bright-field holography” method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven, deep-learning-based imaging method bridges the contrast gap between coherent and incoherent imaging, and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.
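The "wave-propagation framework of holography" referenced above is what allows a single hologram to be numerically refocused to any depth in the sample volume; the standard tool for this is the angular spectrum method, in which the field's 2D Fourier transform is multiplied by a free-space transfer function. Below is a minimal NumPy sketch of that refocusing step; the grid size, wavelength, and pixel pitch are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a square complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)                 # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent components dropped
    H = np.exp(1j * kz * z)                         # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a point-like source defocused by 100 µm and numerically refocused back.
n = 64
field = np.zeros((n, n), dtype=complex)
field[n // 2, n // 2] = 1.0
wl, dx, z = 532e-9, 1e-6, 100e-6
defocused = angular_spectrum_propagate(field, wl, dx, z)
refocused = angular_spectrum_propagate(defocused, wl, dx, -z)
```

Because the transfer function is a pure phase factor for propagating waves, the operation is reversible: propagating by `-z` recovers the in-focus field, which is exactly how a single snapshot yields reconstructions at multiple depths.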
Funding: The authors acknowledge the funding of the National Science Foundation (USA).
Abstract: An invasive biopsy followed by histological staining is the benchmark for pathological diagnosis of skin tumors. The process is cumbersome and time-consuming, often leading to unnecessary biopsies and scars. Emerging noninvasive optical technologies such as reflectance confocal microscopy (RCM) can provide label-free, cellular-level-resolution, in vivo images of skin without performing a biopsy. Although RCM is a useful diagnostic tool, it requires specialized training because the acquired images are grayscale, lack nuclear features, and are difficult to correlate with tissue pathology. Here, we present a deep learning-based framework that uses a convolutional neural network to rapidly transform in vivo RCM images of unstained skin into virtually stained, hematoxylin-and-eosin-like images with microscopic resolution, enabling visualization of the epidermis, dermal-epidermal junction, and superficial dermis layers. The network was trained under an adversarial learning scheme, which takes ex vivo RCM images of excised unstained/label-free tissue as inputs and uses microscopic images of the same tissue labeled with acetic acid nuclear contrast staining as the ground truth. We show that this trained neural network can rapidly perform virtual histology of in vivo, label-free RCM images of normal skin structure, basal cell carcinoma, and melanocytic nevi with pigmented melanocytes, demonstrating histological features similar to those of traditional histology from the same excised tissue. This application of deep learning-based virtual staining to noninvasive imaging technologies may permit more rapid diagnoses of malignant skin neoplasms and reduce invasive skin biopsies.
Abstract: Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time-consuming, labour-intensive, expensive, and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a “digital staining matrix”, which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones’ silver stain, and Masson’s trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross-section, which is currently not feasible with standard histochemical staining methods.
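The "digital staining matrix" described above is, at its core, a per-pixel map assigning each region of the tissue section to a stain (or a blend of stains). The following NumPy sketch is a hypothetical illustration of that data structure only, not the paper's network: per-stain renderings are stood in for by flat placeholder colors, and a one-hot staining matrix composes them into one micro-structured multi-stain image.

```python
import numpy as np

# Hypothetical per-stain renderings of the same label-free tissue section
# (in the real framework these would come from the neural network; here
# they are flat placeholder colors).
h, w = 4, 4
stains = {
    "H&E": np.full((h, w, 3), [0.8, 0.4, 0.6]),     # pinkish placeholder
    "Jones": np.full((h, w, 3), [0.3, 0.3, 0.3]),   # silver-grey placeholder
    "Masson": np.full((h, w, 3), [0.3, 0.5, 0.8]),  # blueish placeholder
}
names = list(stains)

# User-defined micro-structure map: one stain label per pixel.
labels = np.zeros((h, w), dtype=int)
labels[:, 2:] = 1                     # right half: Jones' stain
labels[2:, :2] = 2                    # lower-left quadrant: Masson's trichrome

# Digital staining matrix: one channel per stain, one-hot per pixel.
matrix = np.eye(len(names))[labels]   # shape (h, w, n_stains)

# Compose the multi-stain output as a per-pixel weighted blend.
rendered = np.stack([stains[n] for n in names], axis=-1)   # (h, w, 3, n_stains)
output = np.einsum("hwcn,hwn->hwc", rendered, matrix)      # (h, w, 3)
```

Replacing the one-hot rows with fractional weights (e.g. `[0.5, 0.5, 0]`) is what the blending of existing stains into new digital stains corresponds to in this picture.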