
Extending Field-of-View of Two-Photon Microscopy Using Deep Learning
Abstract  Two-photon microscopy has been widely used in in vivo tumor imaging, functional neuroimaging, brain disease research, and related fields, but its small field-of-view (typically less than 1 mm in diameter) limits further applications. Although special optical designs or adaptive optics can effectively enlarge the field-of-view, the complex optical path design, high component cost, and cumbersome operating procedures have limited the adoption of these techniques. We propose a new approach that uses deep learning in place of adaptive optics to extend the field-of-view of two-photon imaging, realizing large-field-of-view two-photon imaging at low cost (no special objective or phase-compensation device) and with simple operation. An nBRAnet network framework suited to extending the two-photon imaging field-of-view in optical microscopy is designed; to make better use of feature-map information, residual modules and a spatial attention mechanism are introduced, and data normalization is removed to retain image contrast information. Experimental results show that the proposed deep learning method can effectively replace adaptive optics, enhancing fine structural features in the extended field-of-view and restoring its imaging resolution and signal-to-noise ratio; the two-photon imaging field-of-view is extended to 3.46 mm in diameter, with a peak signal-to-noise ratio above 27 dB. With its low cost, simple operation, and marked image-enhancement effect, the method promises an economical and practical solution for cross-regional or whole-brain imaging.

Objective  Two-photon microscopy (TPM) imaging has been widely used in many fields, such as in vivo tumor imaging, neuroimaging, and brain disease research. However, the small field-of-view (FOV) of two-photon imaging (typically within a diameter of 1 mm) limits its further application. Although the FOV can be extended through adaptive optics, the complex optical paths, additional device costs, and cumbersome operating procedures limit its promotion. In this study, we propose using deep learning instead of adaptive optics to expand the FOV of two-photon imaging. A large TPM FOV can be realized without additional hardware (such as a special objective lens or a phase compensation device). In addition, a BN-free attention activation residual U-Net (nBRAnet) network framework is designed for this imaging method, which can efficiently correct aberrations without requiring wavefront detection.

Methods  Commercially available objectives have a nominal imaging FOV calibrated by the manufacturer. Within the nominal FOV, the objective lens exhibits negligible aberrations; beyond it, aberrations increase dramatically, so the imaging FOV of the objective is limited to its nominal FOV. In this study, we improved the imaging quality outside the nominal region by combining adaptive optics (AO) and deep learning. Aberrated and AO-corrected images were collected outside the nominal FOV, yielding a paired dataset of uncorrected and AO-corrected images. A supervised neural network was then trained with the aberrated images as the input and the AO-corrected images as the target. After training, images collected from regions outside the nominal FOV can be fed directly into the network to produce aberration-corrected images, so the imaging system can be used without AO hardware.
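The training procedure described in Methods is standard paired image-to-image supervised learning: aberrated patches from outside the nominal FOV are the input, and the matching AO-corrected patches are the target. Below is a minimal sketch of that setup, assuming PyTorch; the names (PairedFovDataset, train), the L1 loss, and the hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the paired supervised training described in Methods.
# Input: aberrated image patches; target: AO-corrected patches of the same region.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class PairedFovDataset(Dataset):
    """Pairs of (aberrated, AO-corrected) patches; no normalization is applied,
    since the paper removes it to retain image contrast information."""
    def __init__(self, aberrated, corrected):
        self.aberrated = aberrated      # tensor of shape [N, 1, H, W]
        self.corrected = corrected      # tensor of shape [N, 1, H, W]

    def __len__(self):
        return self.aberrated.shape[0]

    def __getitem__(self, idx):
        return self.aberrated[idx], self.corrected[idx]

def train(model, dataset, epochs=100, lr=1e-4, device="cuda"):
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()             # loss choice is an assumption
    model.to(device).train()
    for _ in range(epochs):
        for aberrated, corrected in loader:
            aberrated = aberrated.to(device)
            corrected = corrected.to(device)
            optimizer.zero_grad()
            loss = criterion(model(aberrated), corrected)
            loss.backward()
            optimizer.step()
    return model
```

After training, an aberrated frame from outside the nominal FOV is simply passed through the network to obtain the corrected output, so no AO hardware is needed at imaging time.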
Results and Discussions  The experimental tests include the imaging of samples such as 1 μm diameter fluorescent beads and Thy1-GFP and CX3CR1-GFP mouse brain slices, together with the corresponding network outputs. The high peak signal-to-noise ratio (PSNR) values between the test outputs and the ground truth demonstrate the feasibility of extending the FOV of TPM imaging using deep learning. The intensity profiles along horizontal lines through the nBRAnet output images and the ground truth are also compared in detail (Figs. 3, 4, and 5): extended-FOV regions of different samples are selected at random for analysis, and a high degree of coincidence is observed in the intensity comparisons. The results show that, after applying the network, both the resolution and the fluorescence intensity are restored to a nearly aberration-free level, close to the result obtained after correction with AO hardware. To demonstrate the advantages of the proposed network framework, the traditional U-Net structure and the very deep super-resolution (VDSR) model are compared with ours. When the same training dataset is used to train the different models, the results of the VDSR model contain a considerable amount of noise, whereas the results of the U-Net network lose some details (Fig. 6). The high PSNR values clearly demonstrate the strength of our nBRAnet network framework (Table 3).

Conclusions  This study provides a novel method to effectively extend the FOV of TPM imaging by designing an nBRAnet network framework. In other words, deep learning is used to enhance the acquired images and expand the nominal FOV of commercial objectives. The experimental results show that images from the extended FOV can be restored to their AO-corrected versions using the trained network; that is, deep learning can be used instead of AO hardware to expand the FOV of commercially available objectives, which simplifies operation and reduces system cost. The extended FOV obtained using deep learning can be employed for cross-regional or whole-brain imaging.
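The abstract describes the nBRAnet building blocks only at a high level: residual modules, a spatial attention mechanism, and no normalization layers (to preserve contrast). The sketch below illustrates one plausible such block, assuming PyTorch; the CBAM-style spatial attention and the exact layer layout are assumptions for illustration, not the published architecture.

```python
# Sketch of a BN-free residual block followed by spatial attention,
# in the spirit of the nBRAnet description (details assumed).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Reweights each pixel using a map built from channel-wise mean and max."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn

class BNFreeResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection and no normalization layers."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attention = SpatialAttention()

    def forward(self, x):
        return self.attention(x + self.body(x))
```

In the paper such blocks sit inside a U-Net-style encoder-decoder, and restoration quality is reported as the peak signal-to-noise ratio, PSNR = 10·log10(MAX²/MSE), which exceeds 27 dB over the extended FOV.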
Authors  Li Chijian; Yao Jing; Gao Yufeng; Lai Puxiang; He Yuezhi; Qi Sumin; Zheng Wei (School of Cyber Science and Engineering, Qufu Normal University, Jining 273100, Shandong, China; Research Center for Biomedical Optics and Molecular Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, China; Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong SAR 999077, China; Shenzhen Research Institute, The Hong Kong Polytechnic University, Shenzhen 518055, Guangdong, China)
Source  Chinese Journal of Lasers (《中国激光》), indexed in EI, CAS, CSCD, and Peking University Core, 2023, No. 9, pp. 72-81 (10 pages)
Funding  National Natural Science Foundation of China (81927803, 81930048); Hong Kong Research Grants Council (RGC) (15217721); Shenzhen Science and Technology Program Basic Research Projects (RCJC20200714114433058, ZDSY20130401165820357).
Keywords  microscopy; deep learning; adaptive optics; large field-of-view; two-photon microscopy