Journal Articles
15 articles found
Analog Optical Computing for Artificial Intelligence (Cited by: 4)
1
Authors: Jiamin Wu, Xing Lin, Yuchen Guo, Junwei Liu, Lu Fang, Shuming Jiao, Qionghai Dai. Engineering, SCIE, EI, 2022, No. 3, pp. 133-145 (13 pages)
The rapid development of artificial intelligence (AI) facilitates various applications from all areas but also poses great challenges in its hardware implementation in terms of speed and energy because of the explosive growth of data. Optical computing provides a distinctive perspective to address this bottleneck by harnessing the unique properties of photons, including broad bandwidth, low latency, and high energy efficiency. In this review, we introduce the latest developments of optical computing for different AI models, including feedforward neural networks, reservoir computing, and spiking neural networks (SNNs). Recent progress in integrated photonic devices, combined with the rise of AI, provides a great opportunity for the renaissance of optical computing in practical applications. This effort requires multidisciplinary efforts from a broad community. This review provides an overview of the state-of-the-art accomplishments in recent years, discusses the availability of current technologies, and points out various remaining challenges in different aspects to push the frontier. We anticipate that the era of large-scale integrated photonics processors will soon arrive for practical AI applications in the form of hybrid optoelectronic frameworks.
Keywords: Artificial intelligence, Optical computing, Opto-electronic framework, Neural network, Neuromorphic computing, Reservoir computing, Photonics processor
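The review above surveys optical implementations of feedforward networks, reservoir computing, and SNNs. As a reader aid only (not code from the paper, with all sizes and parameter names invented here), a minimal electronic echo-state-network sketch shows the structure that optical reservoirs emulate: a fixed random recurrent medium plus a trained linear readout.

```python
import numpy as np

# Minimal echo state network (reservoir computing) sketch -- illustrative only,
# not code from the reviewed paper. An optical reservoir replaces the random
# recurrent matrix W with a fixed physical medium; only the readout is trained.
rng = np.random.default_rng(0)
n_in, n_res, n_out, T = 1, 200, 1, 1000

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 for the echo state property

u = np.sin(np.linspace(0, 20 * np.pi, T)).reshape(T, n_in)   # toy input signal
y_target = np.roll(u, -1, axis=0)                            # task: predict the next sample

# Drive the reservoir and collect its states
x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Train only the linear readout (ridge regression)
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y_target)
y_pred = states @ W_out
print("readout MSE:", np.mean((y_pred - y_target) ** 2))
```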
Memory Imaging (记忆成像)
2
Authors: 方璐, 季梦奇, 袁肖赟, 贺敬, 张嘉凝, 朱胤恒, 郑添, 刘乐遥, 王滨, 戴琼海. Engineering, SCIE, EI, CAS, CSCD, 2023, No. 6, pp. 101-109, M0005 (10 pages)
Perceiving and understanding large-scale dynamic scenes require high-performance imaging systems. Conventional imaging systems pursue higher performance simply by stitching cameras to increase pixel resolution, at the cost of bulky systems. Moreover, they strictly follow a feedforward path: their pixel-level sensing is independent of semantic understanding. In contrast, the human visual system benefits from both feedforward and feedback pathways: the feedforward pathway extracts object representations (termed memory engrams) from visual inputs, while in the feedback pathway the relevant engrams are reactivated to generate hypotheses about the objects. Inspired by this, we propose a dual-pathway imaging mechanism, termed engram-driven videography. We start from a holistic representation of the abstract scene, which is bidirectionally associated with local details and driven by instance-level engrams. Technically, the whole system works by alternating between excitation-inhibition and association states. In the former state, pixel-level details are dynamically integrated or suppressed to strengthen the instance-level engrams; in the association state, spatially and temporally consistent content is synthesized, driven by the engrams, to serve as the imaging of future scenes with excellent videography quality. Extensive simulation and experimental results demonstrate that the system fundamentally changes the conventional videography paradigm and shows great potential for videography of large-scale scenes with multiple objects.
Keywords: human visual system, pixel resolution, imaging system, pixel-level, bidirectional association, dynamic scenes, feedforward, imaging mechanism
Deep coded exposure: end-to-end co-optimization of flutter shutter and deblurring processing for general motion blur removal
3
Authors: ZHIHONG ZHANG, KAIMING DONG, JINLI SUO, QIONGHAI DAI. Photonics Research, SCIE, EI, CAS, CSCD, 2023, No. 10, pp. 1678-1686 (9 pages)
Coded exposure photography is a promising computational imaging technique capable of addressing motion blur much better than a conventional camera, via tailoring invertible blur kernels. However, existing methods suffer from restrictive assumptions, complicated preprocessing, and inferior performance. To address these issues, we propose an end-to-end framework to handle general motion blurs with a unified deep neural network, and optimize the shutter's encoding pattern together with the deblurring processing to achieve high-quality sharp images. The framework incorporates a learnable flutter shutter sequence to capture coded exposure snapshots and a learning-based deblurring network to restore sharp images from the blurry inputs. By co-optimizing the encoding and deblurring modules jointly, our approach avoids exhaustively searching for encoding sequences and achieves optimal overall deblurring performance. Compared with existing coded-exposure-based motion deblurring methods, the proposed framework eliminates tedious preprocessing steps such as foreground segmentation and blur kernel estimation, and extends coded exposure deblurring to more general blind and nonuniform cases. Both simulation and real-data experiments demonstrate the superior performance and flexibility of the proposed method.
Keywords: motion, optimization, DEEP
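The co-optimization described in the abstract can be pictured as a differentiable exposure code feeding a deblurring network, trained with a single loss. The sketch below is a hedged toy illustration, not the authors' architecture: the shutter code is relaxed with a sigmoid for gradient flow, and the tiny convolutional module stands in for a real restoration network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodedExposureDeblur(nn.Module):
    """Toy sketch of jointly learning a flutter-shutter code and a deblurring net.
    Illustrative only; layer sizes and names are invented, not from the paper."""
    def __init__(self, code_len=32):
        super().__init__()
        self.code_logits = nn.Parameter(torch.randn(code_len))   # learnable shutter pattern
        self.deblur = nn.Sequential(                             # tiny stand-in for a real U-Net
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, frames):                  # frames: (B, T, H, W) sharp sub-frames
        code = torch.sigmoid(self.code_logits)  # relax the binary code to (0, 1) for gradients
        blurry = (frames * code[None, :, None, None]).sum(dim=1, keepdim=True) / code.sum()
        return self.deblur(blurry)

model = CodedExposureDeblur()
frames = torch.rand(2, 32, 64, 64)              # synthetic high-speed sub-frames
sharp_ref = frames[:, 16:17]                    # mid-exposure frame as a toy target
loss = F.l1_loss(model(frames), sharp_ref)
loss.backward()                                 # gradients flow to both the code and the deblur net
```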
Photonic neuromorphic architecture for tens-of-task lifelong learning
4
Authors: Yuan Cheng, Jianing Zhang, Tiankuang Zhou, Yuyan Wang, Zhihao Xu, Xiaoyun Yuan, Lu Fang. Light: Science & Applications, SCIE, EI, CSCD, 2024, No. 3, pp. 519-530 (12 pages)
Scalable, high-capacity, and low-power computing architecture is the primary assurance for increasingly manifold and large-scale machine learning tasks. Traditional electronic artificial agents built on conventional power-hungry processors have faced the issues of energy and scaling walls, hindering them from sustainable performance improvement and iterative multi-task learning. Turning to light as another modality, photonic computing has been progressively applied in high-efficiency neuromorphic systems. Here, we introduce a reconfigurable lifelong-learning optical neural network (L2ONN) for highly integrated tens-of-task machine intelligence with elaborated algorithm-hardware co-design. Benefiting from the inherent sparsity and parallelism in massive photonic connections, L2ONN learns each single task by adaptively activating sparse photonic neuron connections in the coherent light field, while incrementally acquiring expertise on various tasks by gradually enlarging the activation. The multi-task optical features are processed in parallel by multi-spectrum representations allocated to different wavelengths. Extensive evaluations on free-space and on-chip architectures confirm that, for the first time, L2ONN avoids the catastrophic forgetting issue of photonic computing, owning versatile skills on challenging tens of tasks (vision classification, voice recognition, medical diagnosis, etc.) with a single model. In particular, L2ONN achieves more than an order of magnitude higher efficiency than representative electronic artificial neural networks, and 14× larger capacity than existing optical neural networks, while maintaining competitive performance on each individual task. The proposed photonic neuromorphic architecture points to a new form of lifelong learning scheme, permitting terminal/edge AI systems with light-speed efficiency and unprecedented scalability.
Keywords: LIFE, NEURAL, VERSATILE
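A minimal sketch of the task-incremental idea described above (learning new tasks by activating additional sparse connections while keeping earlier ones fixed), assuming invented sizes and a purely electronic stand-in for the photonic connections; it is not the L2ONN implementation.

```python
import numpy as np

# Toy sketch of task-incremental learning via growing sparse activation masks,
# loosely mirroring the idea of activating more connections per task.
rng = np.random.default_rng(1)
n_neurons = 1000
shared_weights = rng.standard_normal(n_neurons)   # stand-in for photonic connection strengths

task_masks = {}
active = np.zeros(n_neurons, dtype=bool)
for task_id in range(5):
    # Activate a new sparse subset of currently unused connections for this task
    free = np.flatnonzero(~active)
    newly_active = rng.choice(free, size=50, replace=False)
    active[newly_active] = True
    task_masks[task_id] = active.copy()            # each task sees all connections grown so far
    print(f"task {task_id}: {active.sum()} active connections")

def forward(x, task_id):
    """Use only connections activated up to this task; earlier ones are reused, not overwritten."""
    w = np.where(task_masks[task_id], shared_weights, 0.0)
    return np.tanh(x * w)                          # placeholder element-wise response
```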
Deep learning-based optical aberration estimation enables offline digital adaptive optics and super-resolution imaging
5
Authors: CHANG QIAO, HAOYU CHEN, RUN WANG, TAO JIANG, YUWANG WANG, DONG LI. Photonics Research, SCIE, EI, CAS, CSCD, 2024, No. 3, pp. 474-484 (11 pages)
Optical aberrations degrade the performance of fluorescence microscopy. Conventional adaptive optics (AO) leverages specific devices, such as the Shack–Hartmann wavefront sensor and deformable mirror, to measure and correct optical aberrations. However, conventional AO requires either additional hardware or a more complicated imaging procedure, resulting in higher cost or a lower acquisition speed. In this study, we proposed a novel space-frequency encoding network (SFE-Net) that can directly estimate the aberrated point spread functions (PSFs) from biological images, enabling fast optical aberration estimation with high accuracy without engaging extra optics and image acquisition. We showed that with the estimated PSFs, the optical aberration can be computationally removed by the deconvolution algorithm. Furthermore, to fully exploit the benefits of SFE-Net, we incorporated the estimated PSF with neural network architecture design to devise an aberration-aware deep-learning super-resolution model, dubbed SFT-DFCAN. We demonstrated that the combination of SFE-Net and SFT-DFCAN enables instant digital AO and optical aberration-aware super-resolution reconstruction for live-cell imaging.
Keywords: OPTICS, RESOLUTION, enable
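The abstract states that, once the PSF is estimated, the aberration can be removed computationally by deconvolution. A standard Wiener deconvolution sketch illustrates that step; the Gaussian PSF and noise-to-signal ratio below are synthetic stand-ins for the SFE-Net estimate, not the paper's exact algorithm.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Generic Wiener deconvolution: a stand-in for the 'computationally removed
    by deconvolution' step; not the paper's exact algorithm."""
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    filt = np.conj(otf) / (np.abs(otf) ** 2 + nsr)      # Wiener filter with a noise-to-signal ratio
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

# Synthetic aberrated PSF (Gaussian stand-in for the network's estimate)
yy, xx = np.mgrid[-15:16, -15:16]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()

image = np.random.rand(256, 256)          # placeholder for an aberrated micrograph
restored = wiener_deconvolve(image, psf)
```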
A multichannel optical computing architecture for advanced machine vision (Cited by: 2)
6
Authors: ZHIHAO XU, XIAOYUN YUAN, TIANKUANG ZHOU, LU FANG. Light: Science & Applications, SCIE, EI, CAS, CSCD, 2022, No. 9, pp. 2235-2247 (13 pages)
Endowed with superior computing speed and energy efficiency, optical neural networks (ONNs) have attracted ever-growing attention in recent years. Existing optical computing architectures are mainly single-channel, due to the lack of advanced optical connection and interaction operators, solving simple tasks such as hand-written digit classification, saliency detection, etc. The limited computing capacity and scalability of single-channel ONNs restrict the optical implementation of advanced machine vision. Herein, we develop Monet: a multichannel optical neural network architecture for universal multiple-input multiple-channel optical computing, based on a novel projection-interference-prediction framework where the inter- and intra-channel connections are mapped to optical interference and diffraction. In Monet, optical interference patterns are generated by projecting and interfering the multichannel inputs in a shared domain. These patterns, encoding the correspondences together with feature embeddings, are iteratively produced through the projection-interference process to predict the final output optically. For the first time, Monet validates that multichannel processing properties can be optically implemented with high efficiency, enabling real-world intelligent multichannel-processing tasks, including 3D/motion detection, to be solved via optical computing. Extensive experiments on different scenarios demonstrate the effectiveness of Monet in handling advanced machine vision tasks with accuracy comparable to electronic counterparts, yet achieving a ten-fold improvement in computing efficiency. For intelligent computing, the trend toward dealing with real-world advanced tasks is irreversible. By breaking the capacity and scalability limitations of single-channel ONNs and further exploring the multichannel processing potential of wave optics, we anticipate that the proposed technique will accelerate the development of more powerful optical AI as critical support for modern advanced machine vision.
Keywords: COMPUTING, MULTICHANNEL, PROJECTION
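To make the projection-interference idea concrete, the toy snippet below interferes two phase-encoded channel fields on a shared plane and records the intensity, whose cosine term mixes the two channels. This is generic coherent-optics arithmetic for illustration, not Monet's actual pipeline.

```python
import numpy as np

# Minimal illustration of encoding inter-channel correspondence by interference:
# two coherent fields are projected to a shared plane and their interference
# intensity is detected. Illustrative physics only; not Monet's implementation.
rng = np.random.default_rng(0)
H = W = 128
channel_a = rng.random((H, W))                    # two input feature channels
channel_b = rng.random((H, W))

E_a = np.exp(1j * 2 * np.pi * channel_a)          # encode each channel as a phase-only field
E_b = np.exp(1j * 2 * np.pi * channel_b)

interference = np.abs(E_a + E_b) ** 2             # detected intensity mixes the two channels
# interference = 2 + 2*cos(phase_a - phase_b): the pattern encodes channel correspondence
```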
Mirror-enhanced scanning light-field microscopy for long-term high-speed 3D imaging with isotropic resolution (Cited by: 1)
7
Authors: Bo Xiong, Tianyi Zhu, Yuhan Xiang, Xiaopeng Li, Jinqiang Yu, Zheng Jiang, Yihan Niu, Dong Jiang, Xu Zhang, Lu Fang, Jiamin Wu, Qionghai Dai. Light: Science & Applications, SCIE, EI, CAS, CSCD, 2021, No. 12, pp. 2369-2379 (11 pages)
Various biological behaviors can only be observed in 3D at high speed over the long term with low phototoxicity. Light-field microscopy (LFM) provides an elegant, compact solution to record 3D information simultaneously in a tomographic manner, which facilitates high photon efficiency. However, LFM still suffers from the missing-cone problem, leading to degraded axial resolution and ringing effects after deconvolution. Here, we propose mirror-enhanced scanning LFM (MiSLFM) to achieve long-term high-speed 3D imaging at super-resolved axial resolution with a single objective, by fully exploiting the extended depth of field of LFM with a tilted mirror placed below the samples. To establish the unique capabilities of MiSLFM, we performed extensive experiments in which we observed various organelle interactions and intercellular interactions in different types of photosensitive cells under extremely low-light conditions. Moreover, we demonstrated that the superior axial resolution facilitates more robust blood cell tracking in zebrafish larvae at high speed.
Keywords: RESOLUTION, MIRROR, FIELD
Unsupervised content-preserving transformation for optical microscopy (Cited by: 1)
8
Authors: Xinyang Li, Guoxun Zhang, Hui Qiao, Feng Bao, Yue Deng, Jiamin Wu, Yangfan He, Jingping Yun, Xing Lin, Hao Xie, Haoqian Wang, Qionghai Dai. Light: Science & Applications, SCIE, EI, CAS, CSCD, 2021, No. 3, pp. 390-400 (11 pages)
The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
Keywords: TRANSFORMATION, PRESERVING, IMAGE
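A hedged sketch of what an unsupervised, cycle-consistent translation objective with an extra saliency/content term can look like; the generators, discriminator, saliency operator, and loss weights below are placeholders, and the exact UTOM constraint may differ.

```python
import torch
import torch.nn.functional as F

def utom_style_loss(x_a, G_ab, G_ba, D_b, saliency, lam_cyc=10.0, lam_sal=1.0):
    """Hedged sketch of an unsupervised, cycle-consistent translation objective with
    an added content term on saliency maps. G_ab/G_ba are generators, D_b a
    discriminator, saliency() any fixed saliency/threshold operator -- all
    placeholders, not the paper's exact formulation."""
    fake_b = G_ab(x_a)                                        # translate domain A -> B
    rec_a = G_ba(fake_b)                                      # translate back B -> A
    logits = D_b(fake_b)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    cyc = F.l1_loss(rec_a, x_a)                               # cycle consistency
    sal = F.l1_loss(saliency(fake_b), saliency(x_a))          # keep salient content aligned
    return adv + lam_cyc * cyc + lam_sal * sal
```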
Superresolution structured illumination microscopy reconstruction algorithms: a review
9
Authors: Xin Chen, Suyi Zhong, Yiwei Hou, Ruijie Cao, Wenyi Wang, Dong Li, Qionghai Dai, Donghyun Kim, Peng Xi. Light: Science & Applications, SCIE, EI, CSCD, 2023, No. 8, pp. 1510-1543 (34 pages)
Structured illumination microscopy (SIM) has become the standard for next-generation wide-field microscopy, offering ultrahigh imaging speed, superresolution, a large field of view, and long-term imaging. Over the past decade, SIM hardware and software have flourished, leading to successful applications in various biological questions. However, unlocking the full potential of SIM system hardware requires the development of advanced reconstruction algorithms. Here, we introduce the basic theory of two SIM algorithms, namely, optical sectioning SIM (OS-SIM) and superresolution SIM (SR-SIM), and summarize their implementation modalities. We then provide a brief overview of existing OS-SIM processing algorithms and review the development of SR-SIM reconstruction algorithms, focusing primarily on 2D-SIM, 3D-SIM, and blind-SIM. To showcase the state-of-the-art development of SIM systems and assist users in selecting a commercial SIM system for a specific application, we compare the features of representative off-the-shelf SIM systems. Finally, we provide perspectives on potential future developments of SIM.
Keywords: ILLUMINATION, HARDWARE, offering
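As textbook background for the SR-SIM theory the review introduces (symbols defined here, not taken from the review), sinusoidal illumination mixes shifted copies of the sample spectrum into the detection passband:

```latex
% Sinusoidal illumination with pattern frequency k_0, modulation depth m, and phase \phi:
I(\mathbf{r}) = I_0\!\left[1 + m\cos(2\pi\,\mathbf{k}_0\!\cdot\!\mathbf{r} + \phi)\right]

% The detected image is the illuminated sample S(r) blurred by the PSF h:
D(\mathbf{r}) = \left[S(\mathbf{r})\,I(\mathbf{r})\right] \otimes h(\mathbf{r})

% In frequency space, the measurement mixes three copies of the sample spectrum,
% two shifted by \pm k_0, carrying otherwise inaccessible high frequencies into the OTF support:
\tilde{D}(\mathbf{k}) = I_0\,\tilde{h}(\mathbf{k})\Big[\tilde{S}(\mathbf{k})
  + \tfrac{m}{2}e^{+i\phi}\,\tilde{S}(\mathbf{k}-\mathbf{k}_0)
  + \tfrac{m}{2}e^{-i\phi}\,\tilde{S}(\mathbf{k}+\mathbf{k}_0)\Big]
```

Acquiring several phases and orientations lets the three components be separated and reassembled at their true frequencies, which is the core of SR-SIM reconstruction.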
In situ optical backpropagation training of diffractive optical neural networks (Cited by: 11)
10
Authors: TIANKUANG ZHOU, LU FANG, TAO YAN, JIAMIN WU, YIPENG LI, JINGTAO FAN, HUAQIANG WU, XING LIN, QIONGHAI DAI. Photonics Research, SCIE, EI, CSCD, 2020, No. 6, pp. 940-953 (14 pages)
Training an artificial neural network with backpropagation algorithms to perform advanced machine learning tasks requires an extensive computational process. This paper proposes to implement the backpropagation algorithm optically for in situ training of both linear and nonlinear diffractive optical neural networks, which enables the acceleration of training speed and improvement in energy efficiency on core computing modules. We demonstrate that the gradient of a loss function with respect to the weights of diffractive layers can be accurately calculated by measuring the forward and backward propagated optical fields based on light reciprocity and phase conjugation principles. The diffractive modulation weights are updated by programming a high-speed spatial light modulator to minimize the error between prediction and target output and perform inference tasks at the speed of light. We numerically validate the effectiveness of our approach on simulated networks for various applications. The proposed in situ optical learning architecture achieves accuracy comparable to in silico training with an electronic computer on the tasks of object classification and matrix-vector multiplication, which further allows the diffractive optical neural network to adapt to system imperfections. Also, the self-adaptive property of our approach facilitates the novel application of the network for all-optical imaging through scattering media. The proposed approach paves the way for robust implementation of large-scale diffractive neural networks to perform distinctive tasks all-optically.
Keywords: process, WEIGHTS, BACKWARD
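The gradient measurement described above follows the general adjoint/reciprocity reasoning for phase-only layers. Written in a common textbook form (not copied from the paper, and correct only up to a sign and normalization that depend on plane conventions):

```latex
% Adjoint-method relation for a phase-only diffractive layer (illustrative notation).
% E_f(r): forward field arriving at the layer; E_b(r): error field back-propagated
% from the output plane (by reciprocity, launched as the phase-conjugated residual);
% \varphi(r): the layer's phase modulation, with transmission t(r) = e^{i\varphi(r)}.
\frac{\partial \mathcal{L}}{\partial \varphi(\mathbf{r})}
  \;\propto\; \operatorname{Im}\!\left[ E_b(\mathbf{r})\, e^{i\varphi(\mathbf{r})}\, E_f(\mathbf{r}) \right]
```

Because both fields are directly measurable at the layer plane, the gradient can be obtained from two optical measurements per update, which is what enables the in situ training described in the abstract.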
In situ optical backpropagation training of diffractive optical neural networks: publisher's note (Cited by: 2)
11
Authors: TIANKUANG ZHOU, LU FANG, TAO YAN, JIAMIN WU, YIPENG LI, JINGTAO FAN, HUAQIANG WU, XING LIN, QIONGHAI DAI. Photonics Research, SCIE, EI, CSCD, 2020, No. 8, p. 1323 (1 page)
This publisher's note corrects the authors' affiliations in Photon. Res. 8, 940 (2020).
Keywords: OPTICAL, networks, NEURAL
Light-field micro-endoscopy using a fiber bundle: a snapshot 3D epi-fluorescence endoscope (Cited by: 1)
12
Authors: YOU ZHOU, BO XIONG, WEIZHI SONG, XU ZHANG, GUOAN ZHENG, QIONGHAI DAI, XUN CAO. Photonics Research, SCIE, EI, CAS, CSCD, 2022, No. 9, pp. 2247-2260 (14 pages)
Micro-endoscopes are widely used for detecting and visualizing hard-to-reach areas of the human body and for in vivo observation of animals. A micro-endoscope that can realize 3D imaging at the camera frame rate could benefit various clinical and biological applications. In this work, we report the development of a compact light-field micro-endoscope (LFME) that can obtain snapshot 3D fluorescence imaging, by jointly using a single-mode fiber bundle and a small-size light-field configuration. To demonstrate the real imaging performance of our method, we put a resolution chart at different z positions and capture the z-stack images successively for reconstruction, achieving a 333-μm-diameter field of view, 24 μm optimal depth of field, and up to 3.91 μm spatial resolution near the focal plane. We also test our method on a human skin tissue section and HeLa cells. Our LFME prototype provides epi-fluorescence imaging ability with a relatively small (2-mm-diameter) imaging probe, making it suitable for in vivo detection of brain activity and gastrointestinal diseases of animals.
Keywords: ENDOSCOPE, BUNDLE, fiber
Ten-mega-pixel snapshot compressive imaging with a hybrid coded aperture (Cited by: 1)
13
Authors: ZHIHONG ZHANG, CHAO DENG, YANG LIU, XIN YUAN, JINLI SUO, QIONGHAI DAI. Photonics Research, SCIE, EI, CAS, CSCD, 2021, No. 11, pp. 2277-2287 (11 pages)
High-resolution images are widely used in our everyday life; however, high-speed video capture is more challenging due to the low frame rate of cameras working in high-resolution mode. The main bottleneck lies in the low throughput of existing imaging systems. Toward this end, snapshot compressive imaging (SCI) was proposed as a promising solution to improve the throughput of imaging systems by compressive sampling and computational reconstruction. During acquisition, multiple high-speed images are encoded and collapsed into a single measurement. Then, algorithms are employed to retrieve the video frames from the coded snapshot. Recently developed plug-and-play algorithms have made SCI reconstruction possible in large-scale problems. However, the lack of high-resolution encoding systems still precludes SCI's wide application. Thus, in this paper, we build, to the best of our knowledge, a novel hybrid coded aperture snapshot compressive imaging (HCA-SCI) system by incorporating a dynamic liquid crystal on silicon and a high-resolution lithography mask. We further implement a PnP reconstruction algorithm with cascaded denoisers for high-quality reconstruction. Based on the proposed HCA-SCI system and algorithm, we obtain a 10-mega-pixel SCI system to capture high-speed scenes, leading to a high throughput of 4.6×10^9 voxels per second. Both simulation and real-data experiments verify the feasibility and performance of our proposed HCA-SCI scheme.
Keywords: LITHOGRAPHY, RESOLUTION, EVERYDAY
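The video-SCI acquisition summarized above (per-frame coding masks, one summed snapshot) has a compact forward model; the toy sketch below uses random binary masks as stand-ins for the hybrid LCoS-plus-lithography aperture and omits the PnP reconstruction.

```python
import numpy as np

# Toy video-SCI forward model: Y = sum_k C_k * X_k (element-wise), one snapshot
# from T coded high-speed frames. The random binary masks are stand-ins for the
# paper's hybrid coded aperture; reconstruction is omitted.
rng = np.random.default_rng(0)
T, H, W = 8, 256, 256
frames = rng.random((T, H, W))                          # unknown high-speed frames X_k
masks = (rng.random((T, H, W)) > 0.5).astype(float)     # per-frame coding patterns C_k

snapshot = (masks * frames).sum(axis=0)                 # single coded measurement Y
# A PnP-style solver would alternate a data-fidelity step using (snapshot, masks)
# with an off-the-shelf denoiser to recover the T frames.
```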
Image De-occlusion via Event-enhanced Multi-modal Fusion Hybrid Network
14
Authors: Si-Qi Li, Yue Gao, Qiong-Hai Dai. Machine Intelligence Research, EI, CSCD, 2022, No. 4, pp. 307-318 (12 pages)
Seeing through dense occlusions and reconstructing scene images is an important but challenging task. Traditional frame-based image de-occlusion methods may lead to fatal errors when facing extremely dense occlusions, due to the lack of valid information available from the limited input occluded frames. Event cameras are bio-inspired vision sensors that record the brightness changes at each pixel asynchronously with high temporal resolution. However, synthesizing images solely from event streams is ill-posed, since only the brightness changes are recorded in the event stream and the initial brightness is unknown. In this paper, we propose an event-enhanced multi-modal fusion hybrid network for image de-occlusion, which uses event streams to provide complete scene information and frames to provide color and texture information. An event stream encoder based on the spiking neural network (SNN) is proposed to encode and denoise the event stream efficiently. A comparison loss is proposed to generate clearer results. Experimental results on a large-scale event-based and frame-based image de-occlusion dataset demonstrate that our proposed method achieves state-of-the-art performance.
Keywords: Event camera, multi-modal fusion, image de-occlusion, spiking neural network (SNN), image reconstruction
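The SNN-based event encoder mentioned above can be loosely illustrated by a leaky integrate-and-fire accumulation of events into spike maps; the function below is a generic toy encoding with invented parameters, not the paper's encoder.

```python
import numpy as np

def lif_encode(events, H, W, n_bins=5, decay=0.8, threshold=1.0):
    """Toy leaky integrate-and-fire encoding of an event stream into spike maps.
    events: array of rows (x, y, t, polarity) with t normalized to [0, 1).
    Generic illustration of SNN-style event encoding, not the paper's encoder."""
    membrane = np.zeros((H, W))
    spikes = np.zeros((n_bins, H, W))
    for b in range(n_bins):
        in_bin = events[(events[:, 2] >= b / n_bins) & (events[:, 2] < (b + 1) / n_bins)]
        membrane *= decay                                    # leak between time bins
        for x, y, _, p in in_bin:
            membrane[int(y), int(x)] += 1.0 if p > 0 else -1.0
        fired = np.abs(membrane) >= threshold                # fire where the potential crosses threshold
        spikes[b][fired] = np.sign(membrane[fired])
        membrane[fired] = 0.0                                # reset fired neurons
    return spikes

events = np.array([[10, 20, 0.05, 1], [10, 20, 0.15, 1], [30, 40, 0.55, -1]], dtype=float)
spike_maps = lif_encode(events, H=64, W=64)
```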
CNS Organoid Surpasses Cell-Laden Microgel Assembly to Promote Spinal Cord Injury Repair
15
Authors: Zitian Wang, Haoran Zhao, Xiaowei Tang, Tianyu Meng, Davit Khutsishvili, Bing Xu, Shaohua Ma. Research, EI, CAS, CSCD, 2022, No. 4, pp. 677-692 (16 pages)
The choice of therapeutic agents remains an unsolved issue in the repair of spinal cord injury. In this work, various agents and configurations were investigated and compared for their performance in promoting nerve regeneration, including bead assembly and bulk gel of collagen and Matrigel, under acellular and cell-laden conditions, and cerebral organoid (CO) as the in vitro pre-organized agent. First, with the Matrigel-based agents and the CO transplantations, the recipient animals gained more axon regeneration and higher Basso, Beattie, and Bresnahan (BBB) scores than with the grafted collagen gels. Second, new nerves infiltrated the transplants more uniformly in the bead-assembly form than in the molded chunks. Third, the groups implanted with materials loaded with neural progenitor cells (NPCs) or with COs received more regenerated nerve fibers than their acellular counterparts, suggesting the necessity of transplanting exogenous cells for large trauma (e.g., a 5 mm long spinal cord transection). In addition, the activated microglial cells might benefit neural regeneration after CO transplantation in the recipient animals. The advantage of the organoid may suggest that in vitro maturation of a microtissue complex is necessary before transplantation, and it proposes organoids as premium therapeutic agents for nerve regeneration.
Keywords: neural, gained, IMPLANTATION