Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations; however, it is incapable of modeling high-order correlations among different objects in systems, and thus the graph structure cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, showing that, compared with a traditional data-based method, hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve performance by 52% given the same data. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
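To make the notion of hypergraph computation concrete, the sketch below implements one step of the spectral hypergraph convolution X' = σ(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ) that underlies hypergraph neural networks and is provided by libraries such as DHG. It is a minimal NumPy illustration over a toy incidence matrix of our own choosing, not the DHG API itself.

```python
import numpy as np

# Toy hypergraph: 5 vertices, 3 hyperedges; H[v, e] = 1 if vertex v belongs to hyperedge e.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
w = np.array([1.0, 1.0, 1.0])                  # hyperedge weights

Dv_inv_sqrt = np.diag(1.0 / np.sqrt(H @ w))    # vertex-degree matrix D_v^{-1/2}
De_inv = np.diag(1.0 / H.sum(axis=0))          # hyperedge-degree matrix D_e^{-1}
W = np.diag(w)

# Normalized hypergraph adjacency: D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}
theta_hat = Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt

X = np.random.randn(5, 4)        # vertex features (5 vertices, 4 channels)
Theta = np.random.randn(4, 2)    # learnable projection (4 -> 2 channels)

X_next = np.maximum(theta_hat @ X @ Theta, 0.0)   # one convolution step + ReLU
print(X_next.shape)   # (5, 2)
```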
Electroencephalography (EEG) analysis extracts critical information from brain signals, enabling brain disease diagnosis and providing fundamental support for brain–computer interfaces. However, performing an artificial intelligence analysis of EEG signals with high energy efficiency poses significant challenges for electronic processors on edge computing devices, especially with large neural network models. Herein, we propose an EEG opto-processor based on diffractive photonic computing units (DPUs) to process extracranial and intracranial EEG signals effectively and to detect epileptic seizures. The signals of the EEG channels within a one-second time window are optically encoded as inputs to the constructed diffractive neural networks for classification, which monitor the brain state to identify symptoms of an epileptic seizure. We developed both free-space and integrated DPUs as edge computing systems and demonstrated their application to real-time epileptic seizure detection using benchmark datasets, namely the Children's Hospital Boston (CHB)–Massachusetts Institute of Technology (MIT) extracranial and the Epilepsy-iEEG-Multicenter intracranial EEG datasets, with excellent computing performance. Together with the channel selection mechanism, both numerical evaluations and experimental results validated the sufficiently high classification accuracies of the proposed opto-processors for supporting clinical diagnosis. Our study opens a new research direction for utilizing photonic computing techniques to process large-scale EEG signals and promotes broader applications.
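As background on how a diffractive photonic computing unit processes an encoded input, the snippet below simulates one diffractive layer in the usual way: the complex field is modulated by a trainable phase mask and then propagated to the next plane with the angular-spectrum method. The grid size, wavelength, pixel pitch, and propagation distance are illustrative placeholders rather than parameters of the reported DPUs.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a complex field by `distance` using the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Longitudinal spatial frequency; evanescent components are suppressed.
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Illustrative numbers only: 128x128 grid, 1.55-um wavelength, 10-um pixels, 5-mm gap.
n, wavelength, dx, distance = 128, 1.55e-6, 10e-6, 5e-3
rng = np.random.default_rng(0)

encoded_eeg = rng.random((n, n))                       # stand-in for an optically encoded EEG window
phase_mask = rng.uniform(0, 2 * np.pi, (n, n))         # trainable diffractive layer

field = encoded_eeg * np.exp(1j * phase_mask)          # modulation by the layer
field = angular_spectrum_propagate(field, wavelength, dx, distance)
intensity = np.abs(field) ** 2                         # what a detector plane would measure
print(intensity.shape)
```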
The rapid development of artificial intelligence (AI) facilitates various applications in all areas but also poses great challenges for its hardware implementation in terms of speed and energy because of the explosive growth of data. Optical computing provides a distinctive perspective for addressing this bottleneck by harnessing the unique properties of photons, including broad bandwidth, low latency, and high energy efficiency. In this review, we introduce the latest developments in optical computing for different AI models, including feedforward neural networks, reservoir computing, and spiking neural networks (SNNs). Recent progress in integrated photonic devices, combined with the rise of AI, provides a great opportunity for the renaissance of optical computing in practical applications. This endeavor requires multidisciplinary efforts from a broad community. This review provides an overview of state-of-the-art accomplishments in recent years, discusses the availability of current technologies, and points out various remaining challenges that must be addressed to push the frontier. We anticipate that the era of large-scale integrated photonic processors will soon arrive for practical AI applications in the form of hybrid optoelectronic frameworks.
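Of the three model families named above, reservoir computing may be the least familiar; the sketch below shows its conventional electronic form, an echo state network, in which the recurrent reservoir is fixed and only a linear readout is trained, the very property that makes optical reservoirs attractive. The sizes, spectral radius, and ridge parameter are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out, T = 1, 100, 1, 500

# Fixed random input and reservoir weights; only W_out is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

u = np.sin(np.linspace(0, 20 * np.pi, T)).reshape(T, n_in)   # toy input signal
y_target = np.roll(u, -1, axis=0)                            # task: predict the next sample

# Run the reservoir and collect its states.
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states[t] = x

# Ridge-regression readout (the only trained part).
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y_target)
prediction = states @ W_out
print(np.mean((prediction - y_target) ** 2))
```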
Reviewing the history of the development of artificial intelligence (AI) clearly reveals that brain science has resulted in breakthroughs in AI, such as deep learning. At present, although the developmental trend in AI and its applications has surpassed expectations, an insurmountable gap remains between AI and human intelligence. It is urgent to establish a bridge between brain science and AI research, including a link from brain science to AI, and a connection from knowing the brain to simulating the brain. The first steps toward this goal are to explore the secrets of brain science by studying new brain-imaging technology; to establish a dynamic connection diagram of the brain; and to integrate neuroscience experiments with theory, models, and statistics. Based on these steps, a new generation of AI theory and methods can be studied, and a subversive model and working mode from machine perception and learning to machine thinking and decision-making can be established. This article discusses the opportunities and challenges of adapting brain science to AI.
The quantum dot spectrometer, fabricated by integrating different quantum dots with an image sensor to reconstruct the target spectrum from spectrally coupled measurements, is an emerging and promising hyperspectrometry technology with high resolution and a compact size. The spectral resolution and spectral range of quantum dot spectrometers have been limited by the spectral variety of the available quantum dots and the robustness of the algorithmic reconstruction. Moreover, the spectrometer integration of quantum dots also suffers from inherent photoluminescence emission and poor batch-to-batch repeatability. In this work, we developed nonemissive, in situ fabricated MA_(3)Bi_(2)X_(9) and Cs_(2)SnX_(6) (MA = CH_(3)NH_(3); X = Cl, Br, I) perovskite-quantum-dot-embedded films (PQDFs) with precisely tunable transmittance spectra for quantum dot spectrometer applications. The resulting PQDFs contain in situ fabricated perovskite nanocrystals homogeneously dispersed in a polymeric matrix, giving them advantageous features such as high transmittance efficiency and good batch-to-batch repeatability. By integrating a filter array of 361 kinds of PQDFs with a silicon-based photodetector array, we successfully demonstrated the construction of a perovskite quantum dot spectrometer combined with a compressive-sensing-based total-variation optimization algorithm. A spectral resolution of ~1.6 nm was achieved over the broad 250–1000 nm band. The performance of the perovskite quantum dot spectrometer is well beyond that of human eyes in terms of both spectral range and spectral resolution. This advancement will not only pave the way for using quantum dot spectrometers in practical applications but also significantly impact the development of artificial intelligence products, clinical treatment equipment, scientific instruments, and more.
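To make the reconstruction step concrete: each filter in the array contributes one row of a sensing matrix built from its transmittance spectrum, and the target spectrum is recovered by solving the resulting underdetermined linear system with a regularizer. The sketch below uses a quadratic smoothness (difference-operator) penalty as a simple stand-in for the compressive-sensing total-variation optimization described above; the 361-filter count follows the abstract, while the wavelength grid, transmittances, and target spectrum are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_filters, n_bands = 361, 751            # e.g., 250-1000 nm sampled on a 1-nm grid

A = rng.uniform(0.0, 1.0, (n_filters, n_bands))       # synthetic filter transmittances
true_spectrum = np.exp(-0.5 * ((np.arange(n_bands) - 400) / 30.0) ** 2)  # toy target
y = A @ true_spectrum + 0.01 * rng.standard_normal(n_filters)            # measurements

# Regularized least squares: minimize ||A s - y||^2 + lam * ||D s||^2,
# where D is the first-difference operator (a quadratic surrogate for TV).
D = np.eye(n_bands, k=1)[: n_bands - 1] - np.eye(n_bands)[: n_bands - 1]
lam = 10.0
lhs = A.T @ A + lam * D.T @ D
rhs = A.T @ y
recovered = np.linalg.solve(lhs, rhs)

print(float(np.corrcoef(recovered, true_spectrum)[0, 1]))   # correlation with the ground truth
```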
Structured illumination microscopy (SIM) has become the standard for next-generation wide-field microscopy, offering ultrahigh imaging speed, superresolution, a large field of view, and long-term imaging. Over the past decade, SIM hardware and software have flourished, leading to successful applications to various biological questions. However, unlocking the full potential of SIM system hardware requires the development of advanced reconstruction algorithms. Here, we introduce the basic theory of two SIM algorithms, namely optical sectioning SIM (OS-SIM) and superresolution SIM (SR-SIM), and summarize their implementation modalities. We then provide a brief overview of existing OS-SIM processing algorithms and review the development of SR-SIM reconstruction algorithms, focusing primarily on 2D-SIM, 3D-SIM, and blind-SIM. To showcase the state-of-the-art development of SIM systems and to assist users in selecting a commercial SIM system for a specific application, we compare the features of representative off-the-shelf SIM systems. Finally, we provide perspectives on potential future developments of SIM.
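As a taste of the simpler of the two algorithm families, the snippet below implements the classic three-phase OS-SIM demodulation (the square-law, or root-mean-square, reconstruction commonly attributed to Neil et al.), which rejects out-of-focus background from three raw images whose illumination pattern is shifted by 2π/3 between exposures. The input frames here are random placeholders; real inputs would be the three phase-shifted camera images.

```python
import numpy as np

def os_sim_rms(i1, i2, i3):
    """Classic three-phase optical-sectioning reconstruction (square-law demodulation)."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i1 - i3) ** 2)

# Placeholder frames standing in for raw images at pattern phases 0, 2*pi/3, and 4*pi/3.
rng = np.random.default_rng(2)
i1, i2, i3 = (rng.random((256, 256)) for _ in range(3))

sectioned = os_sim_rms(i1, i2, i3)
widefield = (i1 + i2 + i3) / 3.0       # averaging the phases recovers a conventional image
print(sectioned.shape, widefield.shape)
```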
Array cameras removed the optical limitations of a single camera and paved the way for high-performance imaging via the combination of micro-cameras and computation to fuse multiple aperture images. However, existing solutions use dense arrays of cameras that require laborious calibration and lack flexibility and practicality. Inspired by the principle of cognitive function in the human brain, we develop an unstructured array camera system that adopts a hierarchical modular design, with multiscale hybrid cameras composing the different modules. Intelligent computations are designed to operate collaboratively along both intra- and inter-module pathways. This system can adaptively allocate imagery resources to dramatically reduce the hardware cost, and it possesses unprecedented flexibility, robustness, and versatility. Large scenes of real-world data were acquired to perform human-centric studies for the assessment of human behaviours at the individual level and crowd behaviours at the population level, which require high-resolution, long-term monitoring of dynamic wide-area scenes.
The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation approach to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
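The central technical idea, a saliency constraint added to an unpaired (cycle-consistent) translation objective, can be sketched as an extra loss term that asks a soft intensity-threshold saliency mask to survive the domain translation. The PyTorch fragment below is an illustrative rendering of that idea with arbitrary threshold, temperature, and loss weights; it is not the released UTOM implementation, and the adversarial terms of a full unpaired pipeline are omitted.

```python
import torch
import torch.nn.functional as F

def soft_saliency_mask(img, threshold=0.5, temperature=0.02):
    """Differentiable stand-in for an intensity-threshold saliency mask."""
    return torch.sigmoid((img - threshold) / temperature)

def saliency_constraint_loss(real, fake, threshold=0.5):
    """Penalize changes in the saliency mask caused by the domain translation."""
    return F.l1_loss(soft_saliency_mask(real, threshold),
                     soft_saliency_mask(fake, threshold))

def unpaired_translation_loss(real_a, fake_b, rec_a, lam_cyc=10.0, lam_sal=5.0):
    """Cycle consistency plus the saliency constraint (adversarial terms omitted)."""
    cycle = F.l1_loss(rec_a, real_a)                 # A -> B -> A should return to A
    saliency = saliency_constraint_loss(real_a, fake_b)
    return lam_cyc * cycle + lam_sal * saliency

# Toy usage with random tensors standing in for generator outputs.
real_a = torch.rand(1, 1, 64, 64)
fake_b = torch.rand(1, 1, 64, 64)    # G_AB(real_a) in a full pipeline
rec_a = torch.rand(1, 1, 64, 64)     # G_BA(G_AB(real_a)) in a full pipeline
print(unpaired_translation_loss(real_a, fake_b, rec_a).item())
```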
Light field microscopy (LFM) has been widely used for recording 3D biological dynamics at the camera frame rate. However, LFM suffers from artifact contamination owing to the ill-posedness of the reconstruction problem when solved by naive Richardson-Lucy (RL) deconvolution. Moreover, the performance of LFM drops significantly in low-light conditions because of the absence of sample priors. In this paper, we thoroughly analyze the different kinds of artifacts and present a new LFM technique, termed dictionary LFM (DiLFM), that substantially suppresses various reconstruction artifacts and improves noise robustness with an over-complete dictionary. We demonstrate artifact-suppressed reconstructions in scattering samples such as Drosophila embryos and brains. Furthermore, we show that DiLFM can achieve robust blood cell counting in noisy conditions by imaging blood cell dynamics at 100 Hz, and that it unveils more neurons in whole-brain calcium recordings of zebrafish acquired in vivo with low illumination power.
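For reference, the naive RL deconvolution mentioned above is a simple multiplicative fixed-point iteration, and it is precisely this unregularized update that amplifies artifacts when the inverse problem is ill-posed and the data are noisy. Below is a minimal 2D NumPy/SciPy version with a synthetic Gaussian PSF; the dictionary prior of DiLFM is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(measured, psf, n_iter=30, eps=1e-12):
    """Naive Richardson-Lucy deconvolution (multiplicative update, no prior)."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(measured, measured.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = measured / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Synthetic example: blur a sparse "sample" with a Gaussian PSF and add noise.
rng = np.random.default_rng(3)
sample = np.zeros((128, 128))
sample[rng.integers(0, 128, 30), rng.integers(0, 128, 30)] = 1.0

yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

measured = fftconvolve(sample, psf, mode="same") + 0.001 * rng.standard_normal((128, 128))
restored = richardson_lucy(np.clip(measured, 0, None), psf)
print(restored.shape)
```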
Various biological behaviors can only be observed in 3D, at high speed, over the long term, and with low phototoxicity. Light-field microscopy (LFM) provides an elegant, compact solution that records 3D information simultaneously in a tomographic manner, which facilitates high photon efficiency. However, LFM still suffers from the missing-cone problem, leading to degraded axial resolution and ringing effects after deconvolution. Here, we propose mirror-enhanced scanning LFM (MiSLFM) to achieve long-term, high-speed 3D imaging at super-resolved axial resolution with a single objective, by fully exploiting the extended depth of field of LFM with a tilted mirror placed below the samples. To establish the unique capabilities of MiSLFM, we performed extensive experiments in which we observed various organelle interactions and intercellular interactions in different types of photosensitive cells under extremely low-light conditions. Moreover, we demonstrated that the superior axial resolution facilitates more robust high-speed blood cell tracking in zebrafish larvae.
Micro-endoscopes are widely used for detecting and visualizing hard-to-reach areas of the human body and for in vivo observation of animals. A micro-endoscope that can realize 3D imaging at the camera frame rate could benefit various clinical and biological applications. In this work, we report the development of a compact light-field micro-endoscope (LFME) that can obtain snapshot 3D fluorescence images by jointly using a single-mode fiber bundle and a small-size light-field configuration. To demonstrate the real imaging performance of our method, we put a resolution chart at different z positions and capture the z-stack images successively for reconstruction, achieving a 333-μm-diameter field of view, a 24-μm optimal depth of field, and up to 3.91-μm spatial resolution near the focal plane. We also test our method on a human skin tissue section and HeLa cells. Our LFME prototype provides epi-fluorescence imaging with a relatively small (2-mm-diameter) imaging probe, making it suitable for in vivo detection of brain activity and gastrointestinal diseases in animals.
Silicon-based digital cameras can record visible and near-infrared (NIR) information, in which the full-color visible image (RGB) must be restored by color filter array (CFA) interpolation. In this paper, we propose a unified framework for CFA interpolation and visible/NIR image combination. To obtain a high-quality color image, the traditional color interpolation from raw CFA data is improved at each pixel, constrained by the gradient difference with the corresponding monochromatic NIR image. The experiments indicate the effectiveness of this hybrid scheme for acquiring joint color and NIR information in real time and show that this hybrid process can generate a better color image than treating interpolation and fusion separately.
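To illustrate the kind of per-pixel constraint described above, the sketch below estimates a missing green value on a Bayer mosaic from its horizontal and vertical neighbors, weighting the two directional candidates by how well the local color gradient agrees with the co-registered NIR gradient. This is an illustrative scheme written for this summary under the stated idea of gradient-difference guidance, not the algorithm proposed in the paper; the function name and weighting rule are our own.

```python
import numpy as np

def green_with_nir_guidance(cfa, nir, r, c, eps=1e-6):
    """Estimate green at a non-green Bayer site (r, c) using NIR gradient agreement."""
    # Directional candidates from the four green neighbors.
    g_h = 0.5 * (cfa[r, c - 1] + cfa[r, c + 1])
    g_v = 0.5 * (cfa[r - 1, c] + cfa[r + 1, c])

    # Directional gradients in the mosaic and in the NIR image.
    d_h_cfa = abs(cfa[r, c - 1] - cfa[r, c + 1])
    d_v_cfa = abs(cfa[r - 1, c] - cfa[r + 1, c])
    d_h_nir = abs(nir[r, c - 1] - nir[r, c + 1])
    d_v_nir = abs(nir[r - 1, c] - nir[r + 1, c])

    # Favor the direction whose gradient is small and consistent with the NIR gradient.
    w_h = 1.0 / (d_h_cfa + abs(d_h_cfa - d_h_nir) + eps)
    w_v = 1.0 / (d_v_cfa + abs(d_v_cfa - d_v_nir) + eps)
    return (w_h * g_h + w_v * g_v) / (w_h + w_v)

rng = np.random.default_rng(4)
cfa = rng.random((16, 16))   # stand-in for a raw Bayer mosaic
nir = rng.random((16, 16))   # stand-in for the registered NIR frame
print(green_with_nir_guidance(cfa, nir, r=5, c=5))
```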
View-based 3-D object retrieval has become an emerging topic in recent years, especially with the fast development of visual content acquisition devices, such as mobile phones with cameras. Extensive research efforts have been dedicated to this task, yet it remains difficult to measure the relevance between two objects represented by multiple views. In recent years, learning-based methods, such as graph-based learning, have been investigated for view-based 3-D object retrieval. It is noted that graph-based methods suffer from the high computational cost of graph construction and the corresponding learning process. In this paper, we introduce a general framework to accelerate learning-based view-based 3-D object matching on large-scale data. Given a query object Q and one object O from a 3-D dataset D, the first step is to extract a small set of candidate 3-D objects relevant to object O. Multiple hypergraphs are then constructed from this small set of 3-D objects, and learning on the fused hypergraph is conducted to generate the relevance between Q and O, which can be further used in the retrieval procedure. Experiments demonstrate the effectiveness of the proposed framework.
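The "learning on the fused hypergraph" step is commonly formulated as transductive regularization on the hypergraph Laplacian, whose minimizer has the closed form f = (I + Δ/μ)^{-1} y for a query-indicator vector y. The sketch below shows that computation on two toy hypergraphs fused by concatenating their hyperedges; it follows the standard formulation rather than the exact construction used in the paper, and all sizes are arbitrary.

```python
import numpy as np

def hypergraph_theta(H, w):
    """Normalized hypergraph adjacency: Dv^-1/2 H W De^-1 H^T Dv^-1/2."""
    Dv_is = np.diag(1.0 / np.sqrt(H @ w))
    De_inv = np.diag(1.0 / H.sum(axis=0))
    return Dv_is @ H @ np.diag(w) @ De_inv @ H.T @ Dv_is

# Two toy hypergraphs over 6 candidate objects (e.g., built from different view features),
# fused by concatenating their hyperedges.
H1 = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 0]], dtype=float)
H2 = np.array([[1, 0], [0, 1], [0, 1], [1, 0], [0, 1], [1, 0]], dtype=float)
H = np.hstack([H1, H2])
w = np.ones(H.shape[1])

Theta = hypergraph_theta(H, w)
Delta = np.eye(H.shape[0]) - Theta        # hypergraph Laplacian

y = np.array([1.0, 0, 0, 0, 0, 0])        # query indicator on the candidate set
mu = 1.0                                  # trade-off between smoothness and fitting
f = np.linalg.solve(np.eye(H.shape[0]) + Delta / mu, y)
print(f)                                  # relevance scores for the candidates
```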
Coded exposure photography is a promising computational imaging technique capable of addressing motion blur much better than a conventional camera can, by tailoring invertible blur kernels. However, existing methods suffer from restrictive assumptions, complicated preprocessing, and inferior performance. To address these issues, we propose an end-to-end framework that handles general motion blur with a unified deep neural network and optimizes the shutter's encoding pattern together with the deblurring processing to achieve high-quality sharp images. The framework incorporates a learnable flutter shutter sequence to capture coded exposure snapshots and a learning-based deblurring network to restore sharp images from the blurry inputs. By co-optimizing the encoding and deblurring modules jointly, our approach avoids exhaustively searching for encoding sequences and achieves optimal overall deblurring performance. Compared with existing coded-exposure-based motion deblurring methods, the proposed framework eliminates tedious preprocessing steps such as foreground segmentation and blur kernel estimation, and it extends coded exposure deblurring to more general blind and nonuniform cases. Both simulation and real-data experiments demonstrate the superior performance and flexibility of the proposed method.
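The encoding half of such a framework reduces to a differentiable forward model: a coded-exposure snapshot is the temporal sum of the scene frames gated by the shutter code, so the code can be optimized jointly with a deblurring network. The PyTorch fragment below sketches that forward model with a relaxed (sigmoid) code; the sequence length, the binarization strategy, and any downstream network are placeholders, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LearnableFlutterShutter(nn.Module):
    """Simulate a coded-exposure snapshot with a trainable open/close sequence."""

    def __init__(self, n_slices=32):
        super().__init__()
        # Logits relaxed with a sigmoid; a real shutter would use a binarized code.
        self.code_logits = nn.Parameter(torch.zeros(n_slices))

    def forward(self, frames):
        # frames: (batch, n_slices, H, W) short-exposure slices of the moving scene.
        code = torch.sigmoid(self.code_logits).view(1, -1, 1, 1)
        snapshot = (frames * code).sum(dim=1) / (code.sum() + 1e-8)
        return snapshot, code.squeeze()

shutter = LearnableFlutterShutter(n_slices=32)
frames = torch.rand(2, 32, 64, 64)               # toy high-speed frames
snapshot, code = shutter(frames)
print(snapshot.shape)                            # (2, 64, 64) coded blurry measurements
# In a full pipeline, `snapshot` would feed a deblurring network and gradients
# would flow back into `code_logits`, co-optimizing the code and the restoration.
```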
High-resolution images are widely used in our everyday life; however, high-speed video capture is more challenging because of the low frame rate of cameras working in high-resolution mode. The main bottleneck lies in the low throughput of existing imaging systems. Toward this end, snapshot compressive imaging (SCI) was proposed as a promising solution for improving the throughput of imaging systems via compressive sampling and computational reconstruction. During acquisition, multiple high-speed images are encoded and collapsed into a single measurement; algorithms are then employed to retrieve the video frames from the coded snapshot. Recently developed plug-and-play (PnP) algorithms have made SCI reconstruction possible for large-scale problems. However, the lack of high-resolution encoding systems still precludes the wide application of SCI. In this paper, we therefore build, to the best of our knowledge, a novel hybrid coded aperture snapshot compressive imaging (HCA-SCI) system by incorporating a dynamic liquid crystal on silicon (LCoS) device and a high-resolution lithography mask. We further implement a PnP reconstruction algorithm with cascaded denoisers for high-quality reconstruction. Based on the proposed HCA-SCI system and algorithm, we obtain a 10-megapixel SCI system for capturing high-speed scenes, leading to a high throughput of 4.6×10^(9) voxels per second. Both simulation and real-data experiments verify the feasibility and performance of the proposed HCA-SCI scheme.
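To make the measurement and reconstruction concrete: in video SCI, each high-speed frame x_t is multiplied by its own mask M_t and the products are summed into a single snapshot y = Σ_t M_t ⊙ x_t; a plug-and-play solver then alternates a data-consistency step with an off-the-shelf denoiser. The sketch below runs a few iterations of a common projection-style data update with a Gaussian filter standing in for the cascaded deep denoisers of the paper; masks, video, and iteration counts are synthetic choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
T, H, W = 8, 64, 64
masks = rng.integers(0, 2, (T, H, W)).astype(float)    # binary coding masks M_t
video = rng.random((T, H, W))                           # toy high-speed frames x_t

y = (masks * video).sum(axis=0)                         # single coded snapshot
mask_sum = (masks ** 2).sum(axis=0) + 1e-8              # normalization for the projection

x = masks * y[None] / mask_sum[None]                    # simple initialization
for _ in range(20):
    # Data-consistency step: redistribute the measurement residual through the masks.
    residual = y - (masks * x).sum(axis=0)
    x = x + masks * (residual / mask_sum)[None]
    # Plug-and-play prior step: any denoiser can go here (deep denoisers in the paper).
    x = np.stack([gaussian_filter(frame, sigma=0.8) for frame in x])

print(float(np.mean((x - video) ** 2)))                 # reconstruction error on the toy data
```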
There is increasing interest in understanding how the three-dimensional (3D) organization of the genome is regulated. Different strategies have been employed to identify genome-wide chromatin interactions. However, due to current limitations in resolving genomic contacts, the visualization and validation of these genomic loci at sub-kilobase resolution remain unsolved to date. Here, we describe Tn5 transposase-based fluorescence in situ hybridization (Tn5-FISH), a PCR-based, cost-effective imaging method that can co-localize genomic loci at sub-kilobase resolution, dissect genome architecture, and verify chromatin interactions detected by chromosome conformation capture (3C)-derived methods. To validate this method, short-range interactions in the keratin-encoding gene (KRT) locus within a topologically associated domain (TAD) were imaged by triple-color Tn5-FISH, indicating that Tn5-FISH is very useful for verifying short-range chromatin interactions inside the contact domain and TAD. Therefore, Tn5-FISH can be a powerful molecular tool for the clinical detection of cytogenetic changes in numerous genetic diseases such as cancers.
Application-layer multicast routing is a multiobjective optimization problem. Three routing constraints, the tree's cost, the tree's balance, and the network-layer load distribution, are analyzed in this paper. Three fitness functions are used to evaluate a multicast tree on these three indexes respectively, and one general fitness function is generated from them. A novel approach based on genetic algorithms is proposed. Numerical simulations show that, compared with geometrical routing rules, the proposed algorithm improves all three indexes, especially the cost and network-layer load distribution indexes.
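A compact way to see how the three objectives can drive a genetic algorithm is to normalize each per-tree metric and fold them into one weighted general fitness that guides selection. In the sketch below, each candidate multicast tree is summarized only by a (cost, balance, load-spread) triple produced by a placeholder evaluator; the weights, the metric ranges, and the roulette-wheel selection are illustrative choices, and the paper's tree encoding, crossover, and mutation operators are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

def general_fitness(cost, balance, load_spread, weights=(0.4, 0.3, 0.3)):
    """Fold the three per-tree indexes into one fitness (all treated as 'lower is better')."""
    metrics = np.stack([cost, balance, load_spread], axis=1)
    # Min-max normalize each index across the population, then invert so higher is fitter.
    norm = (metrics - metrics.min(axis=0)) / (np.ptp(metrics, axis=0) + 1e-12)
    return (np.asarray(weights) * (1.0 - norm)).sum(axis=1)

# Placeholder evaluation of a population of 50 candidate multicast trees.
# In a full GA these metrics would be computed from each tree's actual structure.
pop_size = 50
cost = rng.uniform(10, 100, pop_size)          # total cost of the overlay tree
balance = rng.uniform(1, 10, pop_size)         # e.g., spread of node out-degrees
load_spread = rng.uniform(0, 5, pop_size)      # unevenness of network-layer link load

fitness = general_fitness(cost, balance, load_spread)

# Roulette-wheel selection of parents for the next generation
# (crossover and mutation of the tree encodings are not shown).
probs = fitness / fitness.sum()
parents = rng.choice(pop_size, size=pop_size, p=probs)
print(fitness.max(), parents[:10])
```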
The metaverse has been attracting considerable attention recently. It aims to build a virtual environment in which people can interact with the world and cooperate with each other. In this survey paper, we re-introduce the metaverse in a new framework based on a broad range of technologies: perception, which enables us to precisely capture the characteristics of the real world; computation, which supports the large computation requirements over large-scale data; reconstruction, which builds the virtual world from the real one; cooperation, which facilitates long-distance communication and teamwork between users; and interaction, which bridges users and the virtual world. Despite the metaverse's popularity, the fundamental techniques in this framework are still immature, and innovating new techniques to facilitate its applications is necessary. In recent years, artificial intelligence (AI), especially deep learning, has shown promising results in empowering various areas, from science to industry, and it is natural to ask how AI can be combined with this framework to promote the development of the metaverse. In this survey, we present recent achievements of AI for the metaverse within the proposed framework, covering perception, computation, reconstruction, cooperation, and interaction. We also discuss future directions in which AI can contribute to the metaverse.