Abstract: Object classification is an important aspect of machine intelligence. Current practices in object classification entail the digitization of object information followed by the application of digital algorithms such as deep neural networks. The execution of digital neural networks is power-consuming, and their throughput is limited. The existing von Neumann digital computing paradigm is also less suited to the implementation of highly parallel neural network architectures.^(1)
Funding: The Ozcan Research Group at UCLA acknowledges the support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
Abstract: Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is ≥~2N_(i)N_(o), where N_(i) and N_(o) refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function (H) of a diffractive network, corresponding to a given, arbitrarily selected linear intensity transformation, can be written as H(m,n;m′,n′) = |h(m,n;m′,n′)|^(2), where h is the spatially coherent point spread function of the same diffractive network, and (m,n) and (m′,n′) define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N ≥~2N_(i)N_(o). We also report the design of spatially incoherent diffractive networks for the linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
Funding: Support of the U.S. Department of Energy (DOE), Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0023088.
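The relation H(m,n;m′,n′) = |h(m,n;m′,n′)|^(2) in the abstract above reflects the fact that, under spatially incoherent illumination, input pixels add in intensity rather than in field, so the time-averaged output intensity is a nonnegative linear transformation of the input intensity. The NumPy sketch below illustrates this forward model with flattened pixel coordinates and a coherent point spread function h chosen at random purely for illustration; it is not the authors' training code, and the variable names (N_i, N_o, H, I_in) are introduced here only to mirror the abstract's notation.

```python
import numpy as np

# Minimal sketch (not the authors' code): relating the coherent point spread
# function h of a diffractive network to its spatially incoherent behavior.
# Pixels are flattened, so h is an (N_o x N_i) complex matrix whose entry
# h[k, j] maps input pixel j to output pixel k under coherent illumination.

rng = np.random.default_rng(0)
N_i, N_o = 64, 64                      # useful pixels at the input / output FOVs
h = rng.normal(size=(N_o, N_i)) + 1j * rng.normal(size=(N_o, N_i))

# Under spatially incoherent light, input pixels add in intensity, so the
# time-averaged intensity transformation is governed by H = |h|^2.
H = np.abs(h) ** 2                     # nonnegative intensity "PSF" matrix

I_in = rng.uniform(size=N_i)           # an arbitrary input intensity pattern
I_out = H @ I_in                       # time-averaged output intensity

# Rule of thumb quoted in the abstract: approximating an arbitrary linear
# intensity transformation needs roughly N >= ~2 * N_i * N_o phase-only features.
print(f"diffractive features needed (approx.): {2 * N_i * N_o}")
```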
Abstract: As an optical processor, a diffractive deep neural network (D^(2)NN) utilizes engineered diffractive surfaces designed through machine learning to perform all-optical information processing, completing its tasks at the speed of light propagation through thin optical layers. With sufficient degrees of freedom, D^(2)NNs can perform arbitrary complex-valued linear transformations using spatially coherent light. Similarly, D^(2)NNs can also perform arbitrary linear intensity transformations with spatially incoherent illumination; however, under spatially incoherent light, these transformations are nonnegative, acting on diffraction-limited optical intensity patterns at the input field of view. Here, we expand the use of spatially incoherent D^(2)NNs to complex-valued information processing for executing arbitrary complex-valued linear transformations using spatially incoherent light. Through simulations, we show that as the number of optimized diffractive features increases beyond a threshold dictated by the multiplication of the input and output space-bandwidth products, a spatially incoherent diffractive visual processor can approximate any complex-valued linear transformation and be used for all-optical image encryption using incoherent illumination. These findings are important for the all-optical processing of information under natural light using various forms of diffractive surface-based optical processors.
Funding: The Ozcan Research Group at UCLA acknowledges the support of Fujikura (Japan).
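Because a spatially incoherent D^(2)NN can only realize nonnegative linear transformations of intensity, executing a complex-valued transformation requires encoding complex numbers into nonnegative quantities at the input and decoding them at the output. The sketch below shows one standard lifting of this kind, splitting every complex value into four nonnegative components; it is offered only as an illustration of the principle under that assumption and is not necessarily the encoding scheme used by the authors.

```python
import numpy as np

# Hedged illustration (not necessarily the paper's encoding): a complex-valued
# linear transform can be embedded in a purely nonnegative (intensity-like)
# transform by splitting every complex number into four nonnegative components
# [Re+, Re-, Im+, Im-], so that value = (Re+ - Re-) + 1j*(Im+ - Im-).

def split(z):
    """Decompose a complex array into its stacked nonnegative components."""
    re, im = z.real, z.imag
    return np.concatenate([np.maximum(re, 0), np.maximum(-re, 0),
                           np.maximum(im, 0), np.maximum(-im, 0)])

def lift(A):
    """Nonnegative block matrix acting on the split representation."""
    p = lambda x: np.maximum(x, 0)
    ar_p, ar_m = p(A.real), p(-A.real)
    ai_p, ai_m = p(A.imag), p(-A.imag)
    return np.block([[ar_p, ar_m, ai_m, ai_p],
                     [ar_m, ar_p, ai_p, ai_m],
                     [ai_p, ai_m, ar_p, ar_m],
                     [ai_m, ai_p, ar_m, ar_p]])

rng = np.random.default_rng(1)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # target complex transform
x = rng.normal(size=n) + 1j * rng.normal(size=n)            # complex input

u = lift(A) @ split(x)                                      # nonnegative matrix, nonnegative input
y = (u[:n] - u[n:2*n]) + 1j * (u[2*n:3*n] - u[3*n:])        # decode the complex output

assert np.allclose(y, A @ x)   # matches the direct complex-valued product
```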
Abstract: A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency and computation speed. Diffractive deep neural networks (D^(2)NNs) form such an optical computing framework that benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D^(2)NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D^(2)NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D^(2)NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N=14 and N=30 D^(2)NNs achieve blind testing accuracies of 61.14±0.23% and 62.13±0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D^(2)NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems.
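The abstract above describes selecting a small ensemble out of 1252 independently trained D^(2)NNs with "a pruning algorithm", without specifying it. The sketch below shows greedy forward selection, one common way to prune an ensemble by repeatedly adding the model that most improves the averaged-score validation accuracy; the array shapes, variable names and random stand-in data are hypothetical and only illustrate the selection loop, not the authors' exact procedure.

```python
import numpy as np

# Hedged sketch of ensemble pruning by greedy forward selection. `probs` holds
# each trained model's class-score predictions on a validation set; the ensemble
# averages the scores of the selected members before taking the argmax.

def greedy_prune(probs, labels, ensemble_size):
    """probs: (num_models, num_samples, num_classes); labels: (num_samples,)."""
    selected, running_sum = [], np.zeros_like(probs[0])
    for _ in range(ensemble_size):
        best_model, best_acc = None, -1.0
        for m in range(len(probs)):
            if m in selected:
                continue
            # Accuracy of the current ensemble with candidate model m added.
            acc = np.mean((running_sum + probs[m]).argmax(axis=1) == labels)
            if acc > best_acc:
                best_model, best_acc = m, acc
        selected.append(best_model)
        running_sum += probs[best_model]
    return selected, best_acc

# Example with random stand-ins for 1252 trained models' validation outputs:
rng = np.random.default_rng(2)
probs = rng.uniform(size=(1252, 1000, 10))   # hypothetical validation class scores
labels = rng.integers(0, 10, size=1000)      # hypothetical ground-truth labels
members, val_acc = greedy_prune(probs, labels, ensemble_size=14)
```

In practice, the selected members would then be evaluated once more on the held-out blind test set to report the final ensemble accuracy.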