As an emerging approach to space situational awareness and space imaging, the practical use of an event-based camera (EBC) in space imaging for precise source analysis is still in its infancy. The nature of event-based space imaging and data collection needs to be further explored to develop more effective event-based space imaging systems and advance the capabilities of event-based tracking systems with improved target measurement models. Moreover, for event measurements to be meaningful, a framework must be investigated for EBC calibration to project events from pixel array coordinates in the image plane to coordinates in a target resident space object's reference frame. In this paper, the traditional techniques of conventional astronomy are reconsidered to properly utilise the EBC for space imaging and space situational awareness. This paper presents the techniques and systems used for calibrating an EBC for reliable and accurate measurement acquisition. These techniques are vital in building event-based space imaging systems capable of real-world space situational awareness tasks. By calibrating sources detected using the EBC, the spatiotemporal characteristics of detected sources, or "event sources", can be related to the photometric characteristics of the underlying astrophysical objects. Finally, these characteristics are analysed to establish a foundation for principled processing and observing techniques which appropriately exploit the capabilities of the EBC.
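As a rough illustration of the calibration step this abstract describes, the sketch below accumulates asynchronous events into a count frame and back-projects a pixel to a viewing ray under an assumed pinhole model. The function names, intrinsics, and event format are illustrative, not from the paper; a real pipeline would also undistort and rotate rays into the target object's reference frame.

```python
import numpy as np

def accumulate_events(events, width, height, t_start, t_end):
    """Integrate asynchronous events (x, y, t, polarity) over a time
    window into a 2-D event-count frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, _pol in events:
        if t_start <= t < t_end:
            frame[y, x] += 1
    return frame

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project a pixel to a unit viewing ray (pinhole model only;
    distortion and the camera-to-target rotation are omitted here)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)
```

For example, an event source detected at the principal point maps onto the optical axis: `pixel_to_ray(320, 240, 500, 500, 320, 240)` returns the unit ray (0, 0, 1).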
The rise of the Internet and identity authentication systems has brought convenience to people's lives but has also introduced the potential risk of privacy leaks. Existing biometric authentication systems based on explicit and static features bear the risk of being attacked by mimicked data. This work proposes a highly efficient biometric authentication system based on transient eye blink signals that are precisely captured by a neuromorphic vision sensor with microsecond-level temporal resolution. The neuromorphic vision sensor only transmits the local pixel-level changes induced by eye blinks when they occur, which leads to advantageous characteristics such as an ultra-low latency response. We first propose a set of effective biometric features describing the motion, speed, energy, and frequency signal of eye blinks based on the microsecond temporal resolution of event densities. We then train an ensemble model and a non-ensemble model on our Neuro Biometric dataset for biometric authentication. The experiments show that our system is able to identify and verify subjects with the ensemble model at an accuracy of 0.948 and with the non-ensemble model at an accuracy of 0.925. The low false positive rate (about 0.002) and the highly dynamic features are not only hard to reproduce but also avoid recording visible characteristics of a user's appearance. The proposed system sheds light on a new path towards safer authentication using neuromorphic vision sensors.
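The event-density idea underlying those features can be sketched as follows: bin microsecond event timestamps into a density signal, then derive simple motion/energy descriptors from it. The bin width, threshold, and feature set below are invented for illustration and are not the paper's actual feature definitions.

```python
import numpy as np

def event_density(timestamps_us, bin_us=1000):
    """Event count per fixed-width time bin (timestamps in microseconds)."""
    t = np.asarray(timestamps_us, dtype=np.int64)
    t = t - t.min()
    n_bins = int(t.max() // bin_us) + 1
    density, _ = np.histogram(t, bins=n_bins, range=(0, n_bins * bin_us))
    return density

def blink_features(density, thresh):
    """Toy per-blink descriptors: peak density, active duration, energy."""
    active = density > thresh
    return {
        "peak": int(density.max()),
        "duration_bins": int(active.sum()),
        "energy": float(np.sum(density.astype(float) ** 2)),
    }
```

In practice such descriptors would be computed per detected blink and fed to the downstream (ensemble or non-ensemble) classifier.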
Estimating the global state of a networked system is an important problem in many application domains. The classical approach to tackling this problem is the periodic (observation) method, which is inefficient because it often observes states at a very high frequency. This inefficiency has motivated the idea of the event-based method, which leverages the evolution dynamics in question and makes observations only when some rules are triggered (i.e., only when certain conditions hold). This paper initiates the investigation of using the event-based method to estimate the equilibrium in the new application domain of cybersecurity, where equilibrium is an important metric that has no closed-form solution. More specifically, the paper presents an event-based method for estimating the cybersecurity equilibrium in preventive and reactive cyber defense dynamics, which have been proven globally convergent. The presented study proves that the estimated equilibrium from our trigger rule i) indeed converges to the equilibrium of the dynamics and ii) is Zeno-free, which assures the usefulness of the event-based method. Numerical examples show that the event-based method can reduce 98% of the observation cost incurred by the periodic method. In order to use the event-based method in practice, this paper investigates how to bridge the gap between i) the continuous state in the dynamics model, dubbed the probability-state because it measures the probability that a node is in the secure or compromised state, and ii) the discrete state that is often encountered in practice, dubbed the sample-state because it is sampled from some nodes. This bridge may be of independent value because probability-state models have been widely used to approximate exponentially many discrete-state systems.
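The trigger idea can be illustrated on a toy scalar dynamic that converges to a fixed point: record a new observation only when the state has moved more than a tolerance δ since the last recorded one. This is a generic schematic of event-triggered observation, not the paper's specific trigger rule for the cyber defense dynamics.

```python
def event_based_estimate(step, x0, delta, n_steps):
    """Event-triggered observation of a converging scalar dynamic:
    observe only when the state has drifted more than `delta`
    since the last observation; return (estimate, observation count)."""
    x, last_obs, n_obs = x0, x0, 1
    for _ in range(n_steps):
        x = step(x)                    # the dynamics evolve unobserved
        if abs(x - last_obs) > delta:  # trigger rule fires
            last_obs, n_obs = x, n_obs + 1
    return last_obs, n_obs
```

For the contraction x ↦ 0.5x + 0.25 (fixed point 0.5) started at 0 with δ = 0.1, 50 steps trigger only 4 observations, and the final estimate lies within δ of the true equilibrium, which conveys the flavor of the large observation savings the paper reports over periodic observation.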
An ultrafast framing camera with a pulse-dilation device, a microchannel plate (MCP) imager, and an electronic imaging system is reported. The camera achieved a temporal resolution of 10 ps by using the pulse-dilation device and gated MCP imager, and a spatial resolution of 100 μm by using an electronic imaging system comprising combined magnetic lenses. The spatial resolution characteristics of the camera were studied both theoretically and experimentally. The results showed that the camera with combined magnetic lenses reduced the field curvature and acquired a larger working area. A working area with a diameter of 53 mm was created by applying four magnetic lenses to the camera. Furthermore, the camera was used to detect the X-rays produced by a laser-targeting device. The diagnostic results indicated that the width of the X-ray pulse was approximately 18 ps.
The widespread availability of digital multimedia data has led to a new challenge in digital forensics. Traditional source camera identification algorithms usually rely on various traces left in the capturing process. However, these traces have become increasingly difficult to extract due to the wide availability of various image processing algorithms. Convolutional Neural Network (CNN)-based algorithms have demonstrated good discriminative capabilities for different brands and even different models of camera devices. However, their performance is not ideal when distinguishing between individual devices of the same model, because cameras of the same model typically use the same optical lens, image sensor, and image processing algorithms, resulting in minimal overall differences. In this paper, we propose a camera forensics algorithm based on multi-scale feature fusion to address these issues. The proposed algorithm extracts different local features from feature maps of different scales and then fuses them to obtain a comprehensive feature representation. This representation is then fed into a subsequent camera fingerprint classification network. Building upon the Swin-T network, we utilize Transformer Blocks and Graph Convolutional Network (GCN) modules to fuse multi-scale features from different stages of the backbone network. Furthermore, we conduct experiments on established datasets to demonstrate the feasibility and effectiveness of the proposed approach.
This paper introduces an intelligent computational approach for extracting salient objects from images and estimating their distance information with PTZ (Pan-Tilt-Zoom) cameras. PTZ cameras have found wide applications in numerous public places, serving various purposes such as public security management, natural disaster monitoring, and crisis alarms, particularly with the rapid development of Artificial Intelligence and global infrastructural projects. In this paper, we combine Gaussian optics principles with the PTZ camera's capabilities of horizontal and pitch rotation, as well as optical zoom, to estimate the distance of the object. We present a novel monocular object distance estimation model based on the Focal Length-Target Pixel Size (FLTPS) relationship, achieving an accuracy rate of over 95% for objects within a 5 km range. The salient object extraction is achieved through a simplified convolution kernel and the utilization of the object's RGB features, which offer significantly faster computing speeds compared to Convolutional Neural Networks (CNNs). Additionally, we introduce the dark channel prior fog removal algorithm, resulting in a 20 dB increase in image definition, which significantly benefits distance estimation. Our system offers the advantages of stability and low device load, making it an asset for public security affairs and providing a reference point for future developments in surveillance hardware.
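At its core, a focal length versus target pixel size relationship of this kind reduces to the pinhole similar-triangles relation D = f·H/h, where H is the target's real-world size and h its extent on the sensor. The sketch below is a minimal version under assumed sensor parameters; it is not the paper's full FLTPS model, which also accounts for pan/tilt geometry and zoom.

```python
def estimate_distance_m(focal_len_mm, pixel_pitch_um, target_size_m, target_pixels):
    """Pinhole similar-triangles estimate D = f * H / h, where h is the
    target's extent on the sensor (pixel count times pixel pitch)."""
    h_m = target_pixels * pixel_pitch_um * 1e-6  # image-side size in metres
    f_m = focal_len_mm * 1e-3                    # focal length in metres
    return f_m * target_size_m / h_m
```

For example, a 2 m target spanning 100 pixels on a 5 μm-pitch sensor behind a 50 mm lens gives D = 0.05 × 2 / (100 × 5e-6) = 200 m.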
This paper aims to develop an automatic miscalibration detection and correction framework to maintain accurate calibration of LiDAR and camera for autonomous vehicles after sensor drift. First, a monitoring algorithm that can continuously detect miscalibration in each frame is designed, leveraging the rotational motion each individual sensor observes. Then, as sensor drift occurs, the projection constraints between visual feature points and LiDAR 3-D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is thoroughly compared with two representative approaches in online experiments with varying levels of random drift; the method is then further extended to an offline calibration experiment and demonstrated by comparison with two existing benchmark methods.
The geometric accuracy of topographic mapping with high-resolution remote sensing images is inevitably affected by orbiter attitude jitter. Therefore, it is necessary to conduct preliminary research on the stereo mapping camera carried on a lunar orbiter before launch. In this work, an imaging simulation method considering attitude jitter is presented. The impact of different attitude jitter on terrain undulation is analyzed by simulating jitter at each of the three attitude angles. The proposed simulation method is based on the rigorous sensor model, using a lunar digital elevation model (DEM) and orthoimage as reference data. The orbit and attitude of the lunar stereo mapping camera are simulated while considering attitude jitter. Two-dimensional simulated stereo images are generated according to the position and attitude of the orbiter in a given orbit. Experimental analyses were conducted on the DEM with the simulated stereo images. The simulation results demonstrate that the proposed method ensures imaging efficiency without losing topographic mapping accuracy. The effect of attitude jitter on the stereo mapping accuracy of the simulated images was analyzed through a DEM comparison.
In visual measurement, high-precision camera calibration often employs circular targets. To address issues in mainstream methods, such as the eccentricity error introduced by using the circle's center for calibration, overfitting or local minima from full-parameter optimization, and calibration errors due to neglecting the center of distortion, a stepwise camera calibration method incorporating compensation for eccentricity error was proposed to enhance monocular camera calibration precision. Initially, a multi-image distortion correction method calculated the common center of distortion and the distortion coefficients, improving precision, stability, and efficiency compared to single-image distortion correction methods. Subsequently, the projection of the circle's center was compared with the center of the contour's projection to iteratively correct the eccentricity error, leading to more precise and stable calibration. Finally, nonlinear optimization refined the calibration parameters to minimize reprojection error and boost precision. These steps constitute a stepwise camera calibration procedure with enhanced robustness. In addition, a module comparison experiment showed that both the eccentricity error compensation and the camera parameter optimization improved calibration precision, with the latter having a greater impact; using the two together further improved precision and stability. Simulations and experiments confirmed that the proposed method achieves high precision, stability, and robustness, making it suitable for high-precision visual measurements.
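The final optimization stage minimizes reprojection error; as a reference point, here is a minimal sketch of computing reprojection RMSE for a pinhole camera. Distortion and the eccentricity compensation are deliberately omitted, and all matrices and values are illustrative.

```python
import numpy as np

def reproject(K, R, t, X):
    """Project Nx3 world points into pixels with intrinsics K and pose (R, t)."""
    Xc = (R @ X.T + t.reshape(3, 1)).T  # world -> camera frame
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]       # perspective division

def reprojection_rmse(K, R, t, X, observed):
    """Root-mean-square pixel error between projections and detections."""
    err = reproject(K, R, t, X) - observed
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```

In a calibration loop, a nonlinear optimizer adjusts K, the poses, and the distortion parameters to drive this RMSE down over all target views.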
Real-time indoor camera localization is a significant problem in indoor robot navigation and surveillance systems. The scene can change during the image sequence, and such changes play a vital role in the localization performance of robotic applications in terms of accuracy and speed. This research proposes a real-time indoor camera localization system based on a recurrent neural network that detects scene changes during the image sequence. The proposed system is trained on an annotated image dataset and predicts the camera pose in real time. The system improves the localization performance of indoor cameras mainly by predicting the camera pose more accurately. It also recognizes scene changes during the sequence and evaluates their effects. The system achieves high accuracy and real-time performance. Scene change detection is performed using visual rhythm together with the proposed recurrent deep architecture, which performs camera pose prediction and scene change impact evaluation. Overall, this study proposes a novel real-time localization system for indoor cameras that detects scene changes and shows how they affect localization performance.
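Visual rhythm, as commonly defined in video analysis, stacks one fixed row (or column) from every frame into a single time-by-width image, where a scene cut shows up as an abrupt horizontal edge. The sketch below is that generic construction with an illustrative threshold; it is not the paper's specific detector.

```python
import numpy as np

def visual_rhythm(frames, row):
    """Stack one fixed row from every frame into a (time x width) image."""
    return np.stack([f[row] for f in frames])

def scene_changes(rhythm, thresh):
    """Indices of frames whose sampled row differs from the previous
    frame's by more than `thresh` (mean absolute difference)."""
    diffs = np.mean(np.abs(np.diff(rhythm.astype(float), axis=0)), axis=1)
    return [i + 1 for i, d in enumerate(diffs) if d > thresh]
```

Because only one row per frame is inspected, this runs in real time even on modest hardware, which is presumably why visual rhythm suits an online localization pipeline.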
Funding (eye-blink biometric authentication): supported by the National Natural Science Foundation of China (61906138), the National Science and Technology Major Project of the Ministry of Science and Technology of China (2018AAA0102900), the Shanghai Automotive Industry Sci-Tech Development Program (1838), the European Union's Horizon 2020 Research and Innovation Program (785907), and the Shanghai AI Innovation Development Program 2018.
Funding (cybersecurity equilibrium estimation): supported in part by the National Natural Science Foundation of China (62072111).
Funding (ultrafast framing camera): National Natural Science Foundation of China (NSFC) (No. 11775147), Guangdong Basic and Applied Basic Research Foundation (Nos. 2019A1515110130 and 2024A1515011832), Shenzhen Key Laboratory of Photonics and Biophotonics (ZDSYS20210623092006020), and Shenzhen Science and Technology Program (Nos. JCYJ20210324095007020, JCYJ20200109105201936 and JCYJ20230808105019039).
Funding (source camera identification): this work was funded by the National Natural Science Foundation of China (Grant No. 62172132), the Public Welfare Technology Research Project of Zhejiang Province (Grant No. LGF21F020014), and the Opening Project of the Key Laboratory of Public Security Information Application Based on Big-Data Architecture, Ministry of Public Security, Zhejiang Police College (Grant No. 2021DSJSYS002).
Funding (PTZ object distance estimation): the Social Development Project of the Jiangsu Key R&D Program (BE2022680) and the National Natural Science Foundation of China (Nos. 62371253, 52278119).
Funding (LiDAR-camera miscalibration detection): supported by the National Natural Science Foundation of China (Grant Nos. 52025121, 52394263) and the National Key R&D Plan of China (Grant No. 2023YFD2000301).
Funding (lunar stereo mapping simulation): supported by the National Natural Science Foundation of China (42221002, 42171432), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100), and the Fundamental Research Funds for the Central Universities.