The widespread availability of digital multimedia data has led to a new challenge in digital forensics. Traditional source camera identification algorithms usually rely on various traces left during the capturing process. However, these traces have become increasingly difficult to extract due to the wide availability of image processing algorithms. Convolutional Neural Network (CNN)-based algorithms have demonstrated good discriminative capabilities for different brands and even different models of camera devices. However, their performance is not ideal when distinguishing between individual devices of the same model, because cameras of the same model typically use the same optical lens, image sensor, and image processing algorithms, resulting in minimal overall differences. In this paper, we propose a camera forensics algorithm based on multi-scale feature fusion to address these issues. The proposed algorithm extracts different local features from feature maps of different scales and then fuses them to obtain a comprehensive feature representation. This representation is then fed into a subsequent camera fingerprint classification network. Building upon the Swin-T network, we utilize Transformer Blocks and Graph Convolutional Network (GCN) modules to fuse multi-scale features from different stages of the backbone network. Furthermore, we conduct experiments on established datasets to demonstrate the feasibility and effectiveness of the proposed approach.
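The abstract does not give implementation details, so the following is only a minimal PyTorch-style sketch of the kind of multi-scale fusion it describes: feature maps from several backbone stages are pooled, projected to a common width, and concatenated before device classification. The module name, channel sizes, and the use of plain linear projections in place of the paper's Transformer/GCN fusion blocks are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MultiScaleFusionHead(nn.Module):
    """Illustrative fusion of backbone feature maps from several stages.

    Each stage's map is pooled to a vector and projected to a common
    dimension; the per-stage vectors are concatenated and classified.
    (A stand-in for the Transformer/GCN fusion modules in the paper.)
    """

    def __init__(self, stage_channels=(96, 192, 384, 768), dim=256, num_devices=10):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                    # collapse each map to (B, C, 1, 1)
        self.proj = nn.ModuleList(nn.Linear(c, dim) for c in stage_channels)
        self.classifier = nn.Linear(dim * len(stage_channels), num_devices)

    def forward(self, feature_maps):
        fused = []
        for f, proj in zip(feature_maps, self.proj):
            v = self.pool(f).flatten(1)                        # (B, C)
            fused.append(proj(v))                              # (B, dim)
        return self.classifier(torch.cat(fused, dim=1))        # (B, num_devices)

# Example with dummy Swin-T-like stage outputs
maps = [torch.randn(2, c, s, s) for c, s in zip((96, 192, 384, 768), (56, 28, 14, 7))]
logits = MultiScaleFusionHead()(maps)
print(logits.shape)  # torch.Size([2, 10])
```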
An ultrafast framing camera with a pulse-dilation device, a microchannel plate (MCP) imager, and an electronic imaging system is reported. The camera achieved a temporal resolution of 10 ps by using the pulse-dilation device and gated MCP imager, and a spatial resolution of 100 μm by using an electronic imaging system comprising combined magnetic lenses. The spatial resolution characteristics of the camera were studied both theoretically and experimentally. The results showed that the camera with combined magnetic lenses reduced the field curvature and acquired a larger working area. A working area with a diameter of 53 mm was obtained by applying four magnetic lenses to the camera. Furthermore, the camera was used to detect the X-rays produced by a laser-targeting device. The diagnostic results indicated that the width of the X-ray pulse was approximately 18 ps.
This paper introduces an intelligent computational approach for extracting salient objects from images and estimating their distance information with PTZ (Pan-Tilt-Zoom) cameras. PTZ cameras have found wide application in numerous public places, serving purposes such as public security management, natural disaster monitoring, and crisis alarms, particularly with the rapid development of Artificial Intelligence and global infrastructure projects. In this paper, we combine Gaussian optical principles with the PTZ camera's capabilities of horizontal and pitch rotation, as well as optical zoom, to estimate the distance of the object. We present a novel monocular object distance estimation model based on the Focal Length-Target Pixel Size (FLTPS) relationship, achieving an accuracy rate of over 95% for objects within a 5 km range. The salient object extraction is achieved through a simplified convolution kernel and the utilization of the object's RGB features, which offer significantly faster computing speeds than Convolutional Neural Networks (CNNs). Additionally, we introduce a dark channel prior fog removal algorithm, resulting in a 20 dB increase in image definition, which significantly benefits distance estimation. Our system offers the advantages of stability and low device load, making it an asset for public security affairs and providing a reference point for future developments in surveillance hardware.
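The FLTPS relationship is not spelled out in the abstract; under the standard thin-lens/pinhole approximation it reduces to distance ≈ focal length × real target size / target size on the sensor. The sketch below illustrates only that textbook relation, not the paper's exact model; the function name and the numbers in the example are hypothetical.

```python
def estimate_distance_m(focal_length_mm, target_height_m, target_pixels, pixel_pitch_um):
    """Pinhole-model distance estimate (illustrative, not the paper's exact FLTPS model).

    distance = f * H_real / h_sensor, where h_sensor is the target's
    image height on the sensor (pixel count * pixel pitch).
    """
    h_sensor_mm = target_pixels * pixel_pitch_um * 1e-3    # image height on sensor in mm
    distance_mm = focal_length_mm * (target_height_m * 1e3) / h_sensor_mm
    return distance_mm * 1e-3                               # back to metres

# A 1.7 m pedestrian spanning 50 px on a 2.9 um-pitch sensor at 120 mm zoom
print(round(estimate_distance_m(120, 1.7, 50, 2.9), 1))     # ~1406.9 m
```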
This paper aims to develop an automatic miscalibration detection and correction framework to maintain accurate calibration of LiDAR and camera for autonomous vehicles after sensor drift. First, a monitoring algorithm that can continuously detect miscalibration in each frame is designed, leveraging the rotational motion observed by each individual sensor. Then, as sensor drift occurs, the projection constraints between visual feature points and LiDAR 3-D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is extensively compared with two representative approaches in online experiments with varying levels of random drift, and is then further extended to an offline calibration experiment and demonstrated by comparison with two existing benchmark methods.
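The abstract only says the monitor leverages the rotational motion each sensor observes; one plausible form of such a check is the hand-eye rotation-consistency residual sketched below, where a fixed extrinsic rotation should map per-frame LiDAR rotations onto per-frame camera rotations. The threshold, the function, and the example values are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def rotation_residual_deg(R_cam, R_lidar, R_extrinsic):
    """Angular inconsistency between per-frame camera and LiDAR rotations.

    With a correct extrinsic rotation R_extrinsic (LiDAR -> camera), the
    hand-eye constraint R_cam @ R_extrinsic == R_extrinsic @ R_lidar holds,
    so the residual rotation below should stay near the identity.
    """
    residual = R_cam @ R_extrinsic @ R_lidar.T @ R_extrinsic.T
    return np.degrees(np.arccos(np.clip((np.trace(residual) - 1) / 2, -1.0, 1.0)))

# Hypothetical check: flag miscalibration when the residual exceeds 1 degree
R_ext = R.from_euler("z", 90, degrees=True).as_matrix()   # assumed extrinsic rotation
R_lid = R.from_euler("y", 5, degrees=True).as_matrix()    # LiDAR ego-rotation this frame
R_cam = R_ext @ R_lid @ R_ext.T                           # what the camera should then observe
print(rotation_residual_deg(R_cam, R_lid, R_ext) < 1.0)   # True: calibration consistent
```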
Real-time indoor camera localization is a significant problem in indoor robot navigation and surveillance systems. The scene can change during an image sequence, and this plays a vital role in the localization performance of robotic applications in terms of accuracy and speed. This research proposes a real-time indoor camera localization system based on a recurrent neural network that detects scene changes during the image sequence. The proposed system is trained on an annotated image dataset and predicts the camera pose in real time. The system mainly improves the localization performance of indoor cameras by predicting the camera pose more accurately. It also recognizes scene changes during the sequence and evaluates the effects of these changes. The system achieves high accuracy and real-time performance. Scene change detection is performed using visual rhythm together with the proposed recurrent deep architecture, which performs camera pose prediction and scene change impact evaluation. Overall, this study proposes a novel real-time localization system for indoor cameras that detects scene changes and shows how they affect localization performance.
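The abstract does not describe the network itself; the sketch below shows one generic form of a recurrent pose regressor, mapping a sequence of per-frame feature vectors to a 6-DoF pose (translation plus unit quaternion). The layer sizes and the choice of an LSTM are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecurrentPoseRegressor(nn.Module):
    """Generic recurrent pose regressor (illustrative, not the paper's architecture)."""

    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 7)                 # 3 translation + 4 quaternion components

    def forward(self, seq):                            # seq: (B, T, feat_dim)
        out, _ = self.lstm(seq)
        pose = self.fc(out[:, -1])                     # pose from the last time step
        t, q = pose[:, :3], pose[:, 3:]
        return t, q / q.norm(dim=1, keepdim=True)      # normalize the quaternion

t, q = RecurrentPoseRegressor()(torch.randn(2, 10, 512))
print(t.shape, q.shape)                                # torch.Size([2, 3]) torch.Size([2, 4])
```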
Compton camera-based prompt gamma (PG) imaging has been proposed for range verification during proton therapy. However, the deviation between the PG and dose distributions, as well as the difference between the reconstructed PG and exact values, limits the effectiveness of the approach for accurate range monitoring in clinical applications. The aim of this study was to realize PG-based dose reconstruction with a Compton camera, thereby further improving the prediction accuracy of in vivo range verification and providing a novel method for beam monitoring during proton therapy. In this paper, we present an approach based on a subset-driven origin ensemble with resolution recovery and a double evolutionary algorithm to reconstruct the dose depth profile (DDP) from the gamma events obtained by a cadmium-zinc-telluride Compton camera with limited position and energy resolution. Simulations of proton pencil beams with a clinical particle rate irradiating phantoms made of different materials and a CT-based thoracic phantom were used to evaluate the feasibility of the proposed method. The results show that for the monoenergetic proton pencil beam irradiating a homogeneous-material box phantom, the accuracy of the reconstructed DDP was within 0.3 mm for range prediction and within 5.2% for dose prediction. In particular, for 1.6-Gy irradiation in the therapy simulation of thoracic tumors, the range deviation of the reconstructed spread-out Bragg peak was within 0.8 mm, and the relative dose deviation in the peak area was less than 7% compared with the exact values. These results demonstrate the potential and feasibility of the proposed method for future Compton-based accurate dose reconstruction and range verification during proton therapy.
A novel and fast three-dimensional reconstruction method for a Compton camera and its performance in radionuclide imaging are proposed and analyzed in this study. The conical surface sampling back-projection method with scattering angle correction (CSS-BP-SC) can quickly perform the back-projection of the Compton cone and can be used to precompute the list-mode maximum likelihood expectation maximization (LM-MLEM). A dedicated parallel architecture was designed for graphics processing unit acceleration of the back-projection and iteration stages of the CSS-BP-SC-based LM-MLEM. The imaging results of the two-point-source Monte Carlo (MC) simulation demonstrate that, by analyzing the full width at half maximum along the three coordinate axes, the CSS-BP-SC-based LM-MLEM obtains imaging results comparable to those of the traditional reconstruction algorithm, that is, the simple back-projection-based LM-MLEM. The imaging results of the mouse phantom MC simulation and experiment demonstrate that the reconstruction results obtained by the proposed method closely match the set radioactivity distribution, and the speed increased by a factor of more than 664 compared with the traditional reconstruction algorithm in the mouse phantom experiment. The proposed method will further advance the imaging applications of Compton cameras.
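For context, the list-mode MLEM iteration that the CSS-BP-SC back-projection feeds has the standard form below, with the system elements obtained by back-projecting each Compton cone; the notation is generic rather than taken from the paper.

```latex
\lambda_{j}^{(k+1)} \;=\; \frac{\lambda_{j}^{(k)}}{s_{j}}
  \sum_{i=1}^{N_{\text{events}}} \frac{t_{ij}}{\sum_{j'} t_{ij'}\,\lambda_{j'}^{(k)}}
```

Here λ_j is the activity estimate in voxel j, t_ij the probability that event i originated in voxel j, and s_j the sensitivity of voxel j.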
Lanthanum bromide (LaBr₃) crystal has a high energy resolution and time resolution and has been used in Compton cameras (CCs) over the past few decades. However, LaBr₃ crystal arrays are difficult to process because LaBr₃ is easy to crack and break; thus, few LaBr₃-based CC prototypes have been built. In this study, we designed and fabricated a large-pixel LaBr₃ CC prototype and evaluated its performance with regard to position, energy, and angular resolution. We used two 10×10 LaBr₃ crystal arrays with a pixel size of 5 mm×5 mm, silicon photomultipliers (SiPMs), and corresponding decoding circuits to construct our prototype. Additionally, a framework based on a Voronoi diagram and a lookup table was developed for list-mode projection data acquisition. Monte Carlo (MC) simulations based on Geant4 and experiments were conducted to evaluate the performance of our CC prototype. The lateral position resolution was 5 mm, and the maximum deviation in the depth direction was 2.5 and 5 mm for the scatterer and absorber, respectively. The corresponding measured energy resolutions were 7.65% and 8.44%, respectively, at 511 keV. The experimental results of ¹³⁷Cs point-like sources were consistent with the MC simulation results with regard to the spatial positions and full widths at half maximum (FWHMs). The angular resolution of the fabricated prototype was approximately 6° when a point-like ¹³⁷Cs source was centrally placed at a distance of 5 cm from the scatterer. We proposed and investigated a large-pixel LaBr₃ CC for the first time and verified its feasibility for use in accurate spatial positioning of radiative sources with a high angular resolution. The proposed CC can satisfy the requirements of radiative source imaging and positioning in the nuclear industry and medical applications.
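The Voronoi-diagram-plus-lookup-table framework is described only at a high level; a common way to realize this kind of crystal decoding is to assign each measured light centroid to the nearest flood-map centroid, which is equivalent to a Voronoi-cell lookup. The sketch below illustrates that idea with invented centroid positions and a nearest-neighbour query standing in for the paper's framework.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical flood-map centroids for a 10 x 10 crystal array: each crystal pixel
# has a characteristic (x, y) light centroid on the SiPM plane (values invented).
xs, ys = np.meshgrid(np.linspace(-22.5, 22.5, 10), np.linspace(-22.5, 22.5, 10))
centroids = np.column_stack([xs.ravel(), ys.ravel()])
tree = cKDTree(centroids)   # nearest-centroid query gives the same answer as a Voronoi-cell lookup

def decode_pixel(event_centroid_xy):
    """Map a measured light centroid to the index of the crystal pixel whose
    Voronoi cell contains it (i.e., the nearest flood-map centroid)."""
    _, idx = tree.query(event_centroid_xy)
    return int(idx)

print(decode_pixel([-22.0, -21.0]))   # 0 -> corner crystal of the array
```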
γ-rays are widely and abundantly present in strong nuclear radiation environments, and when they act on the camera equipment used to obtain environmental visual information for nuclear robots, radiation effects occur, which degrade the performance of the camera system, reduce imaging quality, and can even cause catastrophic consequences. Color reducibility is an important index for evaluating the imaging quality of a color camera, but its degradation mechanism in a nuclear radiation environment is still unclear. In this paper, γ-ray irradiation experiments on CMOS cameras were carried out to analyse the degradation law of the camera's color reducibility with cumulative irradiation and to reveal the degradation mechanism of the color information of the CMOS camera under γ-ray irradiation. The results show that the spectral response of the CMOS image sensor (CIS) and the spectral transmittance of the lens after irradiation affect the values of a* and b* in the LAB color model, while the full well capacity (FWC) of the CIS and the transmittance of the lens affect the value of L*; these changes increase the color difference and reduce brightness, and the combined effect of color difference and brightness degradation reduces the color reducibility of CMOS cameras. Therefore, the degradation of the color information of the CMOS camera after γ-ray irradiation mainly comes from changes in the FWC and spectral response of the CIS and in the spectral transmittance of the lens.
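The "color difference" referred to above is presumably the CIELAB color difference; for reference, the standard CIE76 form combines the L*, a*, and b* shifts as follows (standard notation, not taken from the paper):

```latex
\Delta E_{ab}^{*} \;=\; \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}}
```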
To address the eccentric error of circular marks in camera calibration, a circle location method based on the invariance of collinear points and the pole-polar constraint is proposed in this paper. Firstly, the centers of the ellipses are extracted, and the projection equation of the real concentric circle center is established by exploiting the cross-ratio invariance of collinear points. Subsequently, since the infinite lines passing through the centers of the marks are parallel, the remaining center projection coordinates are expressed as the solution of a system of linear equations. The projection deviation caused by using the center of the ellipse as the projection of the real circle center is thus addressed, and the results are used as the true image points to achieve high-precision camera calibration. As demonstrated by simulations and practical experiments, the proposed method achieves better location and calibration performance by recovering the actual center projection of the circular marks. The relevant results confirm the precision and robustness of the proposed approach.
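For reference, the cross ratio of four collinear points A, B, C, D that the method exploits is, in standard notation:

```latex
(A, B;\, C, D) \;=\; \frac{\overline{AC}/\overline{BC}}{\overline{AD}/\overline{BD}}
```

Because this quantity is invariant under the camera's projective mapping, measuring it among imaged collinear points allows the true projection of the concentric circle center to be recovered rather than approximated by the fitted ellipse center.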
Traffic incident management (TIM) is an FHWA Every Day Counts initiative with the objective of reducing secondary crashes, improving travel reliability, and ensuring the safety of responders. Agency roadside cameras play a critical role in TIM by helping dispatchers quickly identify the precise location of incidents when receiving reports from motorists with varying levels of spatial accuracy. Reconciling position reports, which are often mile-marker based, with cameras that operate in a Pan-Tilt-Zoom coordinate system relies on dispatchers having detailed knowledge of hundreds of cameras and perhaps some presets. During real-time incident dispatching, reducing the time it takes to identify the most relevant cameras and set their view on the incident is an important opportunity to improve incident management dispatch times. This research develops a camera-to-mile-marker mapping technique that automatically sets the camera view to a specified mile marker within the field of view of the camera. Over 350 traffic cameras along Indiana's 2250 directional miles of interstate were mapped to approximately 5000 discrete locations that correspond to approximately 780 directional miles (~35% of the interstate) of camera coverage. This newly developed technique will allow operators to quickly identify the nearest camera and set it to the reported location. This research also identifies segments of the interstate system with limited or no camera coverage so that decision makers can prioritize future capital investments. The paper concludes with a brief discussion of future research to automate the mapping using LiDAR data and to set the cameras after automatically detecting events using connected vehicle trajectory data.
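The abstract does not specify the data structures behind the mapping; the sketch below shows one simple way such a lookup could work, picking the nearest camera by mile marker and computing a crude pan setting toward the marker's geocode. The camera records, marker coordinates, field names, and the flat-earth bearing are all invented for illustration.

```python
import math

# Hypothetical mapping: each camera has a roadway position (mile marker) and a pan
# reference; real deployments would more likely store a PTZ preset per mile marker.
CAMERAS = [
    {"id": "CAM-101", "mile_marker": 112.4, "lat": 39.902, "lon": -86.051, "pan_zero_deg": 0.0},
    {"id": "CAM-102", "mile_marker": 115.1, "lat": 39.938, "lon": -86.048, "pan_zero_deg": 0.0},
]
MARKER_COORDS = {113.0: (39.910, -86.050), 114.0: (39.924, -86.049)}   # invented geocodes

def camera_for_marker(marker):
    """Pick the nearest camera by mile marker and a pan angle toward the marker."""
    cam = min(CAMERAS, key=lambda c: abs(c["mile_marker"] - marker))
    lat, lon = MARKER_COORDS[marker]
    # crude flat-earth bearing (ignores the cos(latitude) scaling of longitude)
    bearing = math.degrees(math.atan2(lon - cam["lon"], lat - cam["lat"])) % 360
    return cam["id"], round(bearing - cam["pan_zero_deg"], 1)

print(camera_for_marker(113.0))   # ('CAM-101', ...) nearest camera plus a pan setting
```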
The need for efficient and reproducible development processes for sensor and perception systems is growing with their increased use in modern vehicles. Such processes can be achieved by using virtual test environments and virtual sensor models. In this context, the present paper documents the development of a sensor model for depth estimation of virtual three-dimensional scenarios. For this purpose, the geometric and algorithmic principles of stereoscopic camera systems are recreated in virtual form. The model is implemented as a subroutine in the Epic Games Unreal Engine, one of the most widely used game engines. Its architecture consists of several independent procedures that enable local depth estimation as well as reconstruction of a whole three-dimensional scene. A separate programme for calibrating the model is also presented. Beyond the basic principles, the architecture, and the implementation, this work also documents the evaluation of the model. It is shown that the model meets specifically defined requirements for real-time capability and evaluation accuracy. Thus, it is suitable for the virtual testing of common algorithms and highly automated driving functions.
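The geometric principle such stereoscopic models recreate is the standard triangulation relation Z = f·B/d; the small sketch below shows that relation with hypothetical numbers, not the paper's implementation.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic stereo relation: Z = f * B / d.

    disparity_px     horizontal pixel offset of a point between left and right image
    focal_length_px  focal length expressed in pixels
    baseline_m       distance between the two camera centres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# e.g. a 24 px disparity with an 800 px focal length and a 0.3 m baseline
print(depth_from_disparity(24, 800, 0.3))   # 10.0 m
```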
Nowadays, smartphones are used as self-health monitoring devices for humans. Self-health monitoring devices provide clinicians with big data for accurate diagnosis and treatment guidance through repetitive measurement. Repetitive measurement of haemoglobin is required for pregnant women and for pediatric, pulmonary hypertension, and obstetric patients. Noninvasive haemoglobin measurement through the conjunctiva leads to inaccurate measurement. The inaccuracy is due to a decrease in the density of goblet cells and acinar units in the Meibomian glands of the human eye as age increases. Furthermore, conjunctivitis is an eye disease caused by inflammation or infection of the conjunctiva. Conjunctivitis appears as lines in the eyelid and covers the white part of the eyeball. Moreover, the small blood vessels in conjunctival regions affected by inflammation are not visible to the human eye or to a standard camera. To address these problems, this paper proposes smartphone-based haemoglobin (SBH) measurement through a borescope camera imaging the anterior ciliary arteries of the eye. The proposed SBH method acquires images of the anterior ciliary artery region of the eye through a smartphone fitted with a high-megapixel borescope camera. The anterior ciliary arteries are processed with the transverse dyadic wavelet transform (TDyWT), and delta segmentation is applied to obtain blood cells from the ciliary arteries of the eye. Furthermore, a Gaussian regression algorithm measures haemoglobin (Hb) with improved accuracy based on the person, the eye arteries, red-pixel statistical parameters obtained from the left and right eyes, age, and weight. The experimental results show that the proposed SBH method achieves an accuracy of 96% in haemoglobin measurement.
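The abstract's "Gaussian regression algorithm" may be Gaussian process regression or another Gaussian model; the sketch below assumes the former and uses scikit-learn with invented features (red-pixel statistics, age, weight) and invented reference values, purely to illustrate the regression step.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Invented training data: [mean red intensity left eye, mean red intensity right eye, age, weight]
X = np.array([[0.61, 0.63, 29, 58.0],
              [0.48, 0.50, 41, 72.5],
              [0.55, 0.54, 35, 64.0],
              [0.67, 0.66, 24, 55.5]])
y = np.array([13.8, 10.9, 12.4, 14.6])   # reference haemoglobin in g/dL (invented)

# Gaussian process regression standing in for the paper's "Gaussian regression algorithm"
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

hb_pred, hb_std = gpr.predict([[0.58, 0.57, 31, 60.0]], return_std=True)
print(f"Hb ~ {hb_pred[0]:.1f} g/dL (+/- {hb_std[0]:.1f})")
```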
As the COVID-19 epidemic spread across the globe, people around the world were advised or mandated to wear masks in public places to prevent it from spreading further. In some cases, not wearing a mask could result in a fine. To monitor mask wearing, and to help prevent the spread of future epidemics, this study proposes an image recognition system consisting of a camera, an infrared thermal array sensor, and a convolutional neural network trained for mask recognition. The infrared sensor monitors body temperature and displays the results in real time on a liquid crystal display screen. The proposed system reduces the inefficiency of traditional object detection by providing training data tailored to the specific needs of the user and by applying You Only Look Once Version 4 (YOLOv4) object detection technology, which experiments show has more efficient training parameters and higher accuracy in object recognition. All datasets are uploaded to the cloud for storage using Google Colaboratory, saving human resources and achieving a high level of efficiency at low cost.