Funding: Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA).
Abstract: Several common dual quaternion functions, such as the power function, the magnitude function, the 2-norm function, and the kth largest eigenvalue of a dual quaternion Hermitian matrix, are standard dual quaternion functions, i.e., the standard parts of their function values depend only upon the standard parts of their dual quaternion variables. Furthermore, the sum, product, minimum, maximum, and composite functions of two standard dual quaternion functions, as well as the logarithm and the exponential of standard unit dual quaternion functions, are still standard dual quaternion functions. On the other hand, the dual quaternion optimization problem, where the objective and constraint function values are dual numbers but the variables are dual quaternions, arises naturally from applications. We show that to solve an equality constrained dual quaternion optimization (EQDQO) problem, we only need to solve two quaternion optimization problems. If the involved dual quaternion functions are all standard, the optimization problem is called a standard dual quaternion optimization problem, and stronger results hold. We then show that the dual quaternion optimization problems arising from the hand-eye calibration problem and the simultaneous localization and mapping (SLAM) problem are equality constrained standard dual quaternion optimization problems.
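To make the notion of a standard dual quaternion function concrete, consider the squaring map (notation ours, derived only from the dual-unit rule $\varepsilon^2 = 0$, not taken from the paper):

$$\hat{q} = q_{\mathrm{st}} + q_{\mathcal{I}}\,\varepsilon, \qquad \hat{q}^2 = q_{\mathrm{st}}^2 + \left(q_{\mathrm{st}} q_{\mathcal{I}} + q_{\mathcal{I}} q_{\mathrm{st}}\right)\varepsilon,$$

so the standard part of $\hat{q}^2$ is $q_{\mathrm{st}}^2$, which depends only on the standard part $q_{\mathrm{st}}$ of the variable; the power function with exponent 2 is therefore a standard dual quaternion function in the sense described above.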
Funding: This work was funded by the National Natural Science Foundation of China (Grant No. 62172132), the Public Welfare Technology Research Project of Zhejiang Province (Grant No. LGF21F020014), and the Opening Project of the Key Laboratory of Public Security Information Application Based on Big-Data Architecture, Ministry of Public Security, Zhejiang Police College (Grant No. 2021DSJSYS002).
Abstract: The widespread availability of digital multimedia data has led to new challenges in digital forensics. Traditional source camera identification algorithms usually rely on various traces left by the capturing process. However, these traces have become increasingly difficult to extract owing to the wide availability of image processing algorithms. Convolutional Neural Network (CNN)-based algorithms have demonstrated good discriminative capability for different brands and even different models of camera devices. However, their performance is not ideal when distinguishing between individual devices of the same model, because cameras of the same model typically use the same optical lens, image sensor, and image processing algorithms, resulting in minimal overall differences. In this paper, we propose a camera forensics algorithm based on multi-scale feature fusion to address these issues. The proposed algorithm extracts different local features from feature maps at different scales and then fuses them to obtain a comprehensive feature representation, which is fed into a subsequent camera fingerprint classification network. Building upon the Swin-T network, we utilize Transformer blocks and Graph Convolutional Network (GCN) modules to fuse multi-scale features from different stages of the backbone network. Furthermore, we conduct experiments on established datasets to demonstrate the feasibility and effectiveness of the proposed approach.
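A minimal sketch of the multi-scale fusion idea in PyTorch (a generic stand-in, not the paper's Transformer/GCN fusion; channel widths and the classification head are assumptions):

    import torch
    import torch.nn as nn

    class MultiScaleFusion(nn.Module):
        """Project feature maps from several backbone stages to a common width,
        resample them to one resolution, concatenate, and classify the device."""
        def __init__(self, in_channels=(96, 192, 384, 768), width=256, num_devices=10):
            super().__init__()
            self.proj = nn.ModuleList([nn.Conv2d(c, width, kernel_size=1) for c in in_channels])
            self.head = nn.Sequential(
                nn.Conv2d(width * len(in_channels), width, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(width, num_devices),   # individual-device (fingerprint) labels
            )

        def forward(self, feats):                # feats: list of (B, C_i, H_i, W_i) maps
            target = feats[0].shape[-2:]         # fuse at the finest resolution
            fused = [nn.functional.interpolate(p(f), size=target, mode="bilinear",
                                               align_corners=False)
                     for p, f in zip(self.proj, feats)]
            return self.head(torch.cat(fused, dim=1))

In the paper the fusion is carried out by Transformer blocks and GCN modules applied to the Swin-T stages; the sketch above keeps only the project-resample-concatenate skeleton.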
Funding: National Natural Science Foundation of China (NSFC) (No. 11775147), Guangdong Basic and Applied Basic Research Foundation (Nos. 2019A1515110130 and 2024A1515011832), Shenzhen Key Laboratory of Photonics and Biophotonics (ZDSYS20210623092006020), and Shenzhen Science and Technology Program (Nos. JCYJ20210324095007020, JCYJ20200109105201936 and JCYJ20230808105019039).
Abstract: An ultrafast framing camera with a pulse-dilation device, a microchannel plate (MCP) imager, and an electronic imaging system is reported. The camera achieves a temporal resolution of 10 ps by using the pulse-dilation device and gated MCP imager, and a spatial resolution of 100 μm by using an electronic imaging system comprising combined magnetic lenses. The spatial resolution characteristics of the camera were studied both theoretically and experimentally. The results show that the combined magnetic lenses reduce the field curvature and provide a larger working area; applying four magnetic lenses to the camera yielded a working area 53 mm in diameter. Furthermore, the camera was used to detect the X-rays produced by the laser-targeting device. The diagnostic results indicate that the width of the X-ray pulse was approximately 18 ps.
Funding: The Social Development Project of the Jiangsu Key R&D Program (BE2022680) and the National Natural Science Foundation of China (Nos. 62371253, 52278119).
Abstract: This paper introduces an intelligent computational approach for extracting salient objects from images and estimating their distance information with PTZ (Pan-Tilt-Zoom) cameras. PTZ cameras have found wide application in numerous public places, serving purposes such as public security management, natural disaster monitoring, and crisis alarms, particularly with the rapid development of Artificial Intelligence and global infrastructural projects. In this paper, we combine Gaussian optical principles with the PTZ camera's capabilities of horizontal and pitch rotation, as well as optical zoom, to estimate the distance of the object. We present a novel monocular object distance estimation model based on the Focal Length-Target Pixel Size (FLTPS) relationship, achieving an accuracy rate of over 95% for objects within a 5 km range. Salient object extraction is achieved through a simplified convolution kernel and the object's RGB features, which offer significantly faster computing speeds than Convolutional Neural Networks (CNNs). Additionally, we introduce the dark channel prior for fog removal, resulting in a 20 dB increase in image definition, which significantly benefits distance estimation. Our system offers the advantages of stability and low device load, making it an asset for public security affairs and providing a reference point for future developments in surveillance hardware.
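The underlying geometry of focal-length/pixel-size distance estimation can be illustrated with the basic pinhole relation (a simplification for illustration; the paper's FLTPS model and its calibration are not reproduced here, and the numbers below are made up):

    def estimate_distance_m(focal_length_mm, target_height_m, target_height_px, pixel_pitch_um):
        """Pinhole-model distance estimate: a target of real height H that spans
        h pixels, each pixel being p metres on the sensor, at focal length f gives
            distance ~= f * H / (h * p).
        Zooming changes only focal_length_mm; all other quantities are assumed known."""
        f_m = focal_length_mm * 1e-3
        p_m = pixel_pitch_um * 1e-6
        return f_m * target_height_m / (target_height_px * p_m)

    # Example: a 1.7 m person spanning 40 px with a 120 mm zoom setting and 2.9 um pixels
    print(round(estimate_distance_m(120, 1.7, 40, 2.9), 1))  # ~1758.6 m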
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52025121, 52394263) and the National Key R&D Plan of China (Grant No. 2023YFD2000301).
Abstract: This paper aims to develop an automatic miscalibration detection and correction framework to maintain accurate calibration between the LiDAR and camera of an autonomous vehicle after sensor drift. First, a monitoring algorithm is designed that can continuously detect miscalibration in each frame, leveraging the rotational motion observed by each individual sensor. Then, when sensor drift occurs, the projection constraints between visual feature points and LiDAR 3-D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is extensively compared with two representative approaches in online experiments with varying levels of random drift; it is then further extended to an offline calibration experiment and demonstrated through a comparison with two existing benchmark methods.
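A minimal sketch of the rotation-consistency idea behind per-frame miscalibration monitoring (our simplification, assuming per-frame rotations are available from each sensor's own odometry and R_cl denotes the nominal LiDAR-to-camera rotation):

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def rotation_inconsistency_deg(R_cam, R_lidar, R_cl):
        """If the extrinsic rotation R_cl (LiDAR -> camera) is correct, the camera's
        per-frame rotation should equal the LiDAR's rotation conjugated by R_cl.
        The residual angle serves as a per-frame miscalibration score."""
        predicted = R_cl @ R_lidar @ R_cl.T
        residual = R.from_matrix(R_cam.T @ predicted)
        return np.degrees(residual.magnitude())

    # Toy check: zero residual with the true extrinsics, nonzero once they drift.
    R_cl_true = R.from_euler("xyz", [90, 0, 90], degrees=True).as_matrix()
    R_lidar = R.from_euler("z", 30, degrees=True).as_matrix()
    R_cam = R_cl_true @ R_lidar @ R_cl_true.T
    R_cl_drifted = R_cl_true @ R.from_euler("y", 5, degrees=True).as_matrix()
    print(rotation_inconsistency_deg(R_cam, R_lidar, R_cl_true))      # ~0
    print(rotation_inconsistency_deg(R_cam, R_lidar, R_cl_drifted))   # > 0, flags drift

In practice such a score would be accumulated or thresholded over a window of frames before declaring miscalibration.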
Funding: Supported by the National Natural Science Foundation of China (42221002, 42171432), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0100), and the Fundamental Research Funds for the Central Universities.
Abstract: The geometric accuracy of topographic mapping with high-resolution remote sensing images is inevitably affected by orbiter attitude jitter. It is therefore necessary to conduct preliminary research on the stereo mapping camera carried by a lunar orbiter before launch. In this work, an imaging simulation method that considers attitude jitter is presented. The impact of attitude jitter on terrain undulation is analyzed by simulating jitter in each of the three attitude angles. The proposed simulation method is based on the rigorous sensor model and uses a lunar digital elevation model (DEM) and orthoimage as reference data. The orbit and attitude of the lunar stereo mapping camera are simulated while accounting for the attitude jitter, and two-dimensional simulated stereo images are generated according to the position and attitude of the orbiter in a given orbit. Experimental analyses were conducted using the DEM generated from the simulated stereo images. The simulation results demonstrate that the proposed method ensures imaging efficiency without losing topographic mapping accuracy, and the effect of attitude jitter on the stereo mapping accuracy of the simulated images was analyzed through a DEM comparison.
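As an illustration of how jitter can be injected at the attitude level, here is a small helper that superimposes sinusoidal jitter on a nominal roll/pitch/yaw series (amplitudes, frequencies, and phases are illustrative assumptions, not values used in this work):

    import numpy as np

    def add_attitude_jitter(t, nominal_rpy_deg, amp_deg=(0.002, 0.001, 0.002),
                            freq_hz=(0.5, 1.0, 0.2), phase_rad=(0.0, 0.0, 0.0)):
        """Superimpose sinusoidal jitter on nominal roll/pitch/yaw angles.
        t: (N,) sample times in seconds; nominal_rpy_deg: (N, 3) attitude angles."""
        t = np.asarray(t)[:, None]
        amp = np.asarray(amp_deg)[None, :]
        frq = np.asarray(freq_hz)[None, :]
        phs = np.asarray(phase_rad)[None, :]
        jitter = amp * np.sin(2.0 * np.pi * frq * t + phs)
        return np.asarray(nominal_rpy_deg) + jitter

    # 10 s of imaging at 100 Hz with a constant nominal attitude
    t = np.arange(0.0, 10.0, 0.01)
    jittered = add_attitude_jitter(t, np.zeros((t.size, 3)))
    print(jittered.shape)  # (1000, 3)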
Abstract: Real-time indoor camera localization is a significant problem in indoor robot navigation and surveillance systems. The scene can change during an image sequence, and such changes play a vital role in the localization performance of robotic applications in terms of accuracy and speed. This research proposes a real-time indoor camera localization system based on a recurrent neural network that detects scene changes during the image sequence. The proposed system is trained on an annotated image dataset and predicts the camera pose in real time. The system improves the localization performance of indoor cameras mainly by predicting the camera pose more accurately; it also recognizes scene changes during the sequence and evaluates their effects. The system achieves high accuracy and real-time performance. Scene change detection is performed using visual rhythm together with the proposed recurrent deep architecture, which carries out camera pose prediction and scene change impact evaluation. Overall, this study proposes a novel real-time localization system for indoor cameras that detects scene changes and shows how they affect localization performance.
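The pose-regression core of such a system can be pictured as per-frame CNN features followed by a recurrent layer (a generic PyTorch sketch; the encoder, dimensions, and 6-DoF axis-angle output are assumptions, not the paper's architecture):

    import torch
    import torch.nn as nn

    class RecurrentPoseNet(nn.Module):
        """Sketch: per-frame CNN features -> GRU over the image sequence -> 6-DoF pose
        (3 for translation, 3 for an axis-angle rotation) for every frame."""
        def __init__(self, feat_dim=512, hidden=256):
            super().__init__()
            self.encoder = nn.Sequential(              # stand-in for a real CNN backbone
                nn.Conv2d(3, 32, 7, stride=4, padding=3), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=4, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(64 * 4 * 4, feat_dim), nn.ReLU(),
            )
            self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
            self.pose_head = nn.Linear(hidden, 6)

        def forward(self, frames):                     # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
            hidden_seq, _ = self.rnn(feats)
            return self.pose_head(hidden_seq)          # (B, T, 6)

    poses = RecurrentPoseNet()(torch.randn(2, 5, 3, 128, 128))
    print(poses.shape)  # torch.Size([2, 5, 6])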
Funding: Projects (60775049, 60805033) supported by the National Natural Science Foundation of China; Project (2007AA704317) supported by the National High Technology Research and Development Program of China.
Abstract: To overcome the influence of the extreme on-orbit temperature environment on the tool pose (position and orientation) accuracy of a space robot, a new self-calibration method based on a measurement camera (hand-eye vision) attached to its end-effector was presented. Using the relative pose errors between two adjacent calibration positions of the space robot, the cost function of the calibration was built, which differs from the conventional calibration method. The particle swarm optimization (PSO) algorithm was used to optimize this function and realize the geometric parameter identification of the space robot. The calibration method was verified through a self-calibration simulation of a six-DOF space robot whose end-effector was equipped with hand-eye vision. The results showed a significant improvement of tool pose accuracy after calibration in a set of independent reference positions, which verified the feasibility of the method. Moreover, because the method does not need the transformation matrix from the robot base to the calibration plate, it reduces the complexity of the calibration model and shortens the error propagation chain, which helps improve the calibration accuracy.
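A compact sketch of the PSO loop used for geometric parameter identification (plain PSO over an abstract cost; the kinematic model and the relative-pose error of the paper are replaced here by a placeholder cost function):

    import numpy as np

    def pso_minimize(cost, dim, n_particles=40, iters=200, bounds=(-1e-3, 1e-3),
                     w=0.7, c1=1.5, c2=1.5, seed=0):
        """Plain particle swarm optimization over `dim` geometric error parameters."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions
        v = np.zeros_like(x)                                 # velocities
        pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
        g = pbest[np.argmin(pbest_f)].copy()                 # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([cost(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[np.argmin(pbest_f)].copy()
        return g, pbest_f.min()

    # Placeholder cost standing in for the relative-pose error between calibration poses.
    def toy_cost(p):
        return float(np.sum((p - 2e-4) ** 2))

    params, err = pso_minimize(toy_cost, dim=24)
    print(err)  # should approach 0

In the actual method the cost would evaluate, via the kinematic model, the relative pose errors between adjacent calibration positions measured by the hand-eye camera.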
Funding: Supported by the University-level Funding Projects of Guangdong Mechanical & Electrical College of Technology (No. Gccrcxm-202007), the 2021 Guangdong Provincial Department of Education Recognized Scientific Research Projects in Colleges and Universities (No. 2021KTSCX205), and the 2021 School-level Scientific Research Projects (No. YJZD2021-54).
Abstract: For the hand-eye calibration of a vision-based service robot, traditional hand-eye calibration technology cannot be applied, because the service robot is independently developed and there is no teaching device to feed back its pose information in real time. In this paper, a hand-eye calibration method based on ROS (Robot Operating System) is proposed. In this method, the ROS system is used to accurately control the arm of the service robot so that it rotates to many different positions, while the head camera of the service robot takes images of a fixed point in the scene. Nonlinear equations are then established from the homography matrices of the image pairs and the position and pose information provided by the ROS system, and the accurate hand-eye relationship is obtained by least-squares optimization. Finally, an experimental platform is built and the proposed hand-eye calibration method is verified. The experimental results show that the method is easy to operate, algorithmically simple, and yields correct results, which verifies the effectiveness of the algorithm and provides the conditions for realizing humanoid grasping with a vision-based service robot.
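The paper's exact formulation (homographies of a fixed scene point plus ROS pose feedback) is not reproduced here, but the least-squares refinement step can be sketched generically: given known base-to-end-effector poses and camera observations of one fixed point, refine the end-effector-to-camera transform and the point together (conventions and the synthetic data below are assumptions):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as R

    def residuals(params, base_to_end_poses, point_in_cam_obs):
        # params: rotation vector (3) and translation (3) of the end-effector->camera
        # transform, plus the fixed scene point in the robot base frame (3).
        R_ec = R.from_rotvec(params[:3]).as_matrix()
        t_ec, p_base = params[3:6], params[6:9]
        res = []
        for (R_be, t_be), p_obs in zip(base_to_end_poses, point_in_cam_obs):
            p_end = R_be.T @ (p_base - t_be)      # fixed point in the end-effector frame
            p_cam = R_ec.T @ (p_end - t_ec)       # ... then in the camera frame
            res.append(p_cam - p_obs)
        return np.concatenate(res)

    # Synthetic sanity check: simulate 12 arm poses observing one fixed point, then
    # refine a perturbed hand-eye guess by least squares.
    rng = np.random.default_rng(0)
    R_true = R.from_rotvec([0.3, -0.2, 0.1]).as_matrix()
    t_true = np.array([0.05, 0.02, 0.10])
    p_true = np.array([1.0, 0.5, 0.8])
    poses = [(R.from_rotvec(rng.normal(size=3)).as_matrix(), rng.normal(size=3) * 0.3)
             for _ in range(12)]
    obs = [R_true.T @ (R_be.T @ (p_true - t_be) - t_true) + rng.normal(scale=1e-3, size=3)
           for R_be, t_be in poses]
    x0 = np.concatenate([[0.3, -0.2, 0.1], t_true, p_true]) + 0.05   # rough initial guess
    fit = least_squares(residuals, x0, args=(poses, obs))
    print(np.round(fit.x[3:6] - t_true, 3))   # hand-eye translation error after refinement, ~0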
Funding: Supported by the Natural Science Foundation of Beijing Municipality (Beijing Natural Science Foundation) (No. 7191005).
Abstract: Compton camera-based prompt gamma (PG) imaging has been proposed for range verification during proton therapy. However, the deviation between the PG and dose distributions, as well as the difference between the reconstructed PG and the exact values, limits the effectiveness of the approach for accurate range monitoring in clinical applications. The aim of this study was to realize PG-based dose reconstruction with a Compton camera, thereby further improving the prediction accuracy of in vivo range verification and providing a novel method for beam monitoring during proton therapy. In this paper, we present an approach based on a subset-driven origin ensemble with resolution recovery and a double evolutionary algorithm to reconstruct the dose depth profile (DDP) from the gamma events obtained by a cadmium-zinc-telluride Compton camera with limited position and energy resolution. Simulations of proton pencil beams with a clinical particle rate irradiating phantoms made of different materials, as well as a CT-based thoracic phantom, were used to evaluate the feasibility of the proposed method. The results show that for a monoenergetic proton pencil beam irradiating a homogeneous-material box phantom, the accuracy of the reconstructed DDP was within 0.3 mm for range prediction and within 5.2% for dose prediction. In particular, for 1.6-Gy irradiation in the therapy simulation of thoracic tumors, the range deviation of the reconstructed spread-out Bragg peak was within 0.8 mm, and the relative dose deviation in the peak area was less than 7% compared with the exact values. These results demonstrate the potential and feasibility of the proposed method for future Compton-based accurate dose reconstruction and range verification during proton therapy.
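For intuition on how sub-millimetre range deviations are quantified, the range of a depth-dose profile can be taken at a distal falloff level and compared between reconstruction and reference; the sketch below uses the distal 80% level on a toy profile (both the definition and the data are illustrative, not the paper's):

    import numpy as np

    def distal_range_mm(depth_mm, dose, level=0.8):
        """Depth at which the dose falls to `level` x max on the distal side of the
        Bragg peak, found by linear interpolation (one common range definition)."""
        dose = np.asarray(dose, dtype=float) / np.max(dose)
        i_peak = int(np.argmax(dose))
        distal = np.arange(i_peak, len(dose))
        j = distal[dose[distal] < level][0]            # first distal sample below the level
        d0, d1, f0, f1 = depth_mm[j - 1], depth_mm[j], dose[j - 1], dose[j]
        return d0 + (level - f0) * (d1 - d0) / (f1 - f0)

    # Toy profile: slowly rising dose with a peak near 150 mm, then a falloff.
    depth = np.linspace(0, 200, 401)
    dose = np.exp(-0.5 * ((depth - 150) / 4.0) ** 2) + 0.3 * depth / 200
    range_reco = distal_range_mm(depth, dose)
    range_ref = distal_range_mm(depth, np.roll(dose, 1))   # reference shifted by one 0.5 mm bin
    print(round(abs(range_reco - range_ref), 2), "mm range deviation")  # ~0.5 mm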
Funding: Supported by the National Natural Science Foundation of China (No. 12220101005), the Natural Science Foundation of Jiangsu Province (No. BK20220132), the Primary Research and Development Plan of Jiangsu Province (No. BE2019002-3), the Fundamental Research Funds for Central Universities (No. NG2022004), and the Foundation of the Graduate Innovation Center in NUAA (No. xcxjh20210613).
Abstract: A novel and fast three-dimensional reconstruction method for a Compton camera is proposed, and its performance in radionuclide imaging is analyzed in this study. The conical surface sampling back-projection method with scattering angle correction (CSS-BP-SC) can quickly perform the back-projection of the Compton cone and can be used to precompute the list-mode maximum likelihood expectation maximization (LM-MLEM). A dedicated parallel architecture was designed for graphics processing unit acceleration of the back-projection and iteration stages of the CSS-BP-SC-based LM-MLEM. The imaging results of a two-point-source Monte Carlo (MC) simulation demonstrate that, judged by the full width at half maximum along the three coordinate axes, the CSS-BP-SC-based LM-MLEM obtains imaging results comparable to those of the traditional reconstruction algorithm, i.e., the simple back-projection-based LM-MLEM. The imaging results of the mouse phantom MC simulation and experiment demonstrate that the reconstruction results obtained by the proposed method coincide well with the set radioactivity distribution, and the speed increased by a factor of more than 664 compared with the traditional reconstruction algorithm in the mouse phantom experiment. The proposed method will further advance the imaging applications of Compton cameras.
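For reference, the list-mode MLEM update that the back-projection feeds into has the standard form (notation ours):

$$\lambda_j^{(n+1)} \;=\; \frac{\lambda_j^{(n)}}{s_j}\sum_{i=1}^{N_{\text{events}}}\frac{t_{ij}}{\sum_{k} t_{ik}\,\lambda_k^{(n)}},$$

where $\lambda_j$ is the activity in voxel $j$, $s_j$ is the sensitivity of voxel $j$, and $t_{ij}$ is the system-matrix element linking event $i$ to voxel $j$; in the CSS-BP-SC scheme these elements come from the conical-surface back-projection of each Compton cone, which is what the GPU architecture accelerates.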
Abstract: Lanthanum bromide (LaBr₃) crystal has high energy resolution and time resolution and has been used in Compton cameras (CCs) over the past few decades. However, LaBr₃ crystal arrays are difficult to process because LaBr₃ cracks and breaks easily; thus, few LaBr₃-based CC prototypes have been built. In this study, we designed and fabricated a large-pixel LaBr₃ CC prototype and evaluated its position, energy, and angular resolution. We used two 10×10 LaBr₃ crystal arrays with a pixel size of 5 mm×5 mm, silicon photomultipliers (SiPMs), and the corresponding decoding circuits to construct the prototype. Additionally, a framework based on a Voronoi diagram and a lookup table was developed for list-mode projection data acquisition. Monte Carlo (MC) simulations based on Geant4 and experiments were conducted to evaluate the performance of the CC prototype. The lateral position resolution was 5 mm, and the maximum deviation in the depth direction was 2.5 mm for the scatterer and 5 mm for the absorber; the corresponding measured energy resolutions were 7.65% and 8.44%, respectively, at 511 keV. The experimental results for ¹³⁷Cs point-like sources were consistent with the MC simulation results with regard to the spatial positions and full widths at half maximum (FWHMs). The angular resolution of the fabricated prototype was approximately 6° when a point-like ¹³⁷Cs source was placed centrally at a distance of 5 cm from the scatterer. We propose and investigate a large-pixel LaBr₃ CC for the first time and verify its feasibility for accurate spatial positioning of radiative sources with high angular resolution. The proposed CC can satisfy the requirements of radiative source imaging and positioning in the nuclear industry and in medical applications.
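The Voronoi-diagram/lookup-table decoding step can be pictured as nearest-centroid assignment of each measured light centroid to a crystal pixel; a minimal sketch under an assumed 10×10, 5 mm pitch geometry (the real lookup table would be built from flood-map peaks rather than ideal centres):

    import numpy as np
    from scipy.spatial import cKDTree

    # Assumed geometry: 10 x 10 LaBr3 pixels on a 5 mm pitch, centres on a regular grid.
    pitch_mm = 5.0
    xy = (np.arange(10) - 4.5) * pitch_mm
    centres = np.array([(x, y) for x in xy for y in xy])          # (100, 2) pixel centres
    tree = cKDTree(centres)                                       # Voronoi lookup structure

    def decode_pixel(light_centroid_xy):
        """Map a SiPM light centroid (mm) to the index of the nearest crystal pixel,
        i.e. the Voronoi cell it falls in."""
        _, idx = tree.query(light_centroid_xy)
        return int(idx)

    hit = decode_pixel([3.1, -6.8])            # an event landing near the pixel at (2.5, -7.5)
    print(hit, centres[hit])                   # decoded pixel index and its centre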
Funding: National Natural Science Foundation of China (11805269), the West Light Talent Training Plan of the Chinese Academy of Sciences (2022-XBQNXZ-010), and the Science and Technology Innovation Leading Talent Project of Xinjiang Uygur Autonomous Region (2022TSYCLJ0042).
Abstract: γ-rays are widely and abundantly present in strong nuclear radiation environments. When they act on the cameras that nuclear robots use to acquire visual information about their environment, radiation effects occur that degrade the performance of the camera system, reduce imaging quality, and can even cause catastrophic consequences. Color reproduction is an important index for evaluating the imaging quality of a color camera, but its degradation mechanism in a nuclear radiation environment is still unclear. In this paper, γ-ray irradiation experiments on CMOS cameras were carried out to analyze how the camera's color reproduction degrades with cumulative dose and to reveal the degradation mechanism of the color information of a CMOS camera under γ-ray irradiation. The results show that the post-irradiation spectral response of the CMOS image sensor (CIS) and the spectral transmittance of the lens affect the a* and b* values in the LAB color model, while the full well capacity (FWC) of the CIS and the lens transmittance affect the L* value; these changes increase the color difference and reduce brightness, and their combined effect reduces the color reproduction of CMOS cameras. Therefore, the degradation of the color information of a CMOS camera after γ-ray irradiation mainly results from changes in the FWC and spectral response of the CIS and in the spectral transmittance of the lens.
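The color-difference side of this analysis rests on the LAB (CIELAB) distance; a minimal sketch of how shifts in L*, a*, and b* combine into a single color difference (the CIE76 ΔE*ab; the numbers are illustrative, not measured values from this work):

    import math

    def delta_e_ab(lab_ref, lab_test):
        """CIE76 color difference between two LAB triplets (L*, a*, b*)."""
        return math.sqrt(sum((r - t) ** 2 for r, t in zip(lab_ref, lab_test)))

    # Illustrative (not measured) values: brightness loss and a chroma/hue shift after irradiation
    before = (62.0, 18.0, -9.0)
    after = (55.5, 21.0, -5.5)     # lower L* (darker), shifted a* and b*
    print(round(delta_e_ab(before, after), 2))   # ~7.97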