Abstract: Modeling the welding process by robotic vision is primarily a theoretical problem, mainly a physical one but also a technological one. Obtaining a good model of the welding process by robotic vision requires theoretical research as well as continual experimental study of several welding processes. Until now, research on welding processes has been based on empirical and detailed experimentation. This paper presents the welding process from the robotics and automation points of view, with the application of new technologies. A robotic welding system has been designed with the ability to control and inspect this process. The parameters that should be controlled during the process have been identified so that the desired quality can be reached. A figure of the control system for the welding process by robotic vision is given in this paper.
Funding: This project is supported by the National Natural Science Foundation of China (No. 60474036) and the Shanghai Municipal Science and Technology Committee Foundation, China (No. 021111116).
Abstract: A method is put forward to recognize and guide the robot to the initial welding position. The weld seams are marked with black lines, which greatly reduces the computational complexity of the image processing. A two-pass template matching method is proposed to search for the target point; it is simple and computationally fast. Following the depth computation principle for matched feature points in binocular stereovision, the initial welding position is determined by calculating the midpoint of the common perpendicular of the two rays in space. Taking the welding of a propellant fuel container as an example, good results are obtained with these algorithms. Finally, a similar method for locating the terminal welding position is also proposed.
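The depth step described above, taking the midpoint of the common perpendicular between the two back-projected rays, reduces to a few lines of linear algebra. Below is a minimal NumPy sketch under the assumption that each camera ray is already expressed in a common world frame as an origin plus a unit direction; the ray values in the example are placeholders, not data from the paper.

```python
import numpy as np

def midpoint_of_common_perpendicular(o1, d1, o2, d2):
    """Midpoint of the shortest segment joining two (generally skew) 3D rays.

    Each ray is o + t*d, with origin o and direction d expressed in the same
    world frame (e.g. obtained by back-projecting the matched image points
    through the two calibrated cameras).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:            # nearly parallel rays: no unique answer
        raise ValueError("rays are (almost) parallel")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = o1 + t1 * d1                 # closest point on ray 1
    p2 = o2 + t2 * d2                 # closest point on ray 2
    return 0.5 * (p1 + p2)            # taken as the 3D welding start point

# toy example with made-up ray parameters
o1, d1 = np.array([0., 0., 0.]), np.array([1., 0., 0.])
o2, d2 = np.array([0., 1., 1.]), np.array([0., 0., 1.])
print(midpoint_of_common_perpendicular(o1, d1, o2, d2))  # ~ [0., 0.5, 0.]
```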
Abstract: Accurate stereo vision calibration is a preliminary step towards high-precision visual positioning of a robot. Combining the characteristics of the genetic algorithm (GA) and particle swarm optimization (PSO), a three-stage calibration method based on hybrid intelligent optimization is proposed for nonlinear camera models in this paper. The motivation is to improve the accuracy of the calibration process. In this approach, stereo vision calibration is treated as an optimization problem that can be solved by the GA and PSO. Initial linear values are obtained in the first stage. In the second stage, the two cameras' parameters are optimized separately. Finally, the integrated optimized calibration of the two models is obtained in the third stage. Direct linear transformation (DLT), GA and PSO are used in the three stages respectively. It is shown that each stage finds a near-optimal solution that can be used to initialize the next stage. Simulation analysis and actual experimental results indicate that this calibration method is more accurate and robust in noisy environments than traditional calibration methods. The proposed method can fulfill the requirements of sophisticated robot visual operation.
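To make the third stage concrete, the sketch below shows a plain particle swarm refining a camera parameter vector by minimising mean reprojection error. The four-parameter pinhole cost, the `dlt_estimate` initial guess and the search span are illustrative assumptions; the paper's nonlinear camera model and GA stage are not reproduced here.

```python
import numpy as np

def pso_minimise(cost, x0, span, n_particles=40, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm refining an initial guess x0 within +/- span."""
    rng = np.random.default_rng(seed)
    dim = len(x0)
    x = np.asarray(x0, float) + rng.uniform(-span, span, (n_particles, dim))
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

# hypothetical cost: mean squared reprojection error of a 4-parameter pinhole
# model [fx, fy, cx, cy]; pts3d (Nx3, camera frame) and pts2d (Nx2) would come
# from the calibration target
def reprojection_error(params, pts3d, pts2d):
    fx, fy, cx, cy = params
    u = fx * pts3d[:, 0] / pts3d[:, 2] + cx
    v = fy * pts3d[:, 1] / pts3d[:, 2] + cy
    return np.mean((u - pts2d[:, 0]) ** 2 + (v - pts2d[:, 1]) ** 2)

# usage, with dlt_estimate coming from the first (DLT) stage:
# refined = pso_minimise(lambda p: reprojection_error(p, pts3d, pts2d),
#                        x0=dlt_estimate, span=50.0)
```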
Abstract: A new design of a vision-based soccer robot for the MiroSot series using the TMS320F240 DSP is presented. The DSP enables cost-effective control of the DC motors and offers fewer external components, lower system cost and better performance than a traditional microcontroller. The hardware architecture of the robot is first presented in detail, and then the software design is briefly discussed. The control structure of the decision-making subsystem is also described in this paper. Conclusions and prospects are given at the end.
Funding: Supported by the Thailand Research Fund and Solimac Automation Co., Ltd. under the Research and Researchers for Industry Program (RRI), Grant No. MSD56I0098, and by the Office of the Higher Education Commission under the National Research University Project of Thailand.
Abstract: In this paper, we present a robot-vision-based system for coordinate measurement of feature points on large-scale automobile parts. Our system consists of an industrial 6-DOF robot mounted with a CCD camera and a PC. The system moves the robot into the area of the feature points. Images of the feature points are acquired by the camera mounted on the robot, and the 3D positions of the feature points are obtained from a model-based pose estimation applied to these images. The measured positions of all feature points are then transformed to the reference coordinate frame of the feature points, whose positions are obtained from a coordinate measuring machine (CMM). Finally, the point-to-point distances between the measured feature points and the reference feature points are calculated and reported. The results show that the root mean square error (RMSE) of the values measured by our system is less than 0.5 mm. Our system is adequate for automobile assembly and can perform faster than conventional methods.
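The reporting step, transforming the measured points into the CMM reference frame and computing point-to-point errors, can be sketched as follows. A standard Kabsch/SVD rigid fit is used here as a stand-in for however the paper actually establishes the frame transform, so treat it as an assumption rather than the system's method.

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def point_to_point_rmse(measured, reference):
    """RMSE of distances between aligned measured points and CMM reference points."""
    R, t = rigid_align(measured, reference)
    aligned = measured @ R.T + t
    d = np.linalg.norm(aligned - reference, axis=1)   # per-point errors
    return np.sqrt(np.mean(d ** 2)), d
```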
Abstract: An automatic sweeping robot under development needs to estimate whether anyone is within a certain range of the road ahead and then automatically adjust its running speed, in order to ensure both work efficiency and operational safety. This paper proposes a method that uses face detection on the image sensor data to make this estimate. The experimental results show that the proposed algorithm is practical and reliable, and good outcomes have been achieved in its application to the instruction robot.
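A minimal version of the idea, slowing down whenever a face is detected ahead, can be written with OpenCV's stock Haar cascade. The cascade choice and the speed values are assumptions for illustration; the paper does not specify its detector.

```python
import cv2

# Stock frontal-face Haar cascade shipped with OpenCV (an assumption here;
# the paper does not state which detector it uses).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

NORMAL_SPEED, SLOW_SPEED = 0.8, 0.2   # placeholder speeds in m/s

def speed_command(frame_bgr):
    """Return a reduced speed if any face is detected ahead of the robot."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(40, 40))
    return SLOW_SPEED if len(faces) > 0 else NORMAL_SPEED
```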
Abstract: The task of designing an intelligent control system using soft and quantum computational intelligence technologies is discussed. An example of a control object, a mobile robot with a redundant robotic manipulator and stereovision, is introduced. The design of robust knowledge bases is performed using a developed computational intelligence quantum/soft computing toolkit (QC/SCOptKB™). The self-organization process of the knowledge bases of fuzzy homogeneous regulators through the application of end-to-end quantum computing IT is described. Coordination control between the mobile robot and the redundant manipulator with stereovision, based on soft computing, is described. The general design methodology of a generalizing control unit based on the physical laws of quantum computing (the quantum information-thermodynamic trade-off between control quality distribution and the knowledge base self-organization goal) is considered. The modernization of the pattern recognition system based on stereo vision technology is presented. The effectiveness of the proposed methodology is demonstrated by comparison with control system structures based on soft computing under unforeseen control situations with the sensor system. The main objective of this article is to demonstrate the advantages of the quantum/soft computing approach.
Abstract: The technology and woodwork industries have been going through a phase of refinement over a period of a hundred years; up until today, woodworking traditions still exist and are improving with the help of modern technology. Looking at history and current studies on furniture production, many factories produce millions of chairs for entire continents, and maintaining a market-leading position requires a great deal of human dedication and intelligent investment in technology. Using automation in painting means automatically controlling the process with industrial machines such as robots, conveyors and others, which saves time and money and also reduces human effort. Most automation companies specialize in industrial automation, industrial robots, communications, electronics, and software for developing complex projects that automate the desired task by integrating robots. This study is conducted to shed more light on the benefits and impacts of automation in painting furniture parts. One of the benefits that most companies gain from automating the painting process with robots is productivity growth. The recruitment process for specialized painting personnel has also been greatly simplified, since the robots have taken over the difficult tasks, and protection from areas with a high risk of inhaling harmful substances is assured. In this research, automation is shown to be efficient and effective in the painting system through a review of different studies. Automated dyeing processes in factories have progressed thanks to the successful results achieved by automated painting robots. In that regard, the impact of automation in painting furniture parts is evident in the production of fitments found in almost every home, through the solutions automation offers to the consumer.
Abstract: This research is dedicated to developing a safety measure for human-machine cooperative systems in which the machine region and the human region cannot be separated, owing to overlap and to movement of both humans and machines. Our proposal is to automatically monitor the moving objects by image sensing and recognition, so that the machine system can obtain enough information about the environment and the production progress at any time, and the machines can accordingly take corresponding actions automatically to avoid hazards. For this purpose, two types of monitoring systems are proposed. The first type is based on an omnidirectional vision sensor, and the second on a stereo vision sensor. Each type may be used alone or together with the other, depending on the safety system's requirements and the specific situation of the manufacturing field to be monitored. In this paper, these two types are described, and, for the application of these image sensors to safety control, the construction of a hierarchical safety system is proposed.
Funding: Supported by the National Natural Science Foundation of China and Microsoft Research Asia (Nos. NSFC-61173096 and 61103140), NCET, and the Science and Technology Department of Zhejiang Province (Nos. R1110679, 2010R10006, 2010C33095 and Y1090592).
Abstract: This paper presents some human-inspired strategies for lighting control in a robot system for best scene interpretation, where the main intention is to avoid possible glare or highlights occurring in images. It first compares the characteristics of human eyes and robot eyes. Then some evaluation criteria are introduced to assess the lighting conditions. A bio-inspired method is adopted to avoid visual glare, which is caused by either direct illumination from large light sources or indirect illumination reflected by smooth surfaces. Appropriate methods are proposed to optimize the pose and optical parameters of the light source and the vision camera.
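One simple evaluation criterion of the kind mentioned is the fraction of near-saturated pixels in the image; if glare or highlights cover too much of the frame, the lighting or camera pose is rejected. The sketch below is only an illustrative stand-in, and the threshold values are assumptions, not the paper's criteria.

```python
import cv2
import numpy as np

def highlight_fraction(frame_bgr, sat_thresh=250):
    """Fraction of pixels whose brightness is close to sensor saturation."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return float(np.mean(gray >= sat_thresh))

def lighting_acceptable(frame_bgr, max_fraction=0.02):
    """Accept the pose/illumination only if glare covers little of the image."""
    return highlight_fraction(frame_bgr) < max_fraction
```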
Funding: This research was supported by BB21 plus, funded by Busan Metropolitan City and the Busan Institute for Talent and Lifelong Education (BIT), and by a grant from the Tongmyong University Innovated University Research Park (I-URP), funded by Busan Metropolitan City, Republic of Korea.
Abstract: The process of segmenting point cloud data into several homogeneous regions, with points in the same region sharing the same attributes, is known as 3D segmentation. Segmentation is challenging with point cloud data due to substantial redundancy, fluctuating sample density and the lack of apparent organization. The research area has a wide range of robotics applications, including intelligent vehicles, autonomous mapping and navigation, and a number of researchers have introduced various methodologies and algorithms. Deep learning has been successfully applied to a spectrum of 2D vision domains as the prevailing AI method; however, due to the specific problems of processing point clouds with deep neural networks, deep learning on point clouds is still in its initial stages. This study examines many strategies that have been proposed for 3D instance and semantic segmentation and gives a complete assessment of current developments in deep learning-based 3D segmentation. The benefits, drawbacks and design mechanisms of these approaches are studied and addressed. This study also evaluates the competitiveness of various segmentation algorithms on publicly accessible datasets, as well as the most frequently used pipelines, their advantages and limits, insightful findings and intriguing future research directions.
Funding: This work was supported by STI 2030-Major Projects No. 2021ZD0201403, and in part by NSFC 62088101 Autonomous Intelligent Unmanned Systems.
Abstract: The colour-enhanced point cloud map is increasingly being employed in fields such as robotics, 3D reconstruction and virtual reality. The authors propose ER-Mapping (Extrinsic Robust coloured Mapping system using residual evaluation and selection). ER-Mapping consists of two components: the simultaneous localisation and mapping (SLAM) subsystem and the colouring subsystem. The SLAM subsystem reconstructs the geometric structure, employing dynamic threshold-based residual selection in LiDAR-inertial odometry to improve mapping accuracy. The colouring subsystem, on the other hand, focuses on recovering texture information from the input images and innovatively utilises 3D–2D feature selection and optimisation methods, eliminating the need for strict hardware time synchronisation and highly accurate extrinsic parameters. Experiments were conducted in both indoor and outdoor environments. The results demonstrate that the system enhances accuracy, reduces computational costs and achieves extrinsic robustness.
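The dynamic threshold-based residual selection can be illustrated generically: keep only the per-point residuals that fall below a threshold adapted to the current scan's residual distribution. The MAD-based rule below is an assumed example of such a rule, not ER-Mapping's actual formula.

```python
import numpy as np

def select_residuals(residuals, k=3.0):
    """Keep residuals below a threshold adapted to the current distribution.

    'residuals' would be the per-point point-to-plane distances of one LiDAR
    scan against the map; k scales a robust spread estimate (MAD).
    """
    r = np.abs(residuals)
    med = np.median(r)
    mad = np.median(np.abs(r - med)) + 1e-9
    threshold = med + k * 1.4826 * mad        # dynamic, per-scan threshold
    mask = r < threshold
    return mask, threshold                    # inlier mask fed to the optimiser
```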
Funding: Supported by the National Basic Research Program of China (No. 2013CB733103).
Abstract: Cooperative target identification is the prerequisite for relative position and orientation measurement between the space robot arm and the object to be arrested. We propose an on-orbit, real-time, robust algorithm for cooperative target identification against a complex background using circle and line features. It first extracts only the edges of interest in the target image using an adaptive threshold and refines them to roughly single-pixel width with improved non-maximum suppression. Adopting a novel tracking approach, edge segments whose tangential direction changes smoothly are obtained, and large numbers of invalid edges are removed with little computation. From the few remaining edges, valid circular arcs are extracted and reassembled into circles according to a reliable criterion. Finally, the target is identified if there is a certain number of straight lines whose relative positions with respect to the circle match the known target pattern. Experiments demonstrate that the proposed algorithm accurately identifies the cooperative target within a range of 0.3-1.5 m against a complex background at a speed of 8 frames per second, regardless of lighting conditions and target attitude. The proposed algorithm is very suitable for real-time visual measurement by a space robot arm because of its robustness and small memory requirement.
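For the circle-recovery part of such a pipeline, a common building block is a least-squares circle fit over candidate edge points. The algebraic fit below is a generic substitute for the paper's arc reassembly criterion and is included only as a sketch.

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit to Nx2 edge points.

    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c).
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = np.array([-a / 2.0, -b / 2.0])
    radius = np.sqrt(centre @ centre - c)
    return centre, radius

# edge points sampled on a circle of radius 5 centred at (10, -3)
theta = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([10 + 5 * np.cos(theta), -3 + 5 * np.sin(theta)])
print(fit_circle(pts))   # ~ (array([10., -3.]), 5.0)
```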
Funding: Supported by the National Natural Science Foundation of China (Nos. 60805038 and 60725309) and the Beijing Natural Science Foundation (No. 4082032).
Abstract: The relative pose between the inertial and visual sensors mounted on autonomous robots is calibrated in two steps. In the first step, the sensing system is moved along a line, and the orientation part of the relative pose is computed from at least five corresponding points in the two images captured before and after the movement. In the second step, the translation parameters of the relative pose are obtained from at least two corresponding points in the two images captured before and after a one-step motion. Experiments are conducted to verify the effectiveness of the proposed method.
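The first step, recovering the orientation from at least five correspondences between the images taken before and after the translation, maps directly onto the classical essential-matrix estimation available in OpenCV. The sketch below assumes calibrated points and a known 3x3 intrinsic matrix K; variable names and parameter values are placeholders.

```python
import cv2

def relative_rotation(pts_before, pts_after, K):
    """Rotation between two camera views from >= 5 point correspondences."""
    E, inliers = cv2.findEssentialMat(pts_before, pts_after, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # recoverPose returns R and a translation direction up to scale;
    # only R is needed for the first calibration step
    _, R, t, _ = cv2.recoverPose(E, pts_before, pts_after, K, mask=inliers)
    return R, t
```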
Funding: Supported by the National Natural Science Foundation of China (No. 61403226) and the State Key Laboratory of Tribology of China (No. SKLT09A03).
Abstract: A localization method based on a distance function of projected features is presented to address the loss of accuracy or outright failure caused by occlusion and smog-induced blurring in vision-based localization of a target oil and gas wellhead (OGWH). Firstly, the target OGWH is modeled as a cylinder with a marker, and a vector with a redundant parameter is used to describe its pose. Secondly, the explicit mapping between this pose vector and the projected features is derived. Then, a 2D-point-to-feature distance function is proposed, together with its derivative. Finally, based on this distance function and its derivative, an algorithm is proposed to estimate the pose of the target OGWH directly from the 2D image information, and the validity of the method is verified by experiments with both synthetic data and real images. The results show that the method accomplishes the localization in the presence of occlusion and blurring, and that its noise tolerance is good, particularly for noise ratios below 70%.
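Since the estimator is built on a 2D-point-to-feature distance function and its derivative, the pose refinement can be pictured as a small Gauss-Newton loop. The sketch below keeps the distance function abstract and approximates the Jacobian numerically, so it is an assumed simplification rather than the paper's algorithm, which uses the analytic derivative.

```python
import numpy as np

def gauss_newton(residual_fn, pose0, iters=20, eps=1e-6):
    """Generic Gauss-Newton refinement of a pose vector.

    residual_fn(pose) must return the stacked 2D-point-to-feature distances;
    the Jacobian is approximated here by forward differences.
    """
    pose = np.asarray(pose0, dtype=float).copy()
    for _ in range(iters):
        r = residual_fn(pose)
        J = np.empty((r.size, pose.size))
        for j in range(pose.size):
            dp = np.zeros_like(pose)
            dp[j] = eps
            J[:, j] = (residual_fn(pose + dp) - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # normal-equation step
        pose += step
        if np.linalg.norm(step) < 1e-8:
            break
    return pose
```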
Funding: National Natural Science Foundation of China, Grant/Award Number: 61973184; Young Scholars Program of Shandong University, Weihai, Grant/Award Number: 20820211010; National Key Research and Development Plan of China, Grant/Award Number: 2020AAA0108903; Natural Science Foundation of Shandong Province, Grant/Award Numbers: ZR2020MD041, ZR2020MF077.
Abstract: With the rapid development of artificial intelligence (AI), the application of this technology in the medical field is becoming increasingly extensive, along with a gradual increase in the amount of intelligent equipment in hospitals. Service robots can save human resources and take over some of the work of nursing staff. For the scenario in which mobile service robots grasp and deliver patients' medication in hospitals, a real-time object detection and positioning system based on image and text information is proposed, which achieves precise positioning and tracking of the objects to be grasped and completes the grasp of a specific object (a medicine bottle). The lightweight object detection model NanoDet is used to learn the features of the grasping objects and the object categories, and bounding boxes are regressed. The images in the bounding boxes are then enhanced to overcome unfavourable factors such as a small object region. The text detection and recognition model PP-OCR detects and recognises the enhanced images and extracts the text information. The object information provided by the two models is fused, and the text recognition result is matched with the object detection box to achieve precise positioning of the grasping object. The kernel correlation filter (KCF) tracking algorithm is introduced to achieve real-time tracking of specific objects so that the robot's grasping can be controlled precisely. Both deep learning models adopt lightweight networks to facilitate direct deployment. The experiments show that the proposed robot grasping detection system has high reliability, accuracy and real-time performance.
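The fusion and tracking steps can be sketched as follows: each recognised text string is attached to the detection box that contains its centre, and the matched box is then handed to a KCF tracker. The box and OCR result formats are assumptions, and the KCF constructor name differs across OpenCV builds (cv2.TrackerKCF_create in opencv-contrib, cv2.legacy.TrackerKCF_create in some newer versions).

```python
import cv2

def match_text_to_boxes(det_boxes, ocr_results):
    """Attach each recognised string to the detection box containing its centre.

    det_boxes:   list of (x, y, w, h) tuples from the object detector
    ocr_results: list of ((tx, ty, tw, th), text) pairs from the text recogniser
    """
    matches = {}
    for (tx, ty, tw, th), text in ocr_results:
        cx, cy = tx + tw / 2.0, ty + th / 2.0
        for i, (x, y, w, h) in enumerate(det_boxes):
            if x <= cx <= x + w and y <= cy <= y + h:
                matches.setdefault(i, []).append(text)
    return matches   # detection index -> list of text strings inside it

def start_tracking(frame, box):
    """Initialise a KCF tracker on the matched box for real-time following."""
    tracker = cv2.TrackerKCF_create()          # cv2.legacy.* in some builds
    tracker.init(frame, tuple(int(v) for v in box))
    return tracker
```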