Abstract: The fast-paced nature of robotic soccer necessitates real-time sensing coupled with quick behavior and decision making. On a field with real robots, it is important to accurately perceive the locations of the ball, teammates, and opponent robots through the vision system in real time. This paper describes the architecture of the global vision system of our small-size robot team and its object-recognition process. Based on a study of color distribution in different color spaces and a quantitative investigation, a method that uses H (hue) thresholds as the primary thresholds to extract features and recognize objects in real time is presented.
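The hue-threshold idea above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the RGB-to-hue conversion is the standard piecewise formula, and the function names and threshold bounds (`lo`, `hi`, in degrees) are our own.

```python
import numpy as np

def rgb_to_hue(rgb):
    """Convert an RGB image (H, W, 3) with values in [0, 1] to hue in degrees [0, 360)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    delta = mx - mn
    hue = np.zeros_like(mx)
    chroma = delta > 0
    # Piecewise hue formula, selected by which channel is the maximum.
    rmax = chroma & (mx == r)
    gmax = chroma & (mx == g) & ~rmax
    bmax = chroma & ~rmax & ~gmax
    hue[rmax] = ((g - b)[rmax] / delta[rmax]) % 6
    hue[gmax] = (b - r)[gmax] / delta[gmax] + 2
    hue[bmax] = (r - g)[bmax] / delta[bmax] + 4
    return hue * 60.0

def hue_mask(rgb, lo, hi):
    """Binary mask of pixels whose hue lies in [lo, hi] degrees."""
    h = rgb_to_hue(rgb)
    return (h >= lo) & (h <= hi)
```

Because hue is largely invariant to brightness, a single pair of hue bounds per object color (ball, team markers) tends to be more robust under field lighting changes than thresholding RGB channels directly, which is the motivation the abstract gives for using H as the major threshold.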
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60804060) and the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 200800061003).
Abstract: Multi-sensor vision systems play an important role in the 3D measurement of large objects. However, because the sensors are widely distributed, they frequently lack common fields of view (FOV), which makes global calibration of the vision system quite difficult. The primary existing solution relies on large-scale surveying equipment, which is cumbersome and inconvenient for field calibration. In this paper, a global calibration method for multi-sensor vision systems is proposed and investigated. The method uses pairs of skew laser lines, generated by a group of laser pointers, as the calibration objects. Each pair of skew laser lines defines a unique coordinate system in space that can be reconstructed in a given vision sensor's coordinates using a planar pattern. The geometries of the sensors are then computed under rigid-transformation constraints, taking the coordinates of each skew-line pair as the intermediary. The method is applied both to visual cameras with synthetic data and to a real two-camera vision system; the results show its validity and good performance. The prime contribution of this paper is the use of skew laser lines as the global calibration objects, which makes the method simple and flexible. The method requires no expensive equipment and can be used for large-scale calibration.
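The abstract does not spell out how the rigid transformation between two sensors is computed; a standard building block for that step is the SVD-based (Kabsch) least-squares estimate of a rigid transform aligning two reconstructions of the same points, sketched here under that assumption with illustrative names.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) such that Q ≈ R @ P + t.
    P, Q: (3, N) arrays of corresponding points expressed in two sensor frames."""
    p0 = P.mean(axis=1, keepdims=True)
    q0 = Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T                     # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the optimal orthogonal matrix.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t
```

In the paper's setting, each skew-line pair reconstructed in two sensors' coordinates would supply such corresponding geometry, and chaining the recovered transforms places all sensors in one global frame.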
Abstract: Innovative ZTE is a shining example of a Chinese company's ongoing pursuit of excellence. By Lu Qianwen. Every autumn, 28-year-old electronics fan Zhong Datong gets excited. It is the time his favorite annual event, "P&T/EXPO Comm China," the largest international information and communication technology exhibition in Asia, is held in Beijing. This year the dazzling new products at the booth of the ZTE Corp. were a contin-
Abstract: Building fences to manage cattle grazing can be very expensive and cost-inefficient, and fences do not provide dynamic control over the area in which the cattle graze. Existing virtual fencing techniques for controlling herds of cattle, based on polygon-coordinate definitions of boundaries, are limited in land-mass coverage and dynamism. This work seeks to develop a more robust and improved monocular-vision-based boundary avoidance for a non-invasive stray-control system for cattle, with a view to increasing the land-mass coverage and dynamism of virtual fencing techniques. Monocular depth estimation is modeled using the global Fourier Transform (FT) and local Wavelet Transform (WT) of the image structure of scenes (boundaries). The magnitude of the global Fourier Transform gives the dominant orientations and textural patterns of the image, while the local Wavelet Transform gives the dominant spectral features of the image and their spatial distribution. Each scene image is described by a feature vector v, which contains the set of global (FT) and local (WT) statistics of the image. Scene or boundary distances are obtained by estimating the depth D from the image features v. Sound cues with intensity proportional to the magnitude of the depth D are applied to the animal's ears as stimuli. This brings about the desired control, as animals tend to move away from uncomfortable sounds.
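A global FT descriptor of the kind described can be sketched as follows. This is only an assumed form of the feature — the authors' exact statistics are not given in the abstract: here the log-magnitude spectrum is pooled into a coarse grid of averages, which captures dominant orientations and the layout of spectral energy.

```python
import numpy as np

def global_ft_features(img, bins=4):
    """Coarse global descriptor of a grayscale image: the centered log-magnitude
    of its 2D FFT, average-pooled into a bins x bins grid, flattened to a vector."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    logmag = np.log1p(mag)                        # compress the large dynamic range
    h, w = logmag.shape
    hb, wb = h // bins, w // bins
    pooled = logmag[:hb * bins, :wb * bins].reshape(bins, hb, bins, wb).mean(axis=(1, 3))
    return pooled.ravel()
```

In the system described, such a vector (together with local wavelet statistics) would form the features v from which the depth D of a boundary scene is regressed, with the sound-cue intensity then set from D.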
Abstract: To improve accuracy and fault tolerance, a Global Positioning System (GPS)/vision navigation (VISNAV) integrated relative navigation and attitude determination approach is presented for ultra-close spacecraft formation flying. Onboard GPS and VISNAV systems are adopted, and a federated Kalman filter architecture is used for the overall navigation system design. Simulation results indicate that the integrated system improves relative navigation and attitude estimation performance in both accuracy and fault tolerance.
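In a federated Kalman architecture, each sensor (here GPS and VISNAV) runs its own local filter and a master filter fuses their outputs. The simplest fusion step is information-weighted averaging of the local estimates, sketched below; this assumes independent local filters and is an illustration of the architecture, not the paper's specific design.

```python
import numpy as np

def fuse_estimates(x1, P1, x2, P2):
    """Master-filter fusion of two local Kalman filter outputs.
    x_i: local state estimates; P_i: their covariance matrices (assumed independent)."""
    I1 = np.linalg.inv(P1)                 # information matrix of local filter 1
    I2 = np.linalg.inv(P2)                 # information matrix of local filter 2
    P = np.linalg.inv(I1 + I2)             # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)            # fused state, weighted by confidence
    return x, P
```

The fault-tolerance benefit the abstract claims follows from this structure: if one local filter is detected as faulty, the master filter can simply drop its information term and continue with the remaining sensor.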
Abstract: We present an omnidirectional vision system implemented to provide our mobile robot with fast tracking and robust localization. An algorithm is proposed to reconstruct the environment from the omnidirectional image and globally localize the robot in the context of the RoboCup Middle Size League field. This is accomplished by learning a set of visual landmarks such as the goals and the corner posts. Because of the dynamically changing environment and partially observable landmarks, four localization cases are discussed in order to obtain robust localization performance. Localization is performed by matching the observed landmarks, i.e. color blobs, extracted from the environment. The advantages of the cylindrical projection are discussed, with special consideration given to the characteristics of the visual landmarks and the meaning of the blob extraction. The analysis is based on real-time experiments with our omnidirectional vision system on the actual mobile robot. Comparative studies are presented and the feasibility of the method is shown.
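The cylindrical projection mentioned above unwarps the circular mirror image into a panorama in which azimuth maps to a column, which simplifies blob extraction and bearing measurement. A minimal nearest-neighbor sketch of such an unwarping is given below; the mirror center and radius bounds are illustrative parameters, not values from the paper.

```python
import numpy as np

def unwarp_omni(img, center, r_min, r_max, out_w=360):
    """Unwarp a circular omnidirectional image into a cylindrical panorama.
    Each output column is one azimuth angle; each row one radius on the mirror.
    Nearest-neighbor sampling; works for grayscale (H, W) or color (H, W, 3)."""
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    rs = np.arange(r_min, r_max)
    ys = (cy + rs[:, None] * np.sin(thetas[None, :])).round().astype(int)
    xs = (cx + rs[:, None] * np.cos(thetas[None, :])).round().astype(int)
    ys = np.clip(ys, 0, img.shape[0] - 1)
    xs = np.clip(xs, 0, img.shape[1] - 1)
    return img[ys, xs]
```

After unwarping, the bearing of a detected color blob (goal or corner post) is read directly from its column index, which is what makes landmark matching for localization straightforward in this representation.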