Vehicle recognition systems (VRS) play an important role in the field of intelligent transportation systems. A novel and intuitive method is proposed for vehicle location, based on a human visual perception model. The algorithm adopts the perceptual HSI color space and integrates the three color components of a color image with additional potential edge patterns to solve the feature extraction problem. A fast, automatic thresholding technique based on the human visual perception model is also developed. Vertical and horizontal edge projections are used to locate the left-right and top-bottom boundaries of the vehicle, respectively. Promising experimental results on real-time vehicle image sequences confirm that the proposed vehicle location method is efficient and reliable, and that its computational speed meets the needs of a VRS.
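The boundary-location step described above relies on projecting edge pixels onto the image axes. The following is a minimal sketch of that projection idea, assuming a pre-computed binary edge map; the threshold fraction and function name are illustrative, not taken from the paper:

```python
import numpy as np

def locate_vehicle(edge_map, frac=0.25):
    """Locate a bounding box from a binary edge map via vertical and
    horizontal edge projections (illustrative sketch, not the paper's
    exact algorithm)."""
    # Vertical projection: count edge pixels in each column.
    col_proj = edge_map.sum(axis=0)
    # Horizontal projection: count edge pixels in each row.
    row_proj = edge_map.sum(axis=1)
    # Columns/rows whose edge density reaches a fraction of the peak
    # are taken to belong to the vehicle region.
    cols = np.where(col_proj >= frac * col_proj.max())[0]
    rows = np.where(row_proj >= frac * row_proj.max())[0]
    # Left-right boundary from columns, top-bottom boundary from rows.
    return int(cols[0]), int(cols[-1]), int(rows[0]), int(rows[-1])

# Toy example: a 10x10 edge map with a dense block of edge pixels.
edges = np.zeros((10, 10), dtype=np.uint8)
edges[3:7, 2:7] = 1
print(locate_vehicle(edges))  # -> (2, 6, 3, 6)
```

In practice the projections of real edge maps are noisy, which is where the paper's perception-based automatic threshold would replace the fixed fraction used here.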
We present an omnidirectional vision system implemented to provide our mobile robot with fast tracking and robust localization. An algorithm is proposed to reconstruct the environment from the omnidirectional image and to globally localize the robot on a RoboCup Middle Size League field. This is accomplished by learning a set of visual landmarks, such as the goals and the corner posts. Because the environment changes dynamically and the landmarks are only partially observable, four localization cases are discussed to achieve robust localization performance. Localization is performed by matching the observed landmarks, i.e. color blobs extracted from the environment. The advantages of the cylindrical projection are discussed, with special consideration given to the characteristics of the visual landmarks and the meaning of the blob extraction. The analysis is based on real-time experiments with our omnidirectional vision system and the actual mobile robot. Comparative studies are presented, and the feasibility of the method is shown.
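The blob-based landmark matching can be illustrated with a small sketch. It assumes the omnidirectional image has been unwrapped by cylindrical projection, so that a pixel column maps linearly to a bearing angle around the robot, and that pixels have already been classified into color labels; the function names and labels ('Y' for a yellow goal, 'B' for a blue goal) are hypothetical:

```python
import math

def column_to_bearing(col, image_width):
    """On a cylindrically unwrapped omnidirectional image, the horizontal
    pixel position maps linearly to the bearing angle around the robot."""
    return 2.0 * math.pi * col / image_width

def extract_blobs(label_row):
    """Group consecutive pixels with the same color label into blobs.
    `label_row` is a 1-D list of labels, '.' meaning background."""
    blobs = []
    start = None
    for i, lab in enumerate(label_row + ['.']):  # sentinel closes last blob
        if start is None and lab != '.':
            start, color = i, lab
        elif start is not None and lab != color:
            blobs.append((color, start, i - 1))  # (label, first, last col)
            start = None if lab == '.' else i
            color = lab
    return blobs

# A single scan row of a 360-pixel-wide unwrapped image.
row = ['.', 'Y', 'Y', '.', 'B', 'B', 'B', '.']
for label, first, last in extract_blobs(row):
    center = (first + last) / 2.0
    print(label, round(math.degrees(column_to_bearing(center, 360)), 2))
```

The bearings to two or more known landmarks obtained this way are what a triangulation-style global localization step would consume.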
Street-level visualization is an important application of 3D city models. Its challenges include the visual clutter caused by finely detailed buildings and rendering performance. In this paper, a novel method is proposed for street-level visualization based on visual saliency evaluation. The basic idea is to preserve the salient buildings in a scene while removing the non-salient ones. The method comprises pre-processing procedures and real-time visualization. The first pre-processing step converts 3D building models at higher Levels of Detail (LoDs) into LoD1 models with simplified ground plans. Then, a number of index viewpoints are created along the streets; each index records both the position and the direction of a street site. A visual saliency value is computed for each building with respect to each index site, based on the visual difference between the original model and the generalized model. We compute and evaluate three visual saliency measures: local difference, global difference, and minimum projection area. The real-time visualization process begins by mapping the observer to its closest indices; the street view is then generated from the building information stored in those indices. A user study shows that the local visual saliency method performs better than the global visual saliency, area, and image-based methods, and that the proposed framework may improve the performance of 3D visualization.
Funding: supported by the National Natural Science Foundation of China (Grant No. 41201486), the National Key Technologies R&D Program of China (Grant No. SQ2013GX07E00985), and the project of the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) at the Collaborative Innovation Center of Modern Grain Circulation and Security, Nanjing University of Finance and Economics.