Journal Articles
36 articles found
1. OPTIMIZED MEANSHIFT TARGET REFERENCE MODEL BASED ON IMPROVED PIXEL WEIGHTING IN VISUAL TRACKING (Cited: 4)
Authors: Chen Ken, Song Kangkang, Kyoungho Choi, Guo Yunyan. Journal of Electronics (China), 2013, Issue 3, pp. 283-289.
The generic Meanshift is susceptible to interference between background pixels and target pixels in the kernel of the reference model, which compromises tracking performance. In this paper, we enhance the target color feature by attenuating the background color within the kernel, enlarging the weightings of pixels that map onto the target. This way, background pixel interference is largely suppressed in the color histogram while constructing the target reference model. In addition, the proposed method reduces the number of Meanshift iterations, which speeds up algorithmic convergence. Two tests on real-world video sequences validate the improved tracking robustness of the proposed approach.
Keywords: visual tracking; Meanshift; color feature histogram; pixel weighting; tracking robustness
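The pixel-weighting idea above, scaling each pixel's histogram vote so that target pixels dominate the reference model, can be sketched roughly as follows; the binning scheme and weight values are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_color_histogram(patch, weights, bins=4):
    """Quantize an HxWx3 RGB patch into bins^3 colors and accumulate a
    histogram in which each pixel votes with its spatial weight.
    Enlarging the weights of target pixels suppresses background color
    in the reference model, as the abstract describes."""
    step = 256 // bins
    idx = (patch.astype(np.int64) // step).reshape(-1, 3)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.zeros(bins ** 3)
    np.add.at(hist, flat, weights.ravel().astype(float))
    total = hist.sum()
    return hist / total if total > 0 else hist

# Toy patch: green background, red 4x4 target; target pixels weighted 5x.
patch = np.zeros((8, 8, 3), dtype=np.uint8)
patch[...] = (0, 255, 0)
patch[2:6, 2:6] = (255, 0, 0)
weights = np.ones((8, 8))
weights[2:6, 2:6] = 5.0
hist = weighted_color_histogram(patch, weights)
```

Without the weights the 48 green pixels would dominate the 16 red ones (0.75 vs. 0.25 of the mass); with them the target color wins, which is the suppression effect the abstract claims.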
2. Visual tracking based on transfer learning of deep salience information (Cited: 2)
Authors: Haorui Zuo, Zhiyong Xu, Jianlin Zhang, Ge Jia. Opto-Electronic Advances, 2020, Issue 9, pp. 30-40.
In this paper, we propose a new visual tracking method built on salience information and deep learning. Salience detection is used to exploit image features that carry salient information, while each layer of a convolutional neural network (CNN) yields increasingly complex representations of image features. The attention-based salience characteristic of biological vision resembles the feature hierarchy of a CNN, which motivates us to improve the representational ability of the CNN with salience detection. We adopt fully convolutional networks (FCNs) for salience detection and take parts of the network structure to perform salience extraction, which promotes the classification ability of the model. The proposed network shows strong tracking performance with salient information: compared with other excellent algorithms, ours tracks the target better on the open tracking datasets, achieving 0.5592 accuracy on the Visual Object Tracking 2015 (VOT15) dataset; on the Unmanned Aerial Vehicle 123 (UAV123) dataset, the precision and success rate of our tracker are 0.710 and 0.429, respectively.
Keywords: convolutional neural network; transfer learning; salience detection; visual tracking
3. Robust visual tracking algorithm based on Monte Carlo approach with integrated attributes (Cited: 1)
Authors: 席涛, 张胜修, 颜诗源. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2010, Issue 6, pp. 771-775.
To improve the reliability and accuracy of a visual tracker, a robust visual tracking algorithm based on multi-cue fusion under a Bayesian framework is proposed. Weighted color and texture cues are applied to describe the moving object. An adjustable observation model is incorporated into particle filtering, exploiting the particle filter's ability to cope with non-linear, non-Gaussian assumptions and to predict the position of the moving object in a cluttered environment. Two complementary attributes are employed to estimate the matching similarity dynamically in terms of likelihood ratio factors, and the weight values are tuned online according to the confidence maps of the color and texture features to reconfigure the optimal observation likelihood model. This ensures the maximum likelihood ratio in the tracking scenario even when the object is occluded or when illumination, pose, and scale are time-variant. Experimental results show that the algorithm tracks a moving object accurately, and the reliability of tracking in challenging cases is validated.
Keywords: visual tracking; particle filter; Gabor wavelet; Monte Carlo approach; multi-cue fusion
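One way to read the adaptive color/texture fusion described above is as an observation likelihood whose cue weight is renormalized online from each cue's confidence. A minimal sketch follows; the confidence measure and the linear fusion rule are assumptions for illustration, not the paper's exact likelihood-ratio formulation:

```python
def adapt_weight(color_conf, texture_conf):
    """Turn the two cues' current confidence scores into a fusion
    weight for the color cue (texture gets the complement)."""
    total = color_conf + texture_conf
    return 0.5 if total == 0 else color_conf / total

def observation_likelihood(color_sim, texture_sim, w_color):
    """Adjustable observation model: weighted combination of the two
    complementary similarity scores, used to weight each particle."""
    return w_color * color_sim + (1.0 - w_color) * texture_sim

# When texture becomes unreliable (e.g., under occlusion), its weight decays.
w = adapt_weight(color_conf=0.8, texture_conf=0.2)
lik = observation_likelihood(color_sim=0.9, texture_sim=0.1, w_color=w)
```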
4. A creative design of robotic visual tracking system in tailor welded blanks based on TRIZ (Cited: 1)
Authors: 张雷, 赵明扬, 邹媛媛, 赵立华. China Welding (EI, CAS), 2006, Issue 4, pp. 23-25.
Based on the main tools of TRIZ, the theory of inventive problem solving, a new flowchart of the product conceptual design process for resolving contradictions in TRIZ is proposed. In order to realize autonomous movement and automatic weld-seam tracking for a welding robot working on tailor welded blanks, a creative design of a CMOS-based robotic visual tracking system has been developed using this flowchart. The new system not only inspects the workpiece ahead of the welding torch and measures the joint orientation and the lateral deviation caused by curvature or discontinuity in the joint, but also records and measures the image size of the weld pool. The hardware and software components are discussed in brief.
Keywords: visual tracking; creative design; TRIZ
5. MULTI-TARGET VISUAL TRACKING AND OCCLUSION DETECTION BY COMBINING BHATTACHARYYA COEFFICIENT AND KALMAN FILTER INNOVATION (Cited: 1)
Authors: Chen Ken, Chul Gyu Jhun. Journal of Electronics (China), 2013, Issue 3, pp. 275-282.
This paper introduces an approach for visual tracking of multiple targets with occlusion occurrence. Building on the authors' previous work, in which the Overlap Coefficient (OC) is used to detect occlusion, a method combining the Bhattacharyya Coefficient (BC) and the Kalman filter innovation term is proposed as the criterion for jointly detecting occlusion occurrence. Fragmentation of the target is introduced in order to closely monitor the development of the occlusion. During occlusion, the Kalman predictor determines the location of the occluded target, and a criterion for checking the re-appearance of the occluded target is also presented. The proposed approach is tested on a standard video sequence, suggesting satisfactory performance in multi-target tracking.
Keywords: visual tracking; multi-target occlusion; Bhattacharyya Coefficient (BC); Kalman filter
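The joint occlusion criterion above, a drop in the Bhattacharyya Coefficient between the reference and candidate histograms combined with a large Kalman innovation, can be sketched as follows; the two thresholds are illustrative assumptions:

```python
import numpy as np

def bhattacharyya(p, q):
    """BC between two normalized histograms; 1 = identical, 0 = disjoint."""
    return float(np.sum(np.sqrt(p * q)))

def occlusion_detected(p, q, innovation, bc_min=0.6, innov_max=9.0):
    """Flag occlusion only when appearance similarity is low AND the
    Kalman innovation (measurement minus prediction) is large."""
    nu = np.asarray(innovation, dtype=float)
    return bhattacharyya(p, q) < bc_min and float(nu @ nu) > innov_max

uniform = np.full(4, 0.25)
left = np.array([0.5, 0.5, 0.0, 0.0])
right = np.array([0.0, 0.0, 0.5, 0.5])
```

Requiring both conditions keeps a sudden illumination change (low BC but small innovation) or a fast maneuver (large innovation but high BC) from being mistaken for occlusion.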
6. Hierarchical Template Matching for Robust Visual Tracking with Severe Occlusions (Cited: 1)
Authors: Lizuo Jin, Tirui Wu, Feng Liu, Gang Zeng. ZTE Communications, 2012, Issue 4, pp. 54-59.
To tackle the problem of severe occlusions in visual tracking, we propose a hierarchical template-matching method based on a layered appearance model. The model integrates holistic- and part-region matching in order to locate an object in a coarse-to-fine manner. Furthermore, to reduce ambiguity in object localization, only the discriminative parts of an object's appearance template, chosen by their cornerness measurements, are used for similarity computation. The similarity between parts is computed layer-wise, and from this, occlusions can be evaluated. When the object is partly occluded, it can be located accurately by matching candidate regions with the appearance template; when it is completely occluded, its location can be predicted from its historical motion information using a Kalman filter. The proposed tracker is tested on several practical image sequences, and the experimental results show that it consistently provides accurate object locations for stable tracking, even under severe occlusions.
Keywords: visual tracking; hierarchical template matching; layered appearance model; occlusion analysis
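When the object is completely occluded, the tracker above coasts on motion prediction. A constant-velocity Kalman predict step is a common choice for this (the paper's exact state model is not given here, so this is a hedged sketch):

```python
import numpy as np

def kalman_predict(x, P, dt=1.0, q=1e-2):
    """Predict step for a constant-velocity state x = [cx, cy, vx, vy]:
    during full occlusion the tracker advances on these predictions
    until the appearance match recovers."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)  # process noise inflates uncertainty
    return x, P

x = np.array([10.0, 20.0, 1.0, -2.0])  # position (10, 20), velocity (1, -2)
P = np.eye(4)
for _ in range(3):                     # three fully occluded frames
    x, P = kalman_predict(x, P)
```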
7. Sensor planning method for visual tracking in 3D camera networks (Cited: 1)
Authors: Anlong Ming, Xin Chen. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2014, Issue 6, pp. 1107-1116.
Most sensors or cameras discussed in the sensor-network community are treated as 3D homogeneous, even though their 2D coverage areas in the ground plane are heterogeneous; meanwhile, the observed objects of camera networks are usually simplified as 2D points in previous literature. In actual application scenes, however, cameras are heterogeneous, with different heights and action radii, and the observed objects have 3D features (i.e., height). This paper presents a sensor planning formulation addressing the efficiency of visual tracking in 3D heterogeneous camera networks that track and detect people traversing a region. The sensor planning problem consists of three issues: (i) how to model the 3D heterogeneous cameras; (ii) how to rank the visibility, which ensures that the object of interest is visible in a camera's field of view; (iii) how to reconfigure the 3D viewing orientations of the cameras. The paper studies the geometric properties of 3D heterogeneous camera networks, gives an evaluation formulation to rank the visibility of observed objects, and proposes a sensor planning method to improve the efficiency of visual tracking. Numerical results show that the proposed method improves the tracking performance of the system compared with conventional strategies.
Keywords: camera model; sensor planning; camera network; visual tracking
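A minimal version of the visibility test described above, checking whether the 3D head point of a person falls inside a heterogeneous camera's viewing cone and action radius, might look like this. The cone field-of-view model and all parameter names are simplifying assumptions; the paper's evaluation formulation is richer than a binary test:

```python
import numpy as np

def visible(cam_pos, cam_dir, half_fov, radius, foot_xy, height):
    """True if the target's head point (its 3D feature) lies within the
    camera's action radius and inside its viewing cone.
    cam_dir must be a unit vector."""
    head = np.array([foot_xy[0], foot_xy[1], height], dtype=float)
    v = head - np.asarray(cam_pos, dtype=float)
    dist = np.linalg.norm(v)
    if dist == 0 or dist > radius:
        return False
    cos_angle = float(v @ np.asarray(cam_dir, dtype=float)) / dist
    return cos_angle >= np.cos(half_fov)

cam_pos = (0.0, 0.0, 1.7)
cam_dir = (1.0, 0.0, 0.0)  # looking along +x at head height
```

Ranking several candidate cameras by `cos_angle` and `dist` instead of thresholding them would give a graded visibility score in the spirit of the paper's formulation.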
8. An Adaptive Padding Correlation Filter With Group Feature Fusion for Robust Visual Tracking
Authors: Zihang Feng, Liping Yan, Yuanqing Xia, Bo Xiao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, Issue 10, pp. 1845-1860.
In recent visual tracking research, correlation filter (CF) based trackers have become popular because of their high speed and considerable accuracy. Previous methods mainly work on extending features and solving the boundary effect to learn a better correlation filter, but the related studies are insufficient. By exploring the potential of trackers in these two aspects, a novel adaptive padding correlation filter (APCF) with feature group fusion is proposed for robust visual tracking, based on the popular context-aware tracking framework. In the tracker, three feature groups are fused by a weighted sum of their normalized response maps, to alleviate the risk of drift caused by an extreme change in a single feature. Moreover, to improve the adaptivity of padding to the filter training of different object shapes, the best padding is selected from a preset pool according to tracking precision over the whole video, where tracking precision is predicted by a model trained on sequence features from the first several frames; the sequence features include three traditional features and eight newly constructed ones. Extensive experiments demonstrate that the proposed tracker is superior to most state-of-the-art correlation filter based trackers and shows a stable improvement over the basic trackers.
Keywords: adaptive padding; context information; correlation filter (CF); feature group fusion; robust visual tracking
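The feature-group fusion step, a weighted sum of normalized response maps so that no single group's extreme response can cause drift, can be sketched as:

```python
import numpy as np

def fuse_response_maps(maps, weights):
    """Normalize each feature group's response map to [0, 1], then take
    a weighted sum, guarding against drift from one extreme feature."""
    fused = np.zeros_like(maps[0], dtype=float)
    for m, w in zip(maps, weights):
        m = m.astype(float)
        span = m.max() - m.min()
        if span > 0:
            m = (m - m.min()) / span
        fused += w * m
    return fused

def peak_location(response):
    """The target location is the argmax of the fused response."""
    return np.unravel_index(np.argmax(response), response.shape)

a = np.zeros((5, 5)); a[2, 3] = 100.0  # one group fires very strongly
b = np.zeros((5, 5)); b[2, 3] = 1.0    # another agrees, weakly
fused = fuse_response_maps([a, b], [0.5, 0.5])
```

Because each map is normalized before fusion, the weak but agreeing group contributes as much as the loud one; without normalization, the first map alone would dictate the result.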
9. Robust visual tracking for manipulators with unknown intrinsic and extrinsic parameters
Authors: Chaoli Wang, Xueming Ding. 《控制理论与应用(英文版)》 (EI), 2007, Issue 4, pp. 420-426.
This paper addresses the robust visual tracking of multi-feature points for a 3D manipulator with unknown intrinsic and extrinsic parameters of the vision system. This class of control systems is highly nonlinear, characterized by time variation and strong coupling in the states and unknown parameters. It is first pointed out that not only is the image Jacobian matrix nonsingular, but its minimum singular value also has a positive lower limit; this provides the foundation for kinematic and dynamic control of manipulators with visual feedback. Second, the Euler-angle-expressed rotation transformation is employed to estimate a subspace of the parameter space of the vision system. Based on these two results, and with arbitrarily chosen parameters in this subspace, tracking controllers are proposed such that the image errors can be made as small as desired, provided the control gain is allowed to be large. The controller does not use visual velocity, achieving high, robust performance at a low sampling rate of the vision system. The results are proved by the Lyapunov direct method, and experiments demonstrate the effectiveness of the proposed controller.
Keywords: robust visual tracking; manipulator; camera; intrinsic and extrinsic parameters
10. Robust Visual Tracking with Hierarchical Deep Features Weighted Fusion
Authors: Dianwei Wang, Chunxiang Xu, Daxiang Li, Ying Liu, Zhijie Xu, Jing Wang. Journal of Beijing Institute of Technology (EI, CAS), 2019, Issue 4, pp. 770-776.
To address the low robustness of trackers under significant appearance changes in complex backgrounds, a novel moving-target tracking method based on weighted fusion of hierarchical deep features and a correlation filter is proposed. First, multi-layer features are extracted by a deep model pre-trained on massive object recognition datasets; the linearly separable features of the Relu3-1, Relu4-1, and Relu5-4 layers of VGG-Net-19 are especially suitable for target tracking. Then, correlation filters over the hierarchical convolutional features are learned to generate their correlation response maps. Finally, a novel weight-adjustment approach is presented to fuse the response maps; the maximum of the final response map gives the location of the target. Extensive experiments on object tracking benchmark datasets demonstrate high robustness and recognition precision compared with several state-of-the-art trackers under different conditions.
Keywords: visual tracking; convolutional neural network; correlation filter; feature fusion
11. Real-Time Visual Tracking with Compact Shape and Color Feature
Authors: Zhenguo Gao, Shixiong Xia, Yikun Zhang, Rui Yao, Jiaqi Zhao, Qiang Niu, Haifeng Jiang. Computers, Materials & Continua (SCIE, EI), 2018, Issue 6, pp. 509-521.
The colour feature is often used in object tracking: tracking methods extract the colour features of the object and the background and distinguish them with a classifier. However, existing methods simply use the colour information of the target pixels and do not consider the shape of the target, so the descriptive capability of the feature is weak. Moreover, incorporating shape information often leads to a large feature dimension, which is not conducive to real-time tracking, and the recent emergence of deep-learning-based visual tracking has further increased the demand for computing resources. In this paper, we propose a real-time visual tracking method with a compact shape-and-colour feature, which fuses the shape and colour characteristics of the candidate object region into a low-dimensional compact feature and reduces the dimensionality of the combined feature through a hash function. A structural classification function is trained and updated online with the dynamic data flow to adapt to new frames, and classification and prediction of the object are carried out with it. Experimental results demonstrate that the proposed tracker performs superiorly against several state-of-the-art algorithms on the challenging benchmark datasets OTB-100 and OTB-13.
Keywords: visual tracking; compact feature; colour feature; structural learning
12. 2D Part-Based Visual Tracking of Hydraulic Excavators
Authors: Bo Xiao, Ruiqi Chen, Zhenhua Zhu. World Journal of Engineering and Technology, 2016, Issue 3, pp. 101-111.
Visual tracking has been widely applied in the construction industry and has attracted significant interest recently. Many studies have adopted visual tracking techniques for the surveillance of construction workforce, project productivity, and construction safety. Visual tracking algorithms have achieved promising performance when tracking non-articulated equipment on construction sites; however, state-of-the-art algorithms have no guaranteed performance in tracking articulated equipment such as backhoes and excavators, whose stretching buckets and booms are the main obstacles to successful tracking. To fill this knowledge gap, part-based tracking algorithms are introduced in this paper for tracking articulated equipment on construction sites. Part-based tracking can follow different parts of the target equipment with multiple tracking algorithms on the same sequence. Several existing tracking methods were chosen for their outstanding performance in the computer vision community; part-based algorithms were then built on these methods and tested on real construction sequences, and tracking performance was evaluated for effectiveness and robustness. The quantitative analysis shows that tracking of articulated equipment is much improved by the part-based algorithms.
Keywords: visual tracking; hydraulic excavators; construction safety; part-based tracking
13. Hybrid Efficient Convolution Operators for Visual Tracking
Authors: Yu Wang. Journal on Artificial Intelligence, 2021, Issue 2, pp. 63-72.
Visual tracking is a classical computer vision problem with many applications. Efficient convolution operators (ECO) is one of the most outstanding visual tracking algorithms of recent years; it has shown great performance using a discriminative correlation filter (DCF) together with HOG, color maps, and VGGNet features. Inspired by new deep learning models, this paper proposes hybrid efficient convolution operators integrating a fully convolutional network (FCN) and a residual network (ResNet) for visual tracking, where the FCN and ResNet are introduced to segment objects from backgrounds and to extract hierarchical feature maps of objects, respectively. Compared with the traditional VGGNet, our approach has higher accuracy in handling segmentation and image size. Experiments show that our approach obtains better performance than ECO in terms of precision plots and success-rate plots on the OTB-2013 and UAV123 datasets.
Keywords: visual tracking; deep learning; convolutional neural network; hybrid convolution operator
14. Visual tracking for underwater sea cucumber via correlation filters
Authors: Honglei Wei, Xiangzhi Kong, Xianyi Zhai, Qiang Tong, Guibing Pang. International Journal of Agricultural and Biological Engineering (SCIE), 2023, Issue 3, pp. 247-253.
One of the essential requirements for using underwater robots to harvest sea cucumbers is that the robots must track them using computer vision. Tracking underwater targets is challenging due to suspended particles, water absorption, and light scattering. This study proposes a simple but effective sea cucumber tracking algorithm based on the Kernelized Correlation Filter (KCF) framework. The method tracks the head and the tail of the sea cucumber separately and computes the scale change from the distance between them. KCF is improved with three strategies: first, the target is searched at the predicted position to improve accuracy; second, an adaptive learning-rate update based on the detection score of each frame is proposed; finally, an adaptive size of the histogram of oriented gradients (HOG) feature is used to balance accuracy and efficiency. Experimental results show that the algorithm has good tracking performance.
Keywords: visual tracking; correlation filters; kernelized correlation filters; sea cucumber; scale estimation; underwater
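Two of the three strategies above are simple to sketch: the scale change inferred from the head-tail distance, and a learning rate gated by the per-frame detection score. The thresholds below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def scale_from_endpoints(head, tail, base_dist):
    """Scale factor relative to the first frame, inferred from the
    distance between the separately tracked head and tail points."""
    d = np.linalg.norm(np.asarray(head, float) - np.asarray(tail, float))
    return float(d) / base_dist

def adaptive_learning_rate(score, base_lr=0.02, min_score=0.3):
    """Freeze the model update when the detection score says the target
    is unreliable; otherwise scale the update rate with the score."""
    if score < min_score:
        return 0.0
    return base_lr * min(score / 0.7, 1.0)
```

Gating the update this way keeps a briefly lost target (low score) from polluting the correlation filter model, which is the usual motivation for score-driven learning rates in KCF variants.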
15. Dynamic Visible Light Positioning Based on Enhanced Visual Target Tracking
Authors: Xiangyu Liu, Jingyu Hao, Lei Guo, Song Song. China Communications (SCIE, CSCD), 2023, Issue 10, pp. 276-291.
In visible light positioning systems, target tracking algorithms have been proposed to balance positioning accuracy, real-time performance, and robustness. However, two problems remain: (1) when a captured LED disappears and an uncertain LED reappears, existing tracking algorithms may misidentify the landmark; (2) the receiver cannot always achieve positioning under various moving statuses. In this paper, we propose an enhanced visual target tracking algorithm to solve these problems. First, we design a lightweight recognition/demodulation mechanism that combines Kalman filtering with simple image preprocessing to quickly track and accurately demodulate the landmark. Then, we use a Gaussian mixture model and the LED color feature to enable positioning while the receiver is under various moving statuses. Experimental results show that our system achieves high-precision dynamic positioning and improves the system's comprehensive performance.
Keywords: visible light positioning; visual target tracking; Gaussian mixture model; Kalman filtering; system performance
16. Robust Visual Tracking Based on Convolutional Features with Illumination and Occlusion Handling (Cited: 6)
Authors: Kang Li, Fa-Zhi He, Hai-Ping Yu. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2018, Issue 1, pp. 223-236.
Visual tracking is an important area in computer vision, and dealing with illumination and occlusion is a challenging issue. This paper presents a novel and efficient tracking algorithm to handle such problems. On one hand, a target's initial appearance always has a clear contour, which is light-invariant and robust to illumination change; on the other hand, features play an important role in tracking, among which convolutional features have shown favorable performance. We therefore adopt convolved contour features to represent the target appearance. First-order derivative edge gradient operators are efficient at detecting contours by convolving them with images; the Prewitt operator is more sensitive to horizontal and vertical edges, while the Sobel operator is more sensitive to diagonal edges, so the two are inherently complementary. This paper designs two groups of Prewitt and Sobel edge detectors to extract a complete set of convolutional features, including horizontal, vertical, and diagonal edge features. In the first frame, contour features are extracted from the target to construct the initial appearance model. Analysis of experimental images with these contour features shows that the bright parts often provide more useful information for describing target characteristics, so we propose comparing the similarity between a candidate sample and the trained model using only bright pixels, which enables our tracker to handle partial occlusion. After obtaining the new target, we propose a corresponding online strategy to incrementally update the model and adapt to appearance change. Experiments show that convolutional features extracted by well-integrated Prewitt and Sobel edge detectors are efficient enough to learn a robust appearance model, and numerous experimental results on nine challenging sequences show that the proposed approach is effective and robust in comparison with state-of-the-art trackers.
Keywords: visual tracking; convolutional feature; gradient operator; online learning; particle filter
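The complementary Prewitt/Sobel contour features can be reproduced with plain 2D convolutions. A rough sketch follows; only the horizontal/vertical kernels are shown, whereas the paper also uses diagonal detectors:

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2_valid(img, kernel):
    """Plain 'valid' 2D convolution (kernel flipped), adequate for 3x3
    edge detectors on small grayscale patches."""
    k = kernel[::-1, ::-1]
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = float(np.sum(img[i:i + kh, j:j + kw] * k))
    return out

def contour_features(img):
    """Horizontal and vertical Prewitt/Sobel responses, stacked as a
    (partial) convolutional contour feature set."""
    return [conv2_valid(img, K) for K in
            (PREWITT_X, PREWITT_X.T, SOBEL_X, SOBEL_X.T)]

step = np.zeros((6, 6)); step[:, 3:] = 1.0  # vertical step edge
feats = contour_features(step)
```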
17. Advances in Deep Learning Methods for Visual Tracking: Literature Review and Fundamentals (Cited: 4)
Authors: Xiao-Qin Zhang, Run-Hua Jiang, Chen-Xiang Fan, Tian-Yu Tong, Tao Wang, Peng-Cheng Huang. International Journal of Automation and Computing (EI, CSCD), 2021, Issue 3, pp. 311-333.
Recently, deep learning has achieved great success in visual tracking tasks, particularly in single-object tracking. This paper provides a comprehensive review of state-of-the-art single-object tracking algorithms based on deep learning. First, we introduce basic knowledge of deep visual tracking, including fundamental concepts, existing algorithms, and previous reviews. Second, we briefly review existing deep learning methods, categorizing them into data-invariant and data-adaptive methods according to whether they can dynamically change their model parameters or architectures, and then summarize the general components of deep trackers. In this way, we systematically analyze the novelties of several recently proposed deep trackers. Thereafter, popular datasets such as the Object Tracking Benchmark (OTB) and Visual Object Tracking (VOT) are discussed, along with the performance of several deep trackers. Finally, based on observations and experimental results, we discuss three characteristics of deep trackers: the relationships between their general components, the exploration of more effective tracking frameworks, and the interpretability of their motion estimation components.
Keywords: deep learning; visual tracking; data-invariant; data-adaptive; general components
18. Robust visual tracking based on scale invariance and deep learning (Cited: 2)
Authors: Nan Ren, Junping Du, Suguo Zhu, Linghui Li, Dan Fan, JangMyung Lee. Frontiers of Computer Science (SCIE, EI, CSCD), 2017, Issue 2, pp. 230-242.
Visual tracking is a popular research area in computer vision that is very difficult to actualize because of challenges such as changes in scale and illumination, rotation, fast motion, and occlusion. Consequently, the focus in this research area is on making tracking algorithms adapt to these changes so as to implement stable and accurate visual tracking. This paper proposes a visual tracking algorithm that integrates the scale invariance of SURF features with deep learning to enhance tracking robustness when the size of the object being tracked changes significantly. A particle filter is used for motion estimation; the confidence of each particle is computed via a deep neural network, and the result of the particle filter is verified and corrected by mean shift because of its computational efficiency and insensitivity to external interference. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods across the challenging factors in visual tracking, especially scale variation.
Keywords: visual tracking; SURF; mean shift; particle filter; neural network
19. Efficient and Robust Feature Model for Visual Tracking
Authors: 王路, 卓晴, 王文渊. Tsinghua Science and Technology (SCIE, EI, CAS), 2009, Issue 2, pp. 151-156.
Long-duration visual tracking of targets is quite challenging for computer vision because environments may be cluttered and distracting. Illumination variations and partial occlusions are two main difficulties in real-world visual tracking, and existing methods based on holistic appearance information cannot solve them effectively. This paper proposes a feature-based dynamic tracking approach that can track objects with partial occlusions and varying illumination. The method represents the tracked object by an invariant feature model. During tracking, a new pyramid matching algorithm is used to match the object template with the observations to determine the observation likelihood; the matching is computationally efficient, and the spatial constraints among the features are also embedded. Instead of complicated optimization methods, the whole model is incorporated into a Bayesian filtering framework. Experiments on real-world sequences demonstrate that the method tracks objects accurately and robustly even with illumination variations and partial occlusions.
Keywords: visual tracking; object model; robust feature; feature matching
20. Robot visual guide with Fourier-Mellin based visual tracking
Authors: Chao Peng, Danhua Cao, Yubin Wu, Qun Yang. Frontiers of Optoelectronics (EI, CSCD), 2019, Issue 4, pp. 413-421.
Robot visual guidance is an important research area in industrial automation, and image-based target pose estimation is one of the most challenging problems. We focus on target pose estimation and present a solution based on binocular stereo vision in this paper. To improve the robustness and speed of pose estimation, we propose a novel visual tracking algorithm based on the Fourier-Mellin transform to extract the target region. We evaluate the proposed tracking algorithm on the online tracking benchmark-50 (OTB-50), and the results show that it outperforms other lightweight trackers, especially when the target is rotated or scaled. A final experiment proves that the improved pose estimation approach achieves a position accuracy of 1.84 mm at a speed of 7 FPS (frames per second). Moreover, the approach is robust to illumination variance and works well in the range of 250-700 lux.
Keywords: robot visual guide; target pose estimation; stereo vision; visual tracking; Fourier-Mellin transform (FMT)
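The core operation behind Fourier-Mellin matching is phase correlation: applied to log-polar resampled magnitude spectra it recovers rotation and scale, and applied directly, as in this illustrative sketch, it recovers a circular translation:

```python
import numpy as np

def phase_correlate(moved, ref):
    """Peak of the normalized cross-power spectrum gives the circular
    shift of `moved` relative to `ref`. A full Fourier-Mellin tracker
    would also run this on log-polar magnitude spectra to get
    rotation/scale before solving for translation."""
    R = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12            # keep phase only
    corr = np.real(np.fft.ifft2(R))
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
moved = np.roll(ref, shift=(2, 3), axis=(0, 1))
shift = phase_correlate(moved, ref)
```

Because the cross-power spectrum is normalized to unit magnitude, the correlation surface for a pure circular shift is a near-perfect impulse, which is what makes the method robust to the rotation and scaling highlighted in the abstract.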