Journal Articles
39 articles found (showing 20 per page; page 1 of 2)
1. SMSTracker: A Self-Calibration Multi-Head Self-Attention Transformer for Visual Object Tracking
Authors: Zhongyang Wang, Hu Zhu, Feng Liu. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 605-623 (19 pages)
Visual object tracking plays a crucial role in computer vision. In recent years, researchers have proposed various methods to achieve high-performance object tracking. Among these, Transformer-based methods have become a research hotspot due to their ability to model information globally and contextually. However, current Transformer-based object tracking methods still face challenges such as low tracking accuracy and redundant feature information. In this paper, we introduce the self-calibration multi-head self-attention Transformer (SMSTracker) as a solution to these challenges. It employs a hybrid tensor-decomposition self-organizing multi-head self-attention Transformer mechanism, which not only compresses and accelerates Transformer operations but also significantly reduces redundant data, thereby enhancing the accuracy and efficiency of tracking. Additionally, we introduce a self-calibration attention fusion block to resolve the attention ambiguities and inconsistencies common in traditional tracking methods, ensuring stable and reliable tracking performance across various scenarios. Experimental results show that SMSTracker achieves competitive performance in visual object tracking, demonstrating its potential to provide more robust and efficient tracking solutions in real-world applications.
Keywords: visual object tracking; tensor decomposition; Transformer; self-attention
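To make the compression idea in the abstract above concrete, the sketch below shows one generic way a low-rank matrix/tensor factorization can shrink the projection weights inside multi-head self-attention. The factor shapes, rank, and token layout are illustrative assumptions and do not reproduce SMSTracker's actual hybrid decomposition.

```python
# Illustrative sketch only: a low-rank factorization compressing the Q/K/V
# projections of multi-head self-attention. Names and shapes are assumptions.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def low_rank_proj(x, U, V):
    # A full projection W (d_model x d_model) is replaced by U @ V with rank r << d_model,
    # cutting parameters from d^2 to 2*d*r and speeding up the matmul.
    return x @ U @ V

def compressed_mhsa(x, params, num_heads=4):
    # x: (seq_len, d_model); params holds factor pairs (U, V) for "q", "k", "v"
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    q = low_rank_proj(x, *params["q"]).reshape(seq_len, num_heads, d_head)
    k = low_rank_proj(x, *params["k"]).reshape(seq_len, num_heads, d_head)
    v = low_rank_proj(x, *params["v"]).reshape(seq_len, num_heads, d_head)
    out = np.empty_like(q)
    for h in range(num_heads):                    # per-head scaled dot-product attention
        attn = softmax(q[:, h] @ k[:, h].T / np.sqrt(d_head))
        out[:, h] = attn @ v[:, h]
    return out.reshape(seq_len, d_model)

d, r = 64, 8
rng = np.random.default_rng(0)
params = {k: (rng.normal(size=(d, r)) * 0.1, rng.normal(size=(r, d)) * 0.1)
          for k in ("q", "k", "v")}
tokens = rng.normal(size=(49, d))                 # e.g., 7x7 template feature tokens
print(compressed_mhsa(tokens, params).shape)      # (49, 64)
```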
2. Masked Autoencoders as Single Object Tracking Learners
Authors: Chunjuan Bo, Xin Chen, Junxing Zhang. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 1105-1122 (18 pages)
Significant advancements have been witnessed in visual tracking applications leveraging the Vision Transformer (ViT) in recent years, mainly due to its formidable modeling capabilities. However, the strong performance of such trackers heavily relies on ViT models pretrained for long periods, limiting more flexible model designs for tracking tasks. To address this issue, we propose an efficient unsupervised ViT pretraining method for the tracking task based on masked autoencoders, called TrackMAE. During pretraining, we employ two shared-parameter ViTs, serving as the appearance encoder and motion encoder, respectively. The appearance encoder encodes randomly masked image data, while the motion encoder encodes randomly masked pairs of video frames. Subsequently, an appearance decoder and a motion decoder separately reconstruct the original image data and video frame data at the pixel level. In this way, the ViT learns to understand both the appearance of images and the motion between video frames simultaneously. Experimental results demonstrate that ViT-Base and ViT-Large models, pretrained with TrackMAE and combined with a simple tracking head, achieve state-of-the-art (SOTA) performance without additional design. Moreover, compared with the currently popular MAE pretraining methods, TrackMAE consumes only 1/5 of the training time, which will facilitate the customization of diverse models for tracking. For instance, we additionally customize a lightweight ViT-XS, which achieves SOTA efficient tracking performance.
Keywords: visual object tracking; vision transformer; masked autoencoder; visual representation learning
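As a rough illustration of the masked-autoencoder pretraining step described above, the sketch below shows random patch masking of a single frame; the patch size, mask ratio, and array layout are assumptions, and the TrackMAE encoders and decoders are omitted.

```python
# Minimal sketch of random patch masking for masked-autoencoder pretraining.
# Patch size and mask ratio are illustrative assumptions.
import numpy as np

def random_mask_patches(frame, patch=16, mask_ratio=0.75, rng=None):
    """Split a HxWxC frame into non-overlapping patches, drop mask_ratio of them,
    and return the visible patches plus the boolean mask for reconstruction."""
    rng = rng or np.random.default_rng()
    h, w, c = frame.shape
    gh, gw = h // patch, w // patch
    patches = (frame[:gh * patch, :gw * patch]
               .reshape(gh, patch, gw, patch, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(gh * gw, patch * patch * c))
    n_keep = int(round(gh * gw * (1.0 - mask_ratio)))
    keep_idx = rng.permutation(gh * gw)[:n_keep]
    mask = np.ones(gh * gw, dtype=bool)
    mask[keep_idx] = False                # True = masked (to be reconstructed)
    return patches[keep_idx], mask

frame = np.random.rand(224, 224, 3)
visible, mask = random_mask_patches(frame)
print(visible.shape, mask.sum(), "patches masked out of", mask.size)
```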
3. Dynamic Visible Light Positioning Based on Enhanced Visual Target Tracking
Authors: Xiangyu Liu, Jingyu Hao, Lei Guo, Song Song. China Communications (SCIE, CSCD), 2023, No. 10, pp. 276-291 (16 pages)
In visible light positioning systems, some scholars have proposed target tracking algorithms to balance positioning accuracy, real-time performance, and robustness. However, two problems remain: (1) when the captured LED disappears and an uncertain LED reappears, existing tracking algorithms may misidentify the landmark; (2) the receiver is not always able to achieve positioning under various moving statuses. In this paper, we propose an enhanced visual target tracking algorithm to solve the above problems. First, we design a lightweight recognition/demodulation mechanism, which combines Kalman filtering with simple image preprocessing to quickly track and accurately demodulate the landmark. Then, we use a Gaussian mixture model and the LED color feature to enable the system to achieve positioning when the receiver is under various moving statuses. Experimental results show that our system can achieve high-precision dynamic positioning and improve the system's comprehensive performance.
Keywords: visible light positioning; visual target tracking; Gaussian mixture model; Kalman filtering; system performance
4. Adaptive multi-feature tracking in particle swarm optimization based particle filter framework (cited: 7)
Authors: Miaohui Zhang, Ming Xin, Jie Yang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2012, No. 5, pp. 775-783 (9 pages)
This paper proposes a particle swarm optimization (PSO) based particle filter (PF) tracking framework. The embedded PSO makes particles move toward the high-likelihood area to find the optimal position in the state transition stage, and simultaneously incorporates the newest observations into the proposal distribution in the update stage. In the proposed approach, likelihood measure functions involving multiple features are presented to enhance the performance of model fitting. Furthermore, the multi-feature weights are self-adaptively adjusted by a PSO algorithm throughout the tracking process. There are three main contributions. Firstly, the PSO algorithm is fused into the PF framework, which can efficiently alleviate the particle degeneracy phenomenon. Secondly, an effective convergence criterion for the PSO algorithm is explored, which can avoid particles getting stuck in local minima and maintain greater particle diversity. Finally, a multi-feature weight self-adjusting strategy is proposed, which can significantly improve tracking robustness and accuracy. Experiments performed on several challenging public video sequences demonstrate that the proposed tracking approach achieves considerable performance.
Keywords: particle filter; particle swarm optimization; adaptive weight adjustment; visual tracking
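The abstract above centres on embedding PSO into the particle filter's state-transition stage; the sketch below illustrates that idea with a toy observation likelihood. The inertia and acceleration coefficients, iteration count, and likelihood function are placeholders rather than the paper's settings.

```python
# Simplified sketch of a PSO refinement step inside a particle filter's transition
# stage: particles are nudged toward personal/global best positions scored by the
# observation likelihood. Coefficients and the likelihood are placeholders.
import numpy as np

def pso_refine(particles, likelihood, iters=3, w=0.6, c1=1.5, c2=1.5, rng=None):
    rng = rng or np.random.default_rng()
    vel = np.zeros_like(particles)
    pbest = particles.copy()
    pbest_score = np.array([likelihood(p) for p in particles])
    gbest = pbest[pbest_score.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(particles.shape), rng.random(particles.shape)
        vel = w * vel + c1 * r1 * (pbest - particles) + c2 * r2 * (gbest - particles)
        particles = particles + vel
        score = np.array([likelihood(p) for p in particles])
        improved = score > pbest_score
        pbest[improved], pbest_score[improved] = particles[improved], score[improved]
        gbest = pbest[pbest_score.argmax()].copy()
    return particles, gbest

# Toy likelihood: a Gaussian bump around the (unknown) true target position.
true_pos = np.array([120.0, 80.0])
lik = lambda p: np.exp(-np.sum((p - true_pos) ** 2) / (2 * 15.0 ** 2))
parts = np.random.default_rng(1).normal([100.0, 100.0], 30.0, size=(50, 2))
refined, best = pso_refine(parts, lik)
print("estimated position:", best.round(1))
```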
5. A correlative classifiers approach based on particle filter and sample set for tracking occluded target (cited: 6)
Authors: LI Kang, HE Fa-zhi, YU Hai-ping, CHEN Xiao. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2017, No. 3, pp. 294-312 (19 pages)
Target tracking is one of the most important issues in computer vision and has been applied in many fields of science, engineering and industry. Because of occlusion during tracking, typical approaches with a single classifier learn much of the occluding background information, which decreases tracking performance and eventually leads to failure of the tracking algorithm. This paper presents a new correlative classifiers approach to address the above problem. Our idea is to derive a group of correlative classifiers based on a sample set method. We then propose a strategy to establish the classifiers and to query the suitable classifiers for tracking the next frame. In order to deal with the nonlinear problem, a particle filter is adopted and integrated with the sample set method. For choosing the target from candidate particles, we define a similarity measurement between particles and the sample set. The proposed sample set method includes the following steps. First, we crop a positive sample set around the target and a negative sample set far away from the target. Second, we extract average Haar-like features from these samples and calculate their statistical characteristics, which represent the target model. Third, we define the similarity measurement based on the statistical characteristics of these two sets to judge the similarity between candidate particles and the target model. Finally, we choose the particle with the largest similarity score as the target in the new frame. A number of experiments show the robustness and efficiency of the proposed approach when compared with other state-of-the-art trackers.
Keywords: visual tracking; sample set method; online learning; particle filter
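A hedged sketch of the sample-set scoring described above: the positive and negative sample sets are summarized by feature statistics and each candidate particle is scored against both. The Gaussian-style similarity and random features are assumptions standing in for the paper's Haar-like features and exact similarity measure.

```python
# Sketch of choosing the target particle by comparing candidates against the
# statistics of positive and negative sample sets. Details are placeholders.
import numpy as np

def fit_stats(features):
    """Mean/std summary of a sample set (features: n_samples x dim)."""
    return features.mean(axis=0), features.std(axis=0) + 1e-6

def log_gauss_sim(x, mean, std):
    """Diagonal-Gaussian log-likelihood used as a similarity measure."""
    return -0.5 * np.sum(((x - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2))

def pick_target(candidates, pos_set, neg_set):
    pos_m, pos_s = fit_stats(pos_set)
    neg_m, neg_s = fit_stats(neg_set)
    scores = [log_gauss_sim(c, pos_m, pos_s) - log_gauss_sim(c, neg_m, neg_s)
              for c in candidates]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(2)
pos = rng.normal(1.0, 0.2, size=(40, 16))    # features of samples cropped near the target
neg = rng.normal(0.0, 0.2, size=(40, 16))    # features of samples far from the target
cands = rng.normal(0.8, 0.4, size=(30, 16))  # features of candidate particles
best, _ = pick_target(cands, pos, neg)
print("chosen particle index:", best)
```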
6. Visual Object Tracking and Servoing Control of a Nano-Scale Quadrotor: System, Algorithms, and Experiments (cited: 6)
Authors: Yuzhen Liu, Ziyang Meng, Yao Zou, Ming Cao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 2, pp. 344-360 (17 pages)
There are two main trends in the development of unmanned aerial vehicle (UAV) technologies: miniaturization and intellectualization, in which realizing object tracking capabilities for a nano-scale UAV is one of the most challenging problems. In this paper, we present a visual object tracking and servoing control system utilizing a tailor-made 38 g nano-scale quadrotor. A lightweight visual module is integrated to enable object tracking capabilities, and a micro positioning deck is mounted to provide accurate pose estimation. In order to be robust against object appearance variations, a novel object tracking algorithm, denoted RMCTer, is proposed, which integrates a powerful short-term tracking module and an efficient long-term processing module. In particular, the long-term processing module can provide additional object information and modify the short-term tracking model in a timely manner. Furthermore, a position-based visual servoing control method is proposed for the quadrotor, where an adaptive tracking controller is designed by leveraging backstepping and adaptive techniques. Stable and accurate object tracking is achieved even under disturbances. Experimental results are presented to demonstrate the high accuracy and stability of the whole tracking system.
Keywords: nano-scale quadrotor; nonlinear control; position-based visual servoing; visual object tracking
7. OPTIMIZED MEANSHIFT TARGET REFERENCE MODEL BASED ON IMPROVED PIXEL WEIGHTING IN VISUAL TRACKING (cited: 4)
Authors: Chen Ken, Song Kangkang, Kyoungho Choi, Guo Yunyan. Journal of Electronics (China), 2013, No. 3, pp. 283-289 (7 pages)
The generic Meanshift is susceptible to interference of background pixels with the target pixels in the kernel of the reference model, which compromises tracking performance. In this paper, we enhance the target color feature by attenuating the background color within the kernel through enlarging the weightings of pixels that map to the target. This way, background pixel interference is largely suppressed in the color histogram in the course of constructing the target reference model. In addition, the proposed method reduces the number of Meanshift iterations, which speeds up algorithmic convergence. Two tests validate the proposed approach with improved tracking robustness on real-world video sequences.
Keywords: visual tracking; Meanshift; color feature histogram; pixel weighting; tracking robustness
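The sketch below illustrates the core idea of the abstract: enlarging the weights of pixels mapped to the target while building the kernel-weighted reference color histogram, so background colors inside the kernel are attenuated. The Epanechnikov-style kernel, bin count, boost factor, and target mask are illustrative assumptions.

```python
# Illustrative sketch of a background-suppressed Meanshift reference histogram.
import numpy as np

def weighted_color_histogram(patch, target_mask, bins=16, boost=3.0):
    """patch: HxWx3 uint8 region inside the kernel; target_mask: HxW bool."""
    h, w, _ = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Epanechnikov-style kernel profile: weights fall off toward the kernel border.
    r2 = ((ys - h / 2) / (h / 2)) ** 2 + ((xs - w / 2) / (w / 2)) ** 2
    kernel_w = np.clip(1.0 - r2, 0.0, None)
    pixel_w = kernel_w * np.where(target_mask, boost, 1.0)   # enlarge target-pixel weights
    idx = (patch // (256 // bins)).astype(np.int64).reshape(-1, 3)  # quantize RGB into bins
    flat_idx = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat_idx, weights=pixel_w.ravel(), minlength=bins ** 3)
    return hist / (hist.sum() + 1e-12)                       # normalized reference model

patch = (np.random.rand(41, 41, 3) * 255).astype(np.uint8)
mask = np.zeros((41, 41), dtype=bool)
mask[10:31, 10:31] = True                                    # assumed target support
q = weighted_color_histogram(patch, mask)
print(q.shape, q.sum())
```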
8. Visual tracking based on transfer learning of deep salience information (cited: 3)
Authors: Haorui Zuo, Zhiyong Xu, Jianlin Zhang, Ge Jia. Opto-Electronic Advances, 2020, No. 9, pp. 30-40 (11 pages)
In this paper, we propose a new visual tracking method based on salience information and deep learning. Salience detection is used to exploit features with salient information of the image. Complicated representations of image features can be gained from the functions of each layer in a convolutional neural network (CNN). The characteristics of biological vision in attention-based salience are similar to the features learned by convolutional neural networks, which motivates us to improve the representation ability of the CNN with salience detection. We adopt fully convolutional networks (FCNs) to perform salience detection and take parts of the network structure for salience extraction, which promotes the classification ability of the model. The proposed network shows great tracking performance with salient information. Compared with other excellent algorithms, our algorithm tracks the target better on open tracking datasets. We achieve an accuracy of 0.5592 on the Visual Object Tracking 2015 (VOT15) dataset. On the unmanned aerial vehicle 123 (UAV123) dataset, the precision and success rate of our tracker are 0.710 and 0.429, respectively.
Keywords: convolution neural network; transfer learning; salience detection; visual tracking
9. Real-Time Visual Tracking with Compact Shape and Color Feature (cited: 1)
Authors: Zhenguo Gao, Shixiong Xia, Yikun Zhang, Rui Yao, Jiaqi Zhao, Qiang Niu, Haifeng Jiang. Computers, Materials & Continua (SCIE, EI), 2018, No. 6, pp. 509-521 (13 pages)
The colour feature is often used in object tracking. Tracking methods extract the colour features of the object and the background and distinguish them with a classifier. However, existing methods simply use the colour information of the target pixels and do not consider the shape feature of the target, so the descriptive capability of the feature is weak. Moreover, incorporating shape information often leads to a large feature dimension, which is not conducive to real-time object tracking. Recently, the emergence of visual tracking methods based on deep learning has also greatly increased the demand for computing resources. In this paper, we propose a real-time visual tracking method with a compact shape and colour feature, which forms a low-dimensional compact feature by fusing the shape and colour characteristics of the candidate object region and reduces the dimensionality of the combined feature through a hash function. A structural classification function is trained and updated online with the dynamic data flow to adapt to new frames. Further, classification and prediction of the object are carried out with the structured classification function. The experimental results demonstrate that the proposed tracker performs superiorly against several state-of-the-art algorithms on the challenging benchmark datasets OTB-100 and OTB-13.
Keywords: visual tracking; compact feature; colour feature; structural learning
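The hash-based dimensionality reduction mentioned above can be sketched with simple feature hashing: the concatenated shape and colour features are scattered into a small number of signed buckets. The feature sizes, output dimension, and hash function below are assumptions rather than the paper's exact construction.

```python
# Loose sketch of compressing a combined shape+colour feature with feature hashing.
import hashlib
import numpy as np

def hash_compact_feature(combined_feature, out_dim=64):
    """Project a high-dimensional feature into out_dim signed buckets via hashing."""
    compact = np.zeros(out_dim)
    for i, value in enumerate(combined_feature):
        digest = hashlib.md5(str(i).encode()).digest()
        bucket = digest[0] % out_dim                  # which compact dimension this index feeds
        sign = 1.0 if digest[1] % 2 == 0 else -1.0    # sign hashing reduces collision bias
        compact[bucket] += sign * value
    return compact

shape_feat = np.random.rand(512)     # e.g., HOG-like shape description of the candidate
colour_feat = np.random.rand(256)    # e.g., colour histogram of the candidate region
combined = np.concatenate([shape_feat, colour_feat])
print(hash_compact_feature(combined).shape)   # (64,)
```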
10. Robust visual tracking algorithm based on Monte Carlo approach with integrated attributes (cited: 1)
Authors: 席涛, 张胜修, 颜诗源. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2010, No. 6, pp. 771-775 (5 pages)
To improve the reliability and accuracy of a visual tracker, a robust visual tracking algorithm based on multi-cue fusion under a Bayesian framework is proposed. Weighted color and texture cues are applied to describe the moving object. An adjustable observation model is incorporated into particle filtering, which exploits the particle filter's ability to cope with non-linear, non-Gaussian assumptions and to predict the position of the moving object in a cluttered environment. Two complementary attributes are employed to estimate the matching similarity dynamically in terms of likelihood ratio factors; furthermore, the weight values are tuned online adaptively according to the confidence maps of the color and texture features to reconfigure the optimal observation likelihood model, which ensures attaining the maximum likelihood ratio in the tracking scenario even when the object is occluded or when illumination, pose and scale are time-variant. The experimental results show that the algorithm can track a moving object accurately, and the reliability of tracking in challenging cases is validated experimentally.
Keywords: visual tracking; particle filter; Gabor wavelet; Monte Carlo approach; multi-cue fusion
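To illustrate the adaptive multi-cue fusion described above, the sketch below combines color and texture similarities with weights that are re-tuned online from a simple confidence measure; the similarity inputs, confidence proxy, and update rule are placeholders rather than the paper's likelihood-ratio formulation.

```python
# Rough sketch of adaptive color/texture cue fusion inside an observation model.
import numpy as np

def fuse_likelihood(color_sim, texture_sim, w_color, w_texture):
    """Per-particle fused observation likelihood (weights sum to 1)."""
    return w_color * color_sim + w_texture * texture_sim

def update_weights(color_sim, texture_sim, w_color, w_texture, lr=0.3):
    """Shift weight toward the cue whose responses are currently more peaked
    (higher max-to-mean ratio is taken as a confidence proxy)."""
    conf_c = color_sim.max() / (color_sim.mean() + 1e-12)
    conf_t = texture_sim.max() / (texture_sim.mean() + 1e-12)
    target_c = conf_c / (conf_c + conf_t)
    w_color = (1 - lr) * w_color + lr * target_c
    return w_color, 1.0 - w_color

rng = np.random.default_rng(3)
color_sim = rng.random(100)          # e.g., color-histogram similarity per particle
texture_sim = rng.random(100) ** 3   # e.g., texture (Gabor) similarity per particle
w_c, w_t = 0.5, 0.5
for _ in range(5):                   # simulate a few frames
    lik = fuse_likelihood(color_sim, texture_sim, w_c, w_t)
    w_c, w_t = update_weights(color_sim, texture_sim, w_c, w_t)
print("adapted weights:", round(w_c, 3), round(w_t, 3))
```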
11. Robust Object Tracking under Appearance Change Conditions (cited: 1)
Authors: Qi-Cong Wang, Yuan-Hao Gong, Chen-Hui Yang, Cui-Hua Li (Department of Computer Science, Xiamen University, Xiamen 361005, PRC). International Journal of Automation and Computing (EI), 2010, No. 1, pp. 31-38 (8 pages)
We propose a robust visual tracking framework based on a particle filter to deal with object appearance changes due to varying illumination, pose variations, and occlusions. We mainly improve the observation model and re-sampling process of the particle filter. We use an online-updated appearance model, affine transformation, and M-estimation to construct an adaptive observation model. Online updating of the appearance model can partially adapt to illumination changes. An affine transformation-based similarity measurement is introduced to tackle pose variations, and M-estimation is used to handle the occluded object when computing the observation likelihood. To take advantage of the most recent observation and produce a suboptimal Gaussian proposal distribution, we incorporate a Kalman filter into the particle filter to enhance the performance of the resampling process. To estimate the posterior probability density properly with lower computational complexity, we employ only a single Kalman filter to propagate the Gaussian distribution. Experimental results have demonstrated the effectiveness and robustness of the proposed algorithm by tracking visual objects in recorded video sequences.
Keywords: visual tracking; particle filter; observation model; Kalman filter; expectation-maximization (EM) algorithm
12. A creative design of robotic visual tracking system in tailed welded blanks based on TRIZ (cited: 1)
Authors: 张雷, 赵明扬, 邹媛媛, 赵立华. China Welding (EI, CAS), 2006, No. 4, pp. 23-25 (3 pages)
According to the main tools of TRIZ, the theory of inventive problem solving, a new flowchart of the product conceptual design process for solving contradictions in TRIZ is proposed. In order to realize autonomous movement and automatic weld seam tracking for a welding robot in tailed welded blanks, a creative design of a robotic visual tracking system based on CMOS has been developed using this flowchart. The new system is used not only to inspect the workpiece ahead of the welding torch and measure the joint orientation and lateral deviation caused by curvature or discontinuity in the joint part, but also to record and measure the image size of the weld pool. The hardware and software components are also discussed in brief.
Keywords: visual tracking; creative design; TRIZ
13. MULTI-TARGET VISUAL TRACKING AND OCCLUSION DETECTION BY COMBINING BHATTACHARYYA COEFFICIENT AND KALMAN FILTER INNOVATION (cited: 1)
Authors: Chen Ken, Chul Gyu Jhun. Journal of Electronics (China), 2013, No. 3, pp. 275-282 (8 pages)
This paper introduces an approach for visual tracking of multiple targets with occlusion. Based on the authors' previous work, in which the Overlap Coefficient (OC) is used to detect occlusion, this paper proposes a method combining the Bhattacharyya Coefficient (BC) and the Kalman filter innovation term as criteria for jointly detecting occlusion occurrence. Fragmentation of the target is introduced in order to closely monitor the development of the occlusion. In the course of the occlusion, the Kalman predictor is applied to determine the location of the occluded target, and a criterion for checking the re-appearance of the occluded target is also presented. The proposed approach is tested on a standard video sequence, suggesting satisfactory performance in multi-target tracking.
Keywords: visual tracking; multi-target occlusion; Bhattacharyya Coefficient (BC); Kalman filter
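The joint occlusion criterion described above can be sketched as follows: appearance similarity via the Bhattacharyya coefficient and motion consistency via the normalized Kalman innovation are tested together. The thresholds, covariance values, and example measurements are illustrative assumptions.

```python
# Condensed sketch of a joint appearance/motion occlusion test.
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity of two normalized histograms; 1 = identical, 0 = disjoint."""
    return float(np.sum(np.sqrt(p * q)))

def normalized_innovation(z, z_pred, S):
    """Squared Mahalanobis distance of the measurement from the Kalman prediction."""
    d = z - z_pred
    return float(d @ np.linalg.inv(S) @ d)

def occluded(p, q, z, z_pred, S, bc_thresh=0.6, nis_thresh=9.21):
    bc = bhattacharyya_coefficient(p, q)
    nis = normalized_innovation(z, z_pred, S)
    # Low appearance similarity together with an implausible jump away from the
    # predicted position is taken as evidence that the target is being occluded.
    return bc < bc_thresh and nis > nis_thresh

rng = np.random.default_rng(4)
q_model = rng.random(64); q_model /= q_model.sum()     # reference color histogram
p_cand = rng.random(64); p_cand /= p_cand.sum()        # histogram at the tracked window
S = np.diag([4.0, 4.0])                                # innovation covariance (pixels^2)
print(occluded(p_cand, q_model, np.array([132.0, 95.0]), np.array([120.0, 88.0]), S))
```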
14. Hierarchical Template Matching for Robust Visual Tracking with Severe Occlusions (cited: 1)
Authors: Lizuo Jin, Tirui Wu, Feng Liu, Gang Zeng. ZTE Communications, 2012, No. 4, pp. 54-59 (6 pages)
To tackle the problem of severe occlusions in visual tracking, we propose a hierarchical template-matching method based on a layered appearance model. This model integrates holistic- and part-region matching in order to locate an object in a coarse-to-fine manner. Furthermore, in order to reduce ambiguity in object localization, only the discriminative parts of an object's appearance template, chosen according to their cornerness measurements, are used for similarity computation. The similarity between parts is computed in a layer-wise manner, from which occlusions can be evaluated. When the object is partly occluded, it can be located accurately by matching candidate regions with the appearance template. When it is completely occluded, its location can be predicted from its historical motion information using a Kalman filter. The proposed tracker is tested on several practical image sequences, and the experimental results show that it can consistently provide accurate object locations for stable tracking, even under severe occlusions.
Keywords: visual tracking; hierarchical template matching; layered appearance model; occlusion analysis
15. Sensor planning method for visual tracking in 3D camera networks (cited: 1)
Authors: Anlong Ming, Xin Chen. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2014, No. 6, pp. 1107-1116 (10 pages)
Most sensors or cameras discussed in the sensor network community are usually 3D homogeneous, even though their 2D coverage areas in the ground plane are heterogeneous. Meanwhile, the observed objects of camera networks are usually simplified as 2D points in previous literature. However, in actual application scenes, not only are cameras heterogeneous, with different heights and action radiuses, but the observed objects also have 3D features (i.e., height). This paper presents a sensor planning formulation addressing the efficiency enhancement of visual tracking in 3D heterogeneous camera networks that track and detect people traversing a region. The problem of sensor planning consists of three issues: (i) how to model the 3D heterogeneous cameras; (ii) how to rank the visibility, which ensures that the object of interest is visible in a camera's field of view; (iii) how to reconfigure the 3D viewing orientations of the cameras. This paper studies the geometric properties of 3D heterogeneous camera networks and presents an evaluation formulation to rank the visibility of observed objects. A sensor planning method is then proposed to improve the efficiency of visual tracking. Finally, the numerical results show that the proposed method can improve the tracking performance of the system compared with conventional strategies.
Keywords: camera model; sensor planning; camera network; visual tracking
16. Hybrid Efficient Convolution Operators for Visual Tracking (cited: 1)
Authors: Yu Wang. Journal on Artificial Intelligence, 2021, No. 2, pp. 63-72 (10 pages)
Visual tracking is a classical computer vision problem with many applications. Efficient convolution operators (ECO) is one of the most outstanding visual tracking algorithms of recent years; it has shown great performance using a discriminative correlation filter (DCF) together with HOG, color maps and VGGNet features. Inspired by new deep learning models, this paper proposes a hybrid efficient convolution operators approach integrating a fully convolutional network (FCN) and a residual network (ResNet) for visual tracking, where the FCN and ResNet are introduced to segment the objects from the background and to extract hierarchical feature maps of the objects, respectively. Compared with the traditional VGGNet, our approach has higher accuracy in dealing with segmentation and image size. The experiments show that our approach obtains better performance than ECO in terms of the precision plot and success rate plot on the OTB-2013 and UAV123 datasets.
Keywords: visual tracking; deep learning; convolutional neural network; hybrid convolution operator
17. Robust visual tracking for manipulators with unknown intrinsic and extrinsic parameters
Authors: Chaoli WANG, Xueming DING. Journal of Control Theory and Applications (EI), 2007, No. 4, pp. 420-426 (7 pages)
This paper addresses the robust visual tracking of multiple feature points for a 3D manipulator with unknown intrinsic and extrinsic parameters of the vision system. This class of control systems is highly nonlinear, characterized by time-varying behavior and strong coupling in the states and unknown parameters. It is first pointed out that not only is the image Jacobian matrix nonsingular, but its minimum singular value also has a positive lower limit. This provides the foundation for kinematic and dynamic control of manipulators with visual feedback. Second, the Euler-angle-expressed rotation transformation is employed to estimate a subspace of the parameter space of the vision system. Based on these two results, and with arbitrarily chosen parameters in this subspace, tracking controllers are proposed so that the image errors can be made as small as desired as long as the control gain is allowed to be large. The controller does not use visual velocity, achieving high and robust performance with a low sampling rate of the vision system. The obtained results are proved by Lyapunov's direct method. Experiments are included to demonstrate the effectiveness of the proposed controller.
Keywords: robustness; visual tracking; manipulator; camera; intrinsic and extrinsic parameters
18. Robust Visual Tracking with Hierarchical Deep Features Weighted Fusion
Authors: Dianwei Wang, Chunxiang Xu, Daxiang Li, Ying Liu, Zhijie Xu, Jing Wang. Journal of Beijing Institute of Technology (EI, CAS), 2019, No. 4, pp. 770-776 (7 pages)
To solve the problem of the low robustness of trackers under significant appearance changes in complex backgrounds, a novel moving target tracking method based on hierarchical deep features weighted fusion and correlation filters is proposed. Firstly, multi-layer features are extracted by a deep model pre-trained on massive object recognition datasets. The linearly separable features of the Relu3-1, Relu4-1 and Relu5-4 layers of VGG-Net-19 are especially suitable for target tracking. Then, correlation filters over the hierarchical convolutional features are learned to generate their correlation response maps. Finally, a novel weight adjustment approach is presented to fuse the response maps; the maximum value of the final response map gives the location of the target. Extensive experiments on the object tracking benchmark datasets demonstrate high robustness and recognition precision compared with several state-of-the-art trackers under different conditions.
Keywords: visual tracking; convolution neural network; correlation filter; feature fusion
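A minimal sketch of the fusion step described above: per-layer correlation response maps are combined by a weighted sum and the peak of the fused map gives the target location. The layer weights below are illustrative, not the adaptively adjusted ones learned in the paper.

```python
# Sketch of weighted fusion of hierarchical correlation response maps.
import numpy as np

def fuse_response_maps(response_maps, weights):
    """response_maps: dict layer_name -> HxW map; weights: dict layer_name -> scalar."""
    fused = sum(w * response_maps[name] for name, w in weights.items())
    row, col = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, (row, col)

rng = np.random.default_rng(5)
h, w = 50, 50
maps = {name: rng.random((h, w)) * s           # deeper layers: coarser but more semantic
        for name, s in [("relu3_1", 0.6), ("relu4_1", 0.8), ("relu5_4", 1.0)]}
weights = {"relu3_1": 0.25, "relu4_1": 0.5, "relu5_4": 1.0}
fused, peak = fuse_response_maps(maps, weights)
print("target located at response-map cell:", peak)
```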
19. An Adaptive Padding Correlation Filter With Group Feature Fusion for Robust Visual Tracking
Authors: Zihang Feng, Liping Yan, Yuanqing Xia, Bo Xiao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 10, pp. 1845-1860 (16 pages)
In recent visual tracking research, correlation filter (CF) based trackers have become popular because of their high speed and considerable accuracy. Previous methods mainly work on extending features and addressing the boundary effect to learn a better correlation filter; however, the related studies are insufficient. By exploring the potential of trackers in these two aspects, a novel adaptive padding correlation filter (APCF) with feature group fusion is proposed for robust visual tracking in this paper, based on the popular context-aware tracking framework. In the tracker, three feature groups are fused by a weighted sum of their normalized response maps, to alleviate the risk of drift caused by the extreme change of a single feature. Moreover, to improve the adaptability of the padding used in filter training for different object shapes, the best padding is selected from a preset pool according to the tracking precision over the whole video, where the tracking precision is predicted by a prediction model trained on the sequence features of the first several frames. The sequence features include three traditional features and eight newly constructed features. Extensive experiments demonstrate that the proposed tracker is superior to most state-of-the-art correlation filter based trackers and shows a stable improvement over the basic trackers.
Keywords: adaptive padding; context information; correlation filter (CF); feature group fusion; robust visual tracking
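The adaptive-padding idea in the abstract above can be sketched as selecting, from a preset pool, the padding whose predicted precision is highest; the toy linear predictor, its coefficients, and the sequence features below are stand-ins for the trained prediction model and the eleven sequence features described in the paper.

```python
# Schematic sketch of choosing a search-region padding from a preset pool
# based on a precision predicted from early-frame sequence features.
import numpy as np

PADDING_POOL = (1.5, 2.0, 2.5, 3.0)   # assumed preset pool of paddings

def predict_precision(seq_features, padding, coefs):
    """Toy linear prediction model: precision ~= coefs . [seq_features, padding]."""
    x = np.append(seq_features, padding)
    return float(coefs @ x)

def select_padding(seq_features, coefs):
    scores = {p: predict_precision(seq_features, p, coefs) for p in PADDING_POOL}
    return max(scores, key=scores.get), scores

# seq_features would summarize the first several frames (e.g., aspect ratio, motion,
# scale change); coefs would come from offline training of the prediction model.
seq_features = np.array([0.8, 0.3, 1.2])
coefs = np.array([0.2, -0.1, 0.05, 0.15])
best_padding, scores = select_padding(seq_features, coefs)
print("selected padding:", best_padding)
```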
20. Enhancing the Robustness of Visual Object Tracking via Style Transfer
Authors: Abdollah Amirkhani, Amir Hossein Barshooi, Amir Ebrahimi. Computers, Materials & Continua (SCIE, EI), 2022, No. 1, pp. 981-997 (17 pages)
The performance and accuracy of computer vision systems are affected by noise in different forms. Although numerous solutions and algorithms have been presented for dealing with each type of noise, a comprehensive technique that can cover all the diverse noises and mitigate their damaging effects on the performance and precision of various systems is still missing. In this paper, we focus on the stability and robustness of one computer vision branch (i.e., visual object tracking). We demonstrate that, without imposing a heavy computational load on a model or changing its algorithms, the drop in performance and accuracy when a system is exposed to an unseen noise-laden test dataset can be prevented by simply applying the style transfer technique to the training dataset and training the model with a combination of the stylized and original data. To verify the proposed approach, it is applied to a generic object tracker based on regression networks. The method's validity is confirmed by testing it on an exclusive benchmark comprising 50 image sequences, each containing 15 types of noise at five different intensity levels. The OPE curves obtained show a 40% increase in the robustness of the proposed object tracker against noise compared with the other trackers considered.
Keywords: style transfer; visual object tracking; robustness; corruption