Funding: supported in part by the National Natural Science Foundation of China (Nos. 62002395, 61976250, and U1811463), the National Key R&D Program of China (No. 2021ZD0111601), and the Guangdong Basic and Applied Basic Research Foundation, China (Nos. 2021A15150123 and 2020B1515020048).
Abstract: Visual representation learning is ubiquitous in real-world applications, including visual comprehension, video understanding, multi-modal analysis, human-computer interaction, and urban computing. With the emergence of huge amounts of multimodal, heterogeneous spatial/temporal/spatial-temporal data in the big data era, the lack of interpretability, robustness, and out-of-distribution generalization has become a major challenge for existing visual models. Most existing methods tend to fit the original data/variable distributions while ignoring the essential causal relations behind the multi-modal knowledge, so there is little unified guidance or analysis of why modern visual representation learning methods easily collapse into data bias and exhibit limited generalization and cognitive abilities. Inspired by the strong inference ability of human-level agents, recent years have therefore witnessed great effort in developing causal reasoning paradigms to realize robust representation and model learning with good cognitive ability. In this paper, we conduct a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets. The limitations of current methods and datasets are also discussed. Moreover, we propose prospective challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in visual representation learning. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods, publicly available benchmarks, and consensus-building standards for reliable visual representation learning and related real-world applications.
Funding: supported in part by the National Natural Science Foundation of China (No. 62176041) and in part by the Excellent Science and Technique Talent Foundation of Dalian (No. 2022RY21).
Abstract: Significant advancements have been witnessed in visual tracking applications in recent years, mainly due to the formidable modeling capabilities of the Vision Transformer (ViT). However, the strong performance of such trackers relies heavily on ViT models pretrained for long periods, limiting more flexible model designs for tracking tasks. To address this issue, we propose an efficient unsupervised ViT pretraining method for the tracking task based on masked autoencoders, called TrackMAE. During pretraining, we employ two shared-parameter ViTs, serving as the appearance encoder and the motion encoder, respectively. The appearance encoder encodes randomly masked image data, while the motion encoder encodes randomly masked pairs of video frames. Subsequently, an appearance decoder and a motion decoder separately reconstruct the original image data and video frame data at the pixel level. In this way, the ViT learns to understand both the appearance of images and the motion between video frames simultaneously. Experimental results demonstrate that ViT-Base and ViT-Large models, pretrained with TrackMAE and combined with a simple tracking head, achieve state-of-the-art (SOTA) performance without additional design. Moreover, compared with the currently popular MAE pretraining methods, TrackMAE consumes only 1/5 of the training time, which will facilitate the customization of diverse models for tracking. For instance, we additionally customize a lightweight ViT-XS, which achieves SOTA efficient tracking performance.
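As a rough illustration of the masked-autoencoder-style input preparation that the TrackMAE abstract describes (random masking of image patches before encoding), the sketch below implements generic MAE-style patchification and random patch masking. The patch size, mask ratio, and function names here are illustrative assumptions for a minimal sketch, not details taken from the paper.

```python
import numpy as np

def patchify(img, patch=16):
    """Split an (H, W, C) image into flat patches of shape (N, patch*patch*C)."""
    H, W, C = img.shape
    gh, gw = H // patch, W // patch
    # Crop to a multiple of the patch size, then rearrange into a patch grid.
    grid = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, C)
    return grid.transpose(0, 2, 1, 3, 4).reshape(gh * gw, -1)

def random_mask(patches, mask_ratio=0.75, rng=None):
    """Keep a random subset of patches, as in MAE-style random masking."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])        # indices of visible patches
    return patches[keep_idx], keep_idx

# A 224x224 RGB image yields a 14x14 grid of 16x16 patches (196 total);
# with a 0.75 mask ratio, only 49 visible patches would reach the encoder.
img = np.zeros((224, 224, 3), dtype=np.float32)
patches = patchify(img)
visible, keep_idx = random_mask(patches)
```

In an actual MAE-style pipeline, only the visible patches are fed to the encoder, and the decoder reconstructs the full patch set at the pixel level from the encoded tokens plus mask tokens; for the motion branch described in the abstract, the same masking would be applied to a pair of video frames.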