Journal Articles
249 articles found
1. Scribble-Supervised Video Object Segmentation (cited: 3)
Authors: Peiliang Huang, Junwei Han, Nian Liu, Jun Ren, Dingwen Zhang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 2, pp. 339-353 (15 pages)
Recently, video object segmentation has received great attention in the computer vision community. Most of the existing methods heavily rely on pixel-wise human annotations, which are expensive and time-consuming to obtain. To tackle this problem, we make an early attempt to achieve video object segmentation with scribble-level supervision, which can alleviate large amounts of human labor for collecting manual annotations. However, using conventional network architectures and learning objective functions under this scenario cannot work well, as the supervision information is highly sparse and incomplete. To address this issue, this paper introduces two novel elements to learn the video object segmentation model. The first one is the scribble attention module, which captures more accurate context information and learns an effective attention map to enhance the contrast between foreground and background. The other one is the scribble-supervised loss, which can optimize the unlabeled pixels and dynamically correct inaccurately segmented areas during the training stage. To evaluate the proposed method, we implement experiments on two video object segmentation benchmark datasets, YouTube-VOS and densely annotated video segmentation (DAVIS)-2017. We first generate the scribble annotations from the original per-pixel annotations. Then, we train our model and compare its test performance with the baseline models and other existing works. Extensive experiments demonstrate that the proposed method works effectively and approaches the performance of methods requiring dense per-pixel annotations.
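A minimal sketch of how scribble-level supervision can be wired into training: cross-entropy is evaluated only on the sparsely labeled scribble pixels, with the rest ignored. The function name `partial_cross_entropy` is hypothetical, and this is only the simplest baseline form of such a loss; the paper's scribble-supervised loss additionally optimizes unlabeled pixels, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, scribble, ignore_index=255):
    """Cross-entropy computed only on scribble-annotated pixels.

    logits:   (B, C, H, W) raw network outputs
    scribble: (B, H, W) sparse labels; unlabeled pixels carry `ignore_index`
    """
    return F.cross_entropy(logits, scribble, ignore_index=ignore_index)

# Toy usage: 2 classes, one labeled scribble pixel in a 4x4 image.
logits = torch.randn(1, 2, 4, 4, requires_grad=True)
scribble = torch.full((1, 4, 4), 255, dtype=torch.long)  # 255 = unlabeled
scribble[0, 1, 1] = 1                                    # a foreground scribble pixel
loss = partial_cross_entropy(logits, scribble)
loss.backward()
```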
Keywords: convolutional neural networks (CNNs), scribble, self-attention, video object segmentation, weakly supervised
2. Objective Performance Evaluation of Video Segmentation Algorithms with Ground-Truth (cited: 1)
Authors: Yang Gaobo, Zhang Zhaoyang. Journal of Shanghai University (English Edition) (CAS), 2004, No. 1, pp. 70-74 (5 pages)
While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology to objectively evaluate video segmentation algorithms with ground-truth, based on computing the deviation of segmentation results from the reference segmentation. Four different metrics, based on pixel classification, edges, relative foreground area, and relative position respectively, are combined to address spatial accuracy. Temporal coherency is evaluated by utilizing the difference of spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking among different segmentation algorithms and to optimally set the parameters for a given algorithm.
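A minimal sketch of the evaluation idea, assuming binary masks: spatial accuracy from the pixel-classification deviation against the reference, and temporal coherency from how that accuracy drifts between successive frames. Only one of the paper's four spatial metrics is shown, and the function names are illustrative, not the paper's implementation.

```python
import numpy as np

def spatial_accuracy(pred, gt):
    """Pixel-classification agreement between a binary prediction and its reference."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 1.0 - np.logical_xor(pred, gt).mean()  # 1 - misclassification rate

def temporal_coherency(preds, gts):
    """Stability of spatial accuracy across successive frames (less drift = better)."""
    acc = [spatial_accuracy(p, g) for p, g in zip(preds, gts)]
    return 1.0 - np.mean(np.abs(np.diff(acc)))

# Toy usage with two 8x8 frames.
gt = [np.zeros((8, 8), np.uint8) for _ in range(2)]
pr = [np.zeros((8, 8), np.uint8) for _ in range(2)]
gt[0][2:6, 2:6] = 1; pr[0][2:6, 2:5] = 1
gt[1][2:6, 2:6] = 1; pr[1][2:6, 2:6] = 1
print(spatial_accuracy(pr[0], gt[0]), temporal_coherency(pr, gt))
```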
Keywords: video object segmentation, performance evaluation, MPEG-4
3. AUTOMATIC SEGMENTATION OF VIDEO OBJECT PLANES IN MPEG-4 BASED ON SPATIO-TEMPORAL INFORMATION
Authors: Xia Jinxiang, Huang Shunji. Journal of Electronics (China), 2004, No. 3, pp. 206-212 (7 pages)
Segmentation of semantic Video Object Planes (VOPs) from a video sequence is key to the MPEG-4 standard with content-based video coding. In this paper, an approach for automatic Segmentation of VOPs Based on Spatio-Temporal Information (SBSTI) is proposed. The results demonstrate the good performance of the algorithm.
Keywords: MPEG-4, image segmentation, VOP, spatial information, SBSTI
4. Evaluating quality of motion for unsupervised video object segmentation
Authors: CHENG Guanjun, SONG Huihui. Optoelectronics Letters (EI), 2024, No. 6, pp. 379-384 (6 pages)
Current mainstream unsupervised video object segmentation (UVOS) approaches typically incorporate optical flow as motion information to locate the primary objects in coherent video frames. However, they fuse appearance and motion information without evaluating the quality of the optical flow. When poor-quality optical flow is used for the interaction with the appearance information, it introduces significant noise and leads to a decline in overall performance. To alleviate this issue, we first employ a quality evaluation module (QEM) to evaluate the optical flow. Then, we select high-quality optical flow as motion cues to fuse with the appearance information, which can prevent poor-quality optical flow from diverting the network's attention. Moreover, we design an appearance-guided fusion module (AGFM) to better integrate appearance and motion information. Extensive experiments on several widely utilized datasets, including DAVIS-16, FBMS-59, and YouTube-Objects, demonstrate that the proposed method outperforms existing methods.
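A minimal sketch of the underlying idea of gating the motion branch by a predicted flow-quality score before fusing with appearance features. The class name `QualityGatedFusion` and the pooled sigmoid quality head are my assumptions for illustration; they are not the paper's QEM/AGFM implementation.

```python
import torch
import torch.nn as nn

class QualityGatedFusion(nn.Module):
    """Fuse appearance and flow features, down-weighting the flow branch by a
    predicted quality score so unreliable optical flow contributes less."""
    def __init__(self, channels):
        super().__init__()
        self.quality_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, appearance_feat, flow_feat):
        q = self.quality_head(flow_feat)              # (B, 1, 1, 1) score in [0, 1]
        gated_flow = q * flow_feat                    # suppress poor-quality flow
        return self.fuse(torch.cat([appearance_feat, gated_flow], dim=1))

# Toy usage.
m = QualityGatedFusion(16)
out = m(torch.randn(2, 16, 32, 32), torch.randn(2, 16, 32, 32))
```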
Keywords: evaluating quality of motion, unsupervised video object segmentation
5. Automatic Video Segmentation Algorithm by Background Model and Color Clustering
Authors: Sha Yun, Wang Jun, Liu Yushu. Journal of Beijing Institute of Technology (EI, CAS), 2003, No. S1, pp. 134-138 (5 pages)
In order to detect objects in video efficiently, an automatic and real-time video segmentation algorithm based on a background model and color clustering is proposed. This algorithm consists of four phases: background restoration, moving object extraction, moving object region clustering, and post-processing. The threshold for background restoration is not given in advance; it is obtained automatically. A new object region clustering algorithm based on the background model and color clustering is proposed to remove significant noise. An efficient method of eliminating shadows is also used. This approach was compared with other methods on pixel error ratio. The experimental results indicate that the algorithm is correct and efficient.
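A minimal sketch of the first two phases under simplifying assumptions: a temporal-median background model and a data-driven difference threshold. The median model and the mean-plus-two-sigma threshold are my illustrative choices, not necessarily the paper's restoration or threshold rules, and the function names are hypothetical.

```python
import numpy as np

def restore_background(frames):
    """Per-pixel temporal median as a simple static-background estimate."""
    return np.median(np.stack(frames, axis=0), axis=0)

def extract_moving_objects(frame, background):
    """Threshold the absolute background difference; the threshold is derived
    from the difference statistics rather than fixed in advance."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    thr = diff.mean() + 2.0 * diff.std()     # automatic, data-driven threshold
    return (diff > thr).astype(np.uint8)

# Toy usage: a bright square moves across an otherwise static scene.
frames = [np.zeros((32, 32), np.float32) for _ in range(5)]
for t, f in enumerate(frames):
    f[10:14, 4 + 4 * t: 8 + 4 * t] = 200.0
bg = restore_background(frames)
mask = extract_moving_objects(frames[-1], bg)
```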
Keywords: video segmentation, background restoration, object region clustering
6. Motion feature descriptor based moving objects segmentation
Authors: Yuan Hui, Chang Yilin, Ma Yanzhuo, Bai Donglin, Lu Zhaoyang. High Technology Letters (EI, CAS), 2012, No. 1, pp. 84-89 (6 pages)
Keywords: moving object segmentation, feature descriptor, fuzzy C-means clustering algorithm, segmentation method, motion information, motion features, motion intensity, MFD
7. Video Occluded Object Segmentation Based on Fusion of Tracking and Detection Temporal Features
Authors: Zheng Shenhai, Gao Qian, Liu Pengwei, Li Weisheng. Computer Science (CSCD, PKU Core), 2024, No. S01, pp. 403-408 (6 pages)
Video instance segmentation is a recently emerged vision task that extends image instance segmentation with temporal characteristics, aiming to segment the objects in every frame while tracking them across frames. The rapid development of the mobile internet and artificial intelligence has produced massive amounts of video data, but objects in video often appear split or blurred due to shooting angles, fast motion, and partial occlusion, which makes accurately segmenting, processing, and analyzing objects in video data a major challenge. A survey of the literature and practical experiments show that existing video instance segmentation methods perform poorly under occlusion. To address this problem, an improved occlusion-aware video instance segmentation algorithm is proposed, which improves segmentation performance by fusing a Transformer with the temporal features of tracking and detection. To strengthen the network's ability to learn spatial position information, the algorithm introduces the temporal dimension into the Transformer network. Considering the mutual dependence and reinforcement among object detection, tracking, and segmentation in video, a fused tracking module and a detection temporal feature module are proposed that effectively aggregate the tracking offsets of objects across the video, improving object segmentation under occlusion. Experiments on the OVIS and YouTube-VIS datasets verify the effectiveness of the proposed method. Compared with current baseline methods, it achieves better segmentation accuracy, further demonstrating its superiority.
Keywords: video instance segmentation, object detection, object tracking, temporal features, occluded objects
8. Semi-Supervised Video Object Segmentation Based on an Encoding Memory Network
Authors: Yin Liang, Zhang Zhao, Zhang Baopeng. Journal of Projectiles, Rockets, Missiles and Guidance (PKU Core), 2024, No. 3, pp. 11-21 (11 pages)
Video object segmentation is a key task in computer vision and is of great significance in fields such as autonomous driving and video coding. For this task, an efficient encoding memory network (EMNet) is proposed to perform semi-supervised video object segmentation. The method consists of an adaptive reference frame selection module, a dual-path matching module, a feature processing module, and a feature aggregation module. The adaptive reference frame selection module jointly considers mask confidence and similarity to select reference frames that carry rich information. The dual-path matching module performs bidirectional and dual-scale matching between the query frame and the reference frames to improve the accuracy of object feature matching. The feature processing module contains a semantic enhancement module and a feature refinement module, which strengthen the semantic and detail information of the object through low-pass and high-pass filtering. The feature aggregation module then fuses and exploits these features. Finally, evaluation on the DAVIS2017 dataset demonstrates the effectiveness of the proposed model.
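A minimal sketch of adaptive reference frame selection that jointly scores mask confidence and feature similarity, as the abstract describes. The particular scoring (one minus mask entropy, cosine similarity, and a weighting factor `alpha`) and the function name are my assumptions, not the EMNet implementation.

```python
import torch
import torch.nn.functional as F

def select_reference_frame(query_feat, past_feats, past_mask_probs, alpha=0.5):
    """Score each past frame by mask confidence and feature similarity to the
    query frame, and return the index of the highest-scoring one."""
    q = F.normalize(query_feat.flatten(), dim=0)
    scores = []
    for feat, probs in zip(past_feats, past_mask_probs):
        p = probs.clamp(1e-6, 1 - 1e-6)
        conf_score = 1.0 + (p * p.log2() + (1 - p) * (1 - p).log2()).mean()  # 1 - entropy
        sim_score = torch.dot(q, F.normalize(feat.flatten(), dim=0))         # cosine similarity
        scores.append(alpha * conf_score + (1 - alpha) * sim_score)
    return int(torch.stack(scores).argmax())

# Toy usage: three candidate past frames with random features and mask probabilities.
past_feats = [torch.randn(64) for _ in range(3)]
past_probs = [torch.rand(16, 16) for _ in range(3)]
idx = select_reference_frame(torch.randn(64), past_feats, past_probs)
```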
Keywords: video object segmentation, encoding memory network, attention mechanism, semantic segmentation, deep learning
9. Full-duplex strategy for video object segmentation
Authors: Ge-Peng Ji, Deng-Ping Fan, Keren Fu, Zhe Wu, Jianbing Shen, Ling Shao. Computational Visual Media (SCIE, EI, CSCD), 2023, No. 1, pp. 155-175 (21 pages)
Previous video object segmentation approaches mainly focus on simplex solutions linking appearance and motion, limiting effective feature collaboration between these two cues. In this work, we study a novel and efficient full-duplex strategy network (FSNet) to address this issue, by considering a better mutual restraint scheme linking motion and appearance, allowing exploitation of cross-modal features from the fusion and decoding stages. Specifically, we introduce a relational cross-attention module (RCAM) to achieve bidirectional message propagation across embedding sub-spaces. To improve the model's robustness and update inconsistent features from the spatiotemporal embeddings, we adopt a bidirectional purification module after the RCAM. Extensive experiments on five popular benchmarks show that our FSNet is robust to various challenging scenarios (e.g., motion blur and occlusion), and compares well to leading methods both for video object segmentation and video salient object detection. The project is publicly available at https://github.com/GewelsJI/FSNet.
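A minimal sketch of bidirectional message passing between appearance and motion token sequences, the general idea behind a full-duplex scheme. This is a generic cross-attention block with assumed names and shapes, not the released RCAM code; see the FSNet repository above for the actual implementation.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Let appearance features attend to motion features and vice versa,
    so the two cues exchange messages in both directions."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.a2m = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.m2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, appearance, motion):
        # appearance, motion: (B, N, dim) token sequences (e.g., flattened feature maps)
        app_enh, _ = self.a2m(appearance, motion, motion)      # appearance queries motion
        mot_enh, _ = self.m2a(motion, appearance, appearance)  # motion queries appearance
        return appearance + app_enh, motion + mot_enh

# Toy usage.
block = BidirectionalCrossAttention(dim=32)
a, m = block(torch.randn(2, 64, 32), torch.randn(2, 64, 32))
```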
Keywords: video object segmentation (VOS), video salient object detection (V-SOD), visual attention
10. Global video object segmentation with spatial constraint module
Authors: Yadang Chen, Duolin Wang, Zhiguo Chen, Zhi-Xin Yang, Enhua Wu. Computational Visual Media (SCIE, EI, CSCD), 2023, No. 2, pp. 385-400 (16 pages)
We present a lightweight and efficient semi-supervised video object segmentation network based on the space-time memory framework. To some extent, our method solves two difficulties encountered in traditional video object segmentation: one is that the single-frame computation time is too long, and the other is that the current frame's segmentation should use more information from past frames. The algorithm uses a global context (GC) module to achieve high-performance, real-time segmentation. The GC module can effectively integrate multi-frame image information without increased memory and can process each frame in real time. Moreover, the prediction mask of the previous frame is helpful for the segmentation of the current frame, so we input it into a spatial constraint module (SCM), which constrains the areas of segments in the current frame. The SCM effectively alleviates mismatching of similar targets yet consumes few additional resources. We added a refinement module to the decoder to improve boundary segmentation. Our model achieves state-of-the-art results on various datasets, scoring 80.1% on YouTube-VOS 2018 and a J&F score of 78.0% on DAVIS 2017, while taking 0.05 s per frame on the DAVIS 2016 validation dataset.
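A minimal sketch of a spatial constraint in the spirit described above: the previous frame's mask is dilated into a search region and used to suppress responses far from it. The max-pooling dilation and the function name are illustrative assumptions, not the paper's SCM.

```python
import torch
import torch.nn.functional as F

def spatial_constraint(current_logits, prev_mask, dilation=15):
    """Suppress responses far from the previous frame's mask.

    current_logits: (B, 1, H, W) raw foreground scores for the current frame
    prev_mask:      (B, 1, H, W) binary mask predicted for the previous frame
    """
    # Dilate the previous mask with max-pooling to tolerate object motion.
    search_region = F.max_pool2d(prev_mask.float(), kernel_size=dilation,
                                 stride=1, padding=dilation // 2)
    return current_logits * search_region  # zero out scores outside the region

# Toy usage.
logits = torch.randn(1, 1, 64, 64)
prev = torch.zeros(1, 1, 64, 64); prev[..., 20:40, 20:40] = 1
constrained = spatial_constraint(logits, prev)
```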
Keywords: video object segmentation, semantic segmentation, global context (GC) module, spatial constraint
11. MOTION-BASED REGION GROWING SEGMENTATION OF IMAGE SEQUENCES (cited: 1)
Authors: Lu Guanming, Bi Houjie, Jiang Ping (Department of Information Engineering, Nanjing University of Posts & Telecommunications, Nanjing 210003). Journal of Electronics (China), 2000, No. 1, pp. 53-58 (6 pages)
This paper proposes a motion-based region growing segmentation scheme for object-based video coding, which segments an image into homogeneous regions characterized by coherent motion. It adopts a block matching algorithm to estimate motion vectors and uses morphological tools such as open-close by reconstruction and the region-growing version of the watershed algorithm for spatial segmentation to improve the temporal segmentation. In order to determine reliable motion vectors, this paper also proposes a change detection algorithm and a multi-candidate pre-screening motion estimation method. Preliminary simulation results demonstrate that the proposed scheme is feasible. The main advantage of the scheme is its low computational load.
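A minimal sketch of exhaustive block matching with a sum-of-absolute-differences (SAD) criterion, the standard form of the motion estimation step mentioned above; block size, search range, and function name are illustrative, and the paper's pre-screening and change detection are not reproduced.

```python
import numpy as np

def block_matching(prev, curr, block=8, search=4):
    """For each block in `curr`, find the SAD-minimizing displacement into `prev`
    within a +/- `search` pixel window."""
    h, w = curr.shape
    motion = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block].astype(np.int32)
                        sad = np.abs(target - cand).sum()
                        if sad < best_sad:
                            best, best_sad = (dy, dx), sad
            motion[by // block, bx // block] = best
    return motion

# Toy usage: a square shifted between two frames.
prev = np.zeros((32, 32), np.uint8); prev[8:16, 8:16] = 255
curr = np.zeros((32, 32), np.uint8); curr[10:18, 11:19] = 255
vectors = block_matching(prev, curr)
```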
Keywords: change detection, motion estimation, image segmentation, object-based video coding
12. A U-Shaped Network for Video Object Segmentation
Authors: Huang Zhiyong, Han Shasha, Chen Zhijun, Yao Yu, Xiong Biao, Ma Kai. Journal of Graphics (CSCD, PKU Core), 2023, No. 1, pp. 104-111 (8 pages)
In semi-supervised segmentation tasks, the one-shot video object segmentation (OSVOS) method is guided by the object mask annotated in the first frame and separates the foreground object of subsequent frames from the video. Although it achieves impressive segmentation results, it is not suitable for cases where the foreground object's appearance changes significantly or where the foreground and background look similar. To address these problems, a U-shaped network structure for video object segmentation is proposed. An attention mechanism is inserted between the encoder and decoder of this network to establish associations among feature maps and produce global semantic information. At the same time, the loss function is optimized to further address the imbalance between classes and improve the model's robustness. In addition, multi-scale prediction is combined with a fully connected conditional random field (FC/Dense CRF) to improve the smoothness of segmentation boundaries. Extensive experiments on the challenging DAVIS 2016 dataset show that this method obtains competitive segmentation results compared with other state-of-the-art methods.
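A minimal sketch of one common way to counter foreground/background class imbalance in the loss, weighting each pixel inversely to its class frequency. This balanced BCE is my assumption of the kind of adjustment described, not the paper's exact loss, and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def balanced_bce_loss(logits, target, eps=1e-6):
    """Binary cross-entropy with per-image weights inversely proportional to
    class frequency, so a small foreground is not overwhelmed by background."""
    pos_frac = target.mean().clamp(eps, 1 - eps)
    weight = torch.where(target > 0.5, 1.0 - pos_frac, pos_frac)  # rarer class weighs more
    return F.binary_cross_entropy_with_logits(logits, target, weight=weight)

# Toy usage: foreground covers only a small fraction of the image.
logits = torch.randn(1, 1, 64, 64, requires_grad=True)
target = torch.zeros(1, 1, 64, 64); target[..., 28:36, 28:36] = 1
balanced_bce_loss(logits, target).backward()
```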
Keywords: semi-supervised video object segmentation, attention mechanism, loss function, multi-scale features, fully connected conditional random field
13. An Efficient Motion-Guided Unsupervised Video Object Segmentation Network (cited: 1)
Authors: Zhao Zicheng, Zhang Kaihua, Fan Jiaqing, Liu Qingshan. Acta Automatica Sinica (EI, CAS, CSCD, PKU Core), 2023, No. 4, pp. 872-880 (9 pages)
Many deep-learning-based unsupervised video object segmentation (UVOS) algorithms suffer from large model sizes and heavy computation, which significantly limits their practical application. A motion-guided video object segmentation network is proposed that greatly reduces the number of parameters and the computational cost while improving segmentation performance. The model consists of three parts: a two-stream network, a motion guidance module, and a multi-scale progressive fusion module. Specifically, RGB images and optical flow estimates are first fed into the two-stream network to extract object appearance features and motion features. The motion guidance module then extracts semantic information from the motion features through local attention, which guides the appearance features to learn richer semantics. Finally, the multi-scale progressive fusion module takes the features output at each stage of the two-stream network and progressively merges deep features into shallow features, improving edge segmentation. Extensive evaluations on three standard datasets demonstrate the superior performance of the method.
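A minimal sketch of deep-to-shallow progressive fusion over a feature pyramid, the general pattern behind the multi-scale fusion step described above. It assumes all levels share the same channel count, and the class name and smoothing convolutions are illustrative, not the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveFusion(nn.Module):
    """Fuse a feature pyramid from deep to shallow: each deeper (coarser) level is
    upsampled and merged into the next shallower one to sharpen boundaries."""
    def __init__(self, channels, levels=3):
        super().__init__()
        self.smooth = nn.ModuleList(nn.Conv2d(channels, channels, 3, padding=1)
                                    for _ in range(levels - 1))

    def forward(self, feats):
        # feats: list ordered shallow -> deep, all with `channels` channels
        fused = feats[-1]
        for i in range(len(feats) - 2, -1, -1):
            up = F.interpolate(fused, size=feats[i].shape[-2:], mode='bilinear',
                               align_corners=False)
            fused = self.smooth[i](feats[i] + up)
        return fused

# Toy usage: three pyramid levels of decreasing resolution.
m = ProgressiveFusion(channels=16)
feats = [torch.randn(1, 16, 64, 64), torch.randn(1, 16, 32, 32), torch.randn(1, 16, 16, 16)]
out = m(feats)
```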
Keywords: unsupervised video object segmentation, motion guidance, local attention, mutual attention
14. Deep Learning-based Moving Object Segmentation: Recent Progress and Research Prospects (cited: 1)
Authors: Rui Jiang, Ruixiang Zhu, Hu Su, Yinlin Li, Yuan Xie, Wei Zou. Machine Intelligence Research (EI, CSCD), 2023, No. 3, pp. 335-369 (35 pages)
Moving object segmentation (MOS), which aims at segmenting moving objects from video frames, is an important and challenging task in computer vision with various applications. With the development of deep learning (DL), MOS has also entered the era of deep models toward spatiotemporal feature learning. This paper provides the latest review of recent DL-based MOS methods proposed during the past three years. Specifically, we present a more up-to-date categorization based on model characteristics, then compare and discuss each category from the feature learning (FL) and model training and evaluation perspectives. For FL, the reviewed methods are divided into three types: spatial FL, temporal FL, and spatiotemporal FL, which are then analyzed from the input and model architecture aspects; three input types and four typical preprocessing subnetworks are summarized. In terms of training, we discuss ideas for enhancing model transferability. In terms of evaluation, based on a previous categorization of scene-dependent and scene-independent evaluation, combined with whether the videos used are recorded with static or moving cameras, we further provide four subdivided evaluation setups and analyze those of the reviewed methods. We also show performance comparisons of some reviewed MOS methods and analyze their technical advantages and disadvantages. Finally, based on the above comparisons and discussions, we present research prospects and future directions.
Keywords: moving object segmentation (MOS), change detection, background subtraction, deep learning (DL), video understanding
15. High-Performance Unsupervised Video Object Segmentation with a Depth-Signal-Guided Hybrid Transformer
Authors: Su Tiankang, Song Huihui, Fan Jiaqing, Zhang Kaihua. Acta Electronica Sinica (EI, CAS, CSCD, PKU Core), 2023, No. 5, pp. 1388-1395 (8 pages)
Existing unsupervised video object segmentation methods usually use optical flow as a motion cue to improve model performance. However, optical flow estimation often contains errors, which makes two-stream networks prone to overfitting to noise. To this end, this paper proposes an unsupervised video object segmentation algorithm based on a hybrid transformer, which introduces a depth signal to guide the transformer in efficiently fusing data of different modalities and learning more robust feature representations, thereby alleviating the model's overfitting to noise. First, a novel hybrid attention module is designed to obtain a global receptive field and let the features of different modalities interact fully, enhancing the global semantic information of the features and improving the model's robustness to interference. Then, to further perceive refined object edges, a local-nonlocal semantic enhancement module is designed, which introduces the inductive bias of local semantics to complement the learning of non-local semantic features, highlighting finer object regions while improving robustness. Finally, the enhanced features are fed into the transformer's decoder to predict high-quality segmentation results. Compared with state-of-the-art methods, the proposed algorithm achieves leading performance on four standard datasets, fully demonstrating its effectiveness.
Keywords: unsupervised video object segmentation, hybrid transformer, hybrid attention, multimodality, depth estimation, robust features
16. Object Contour Tracking Based on Spatially Weighted Log-Likelihood Ratio Correlation Filtering and Deep Snake
Authors: Li Hao, Yuan Guanglin, Qin Xiaoyan, Ju Changrui, Zhu Hong. Acta Electronica Sinica (EI, CAS, CSCD, PKU Core), 2023, No. 1, pp. 105-116 (12 pages)
In recent years, the representation of the target state in object tracking has shifted from coarse rectangular boxes to fine object masks. However, existing methods obtain object masks through region segmentation, which is slow, and the mask accuracy is limited by the tracking box. To address these problems, this paper proposes an object contour tracking method based on spatially weighted log-likelihood ratio correlation filtering and Deep Snake. The method consists of three stages: in the first stage, the proposed spatially weighted log-likelihood ratio correlation filter estimates an initial rectangular box for the target; in the second stage, Deep Snake deforms the initial box into the object contour; in the third stage, the tracking result is fitted from the object contour. The proposed method is validated on the OTB (Object Tracking Benchmark)-2015 and VOT (Visual Object Tracking)-2018 datasets, and the results show that it performs better than existing state-of-the-art object tracking methods.
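A minimal sketch of a spatially weighted log-likelihood ratio response: each pixel is scored by the log ratio of foreground to background color likelihoods and modulated by a center-emphasizing spatial weight. The grayscale histograms, Gaussian weight, and function name are my assumptions for illustration; they are not the paper's correlation filter formulation.

```python
import numpy as np

def log_likelihood_ratio_map(patch, fg_hist, bg_hist, bins=16, eps=1e-3):
    """Per-pixel log-likelihood ratio of foreground vs. background grey level,
    modulated by a spatial weight that emphasises the patch centre."""
    idx = (patch.astype(np.int32) * bins) // 256                 # quantise grey levels
    llr = np.log((fg_hist[idx] + eps) / (bg_hist[idx] + eps))
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    spatial = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * (0.3 * h) ** 2))
    return spatial * llr

# Toy usage: bright foreground histogram, dark background histogram.
fg_hist = np.zeros(16); fg_hist[12:] = 0.25
bg_hist = np.zeros(16); bg_hist[:4] = 0.25
patch = np.full((31, 31), 30, np.uint8); patch[10:20, 10:20] = 220
response = log_likelihood_ratio_map(patch, fg_hist, bg_hist)
```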
Keywords: object tracking, deep active contour, correlation filtering, spatial weighting, log-likelihood ratio, video object segmentation
17. Applications of Video Analysis Methods Based on Object Detection
Authors: Chen Guang, Qiao Liang, He Zhaoliang, Xiong Qin. Integrated Circuit Applications, 2023, No. 9, pp. 338-340 (3 pages)
This paper explains that object detection and object tracking are key technologies in the operation of video surveillance. Object detection techniques include the optical flow method, temporal differencing, the bee colony algorithm, background subtraction, and image segmentation. The paper analyzes the application of detection-based video analysis methods to fire safety hazard detection and abnormal vehicle behavior detection.
Keywords: object detection, video analysis, bee colony algorithm, image segmentation
18. A New Algorithm for Moving Object Extraction and Tracking in Object-Based Video Coding (cited: 15)
Authors: Zhu Zhongjie, Jiang Gangyi, Yu Mei, Wang Rangding, Wu Xunwei. Acta Electronica Sinica (EI, CAS, CSCD, PKU Core), 2003, No. 9, pp. 1426-1428 (3 pages)
Automatic and fast video object extraction and tracking is a key technology in object-based video coding. This paper proposes a new algorithm for moving object extraction and tracking. First, a binary motion mask image is obtained from multi-frame motion information and a higher-order statistics detection method; then an improved watershed algorithm is proposed to segment the motion region and its surroundings. The two results are combined by a projection operation to obtain the final moving object. Finally, a new moving object tracking algorithm is proposed that can track the object effectively.
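A minimal sketch of higher-order-statistics change detection, assuming a single inter-frame difference rather than the multi-frame information used above: the local fourth-order moment of the difference image is thresholded to form a binary motion mask. Window size, threshold rule, and the function name are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def change_mask_fourth_moment(prev, curr, win=5, k=3.0):
    """Binary change mask from the local fourth-order moment of the inter-frame
    difference; structured changes yield much larger moments than sensor noise."""
    d = curr.astype(np.float32) - prev.astype(np.float32)
    pad = win // 2
    padded = np.pad(d, pad, mode='reflect')
    moment = np.zeros_like(d)
    h, w = d.shape
    for y in range(h):
        for x in range(w):
            block = padded[y:y + win, x:x + win]
            moment[y, x] = ((block - block.mean()) ** 4).mean()
    thr = moment.mean() + k * moment.std()   # simple data-driven decision threshold
    return (moment > thr).astype(np.uint8)

# Toy usage: Gaussian sensor noise plus a genuinely moving bright patch.
rng = np.random.default_rng(0)
prev = rng.normal(0, 2, (32, 32)); curr = prev + rng.normal(0, 2, (32, 32))
curr[10:16, 12:18] += 100
mask = change_mask_fourth_moment(prev, curr)
```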
Keywords: video object segmentation, limited-region segmentation, improved watershed algorithm, motion tracking
19. Video Object Segmentation Based on Template Matching (cited: 7)
Authors: Song Lifeng, Wei Gang, Wang Qunsheng. Acta Electronica Sinica (EI, CAS, CSCD, PKU Core), 2002, No. 7, pp. 1075-1078 (4 pages)
Video object segmentation is a key technology of the MPEG-4 standard. Combining template matching with object tracking based on motion estimation and compensation, this paper proposes a new method that can segment MPEG-4 video objects from complex scenes. After a segmentation mask is obtained by motion estimation and compensation, the object color of the initial frame is used as a template, and the object is detected by template matching in the contour boundary region of the current frame to refine the contour. Within a certain range, the method effectively handles occlusion and can track objects in arbitrarily long sequences from the initial frame.
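A minimal sketch of contour refinement by template matching in a boundary band, assuming a grayscale frame and a single mean-intensity template: only pixels in a narrow band around the coarse (motion-compensated) mask boundary are re-labeled by their closeness to the initial-frame object template. The band width, tolerance, and function name are my assumptions; the paper matches color templates, which is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def refine_boundary_by_template(frame, coarse_mask, template_mean, band=3, tol=30.0):
    """Re-label pixels in a narrow band around the coarse mask boundary according
    to how close their grey level is to the initial-frame object template."""
    dilated = ndimage.binary_dilation(coarse_mask, iterations=band)
    eroded = ndimage.binary_erosion(coarse_mask, iterations=band)
    band_region = dilated & ~eroded                      # uncertain contour zone
    refined = coarse_mask.copy()
    close_to_object = np.abs(frame.astype(np.float32) - template_mean) < tol
    refined[band_region] = close_to_object[band_region]
    return refined

# Toy usage: the motion-compensated mask slightly overshoots the bright object.
frame = np.full((32, 32), 40, np.float32); frame[8:20, 8:20] = 200
coarse = np.zeros((32, 32), bool); coarse[6:22, 6:22] = True
refined = refine_boundary_by_template(frame, coarse, template_mean=200.0)
```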
Keywords: video object segmentation, occlusion, initial frame, object tracking, template matching, MPEG-4
20. An Implementation Method for Automatically Segmenting and Tracking Moving Video Objects (cited: 28)
Authors: Han Jun, Xiong Zhang, Sun Wenyan, Gong Shengrong. Journal of Image and Graphics (Series A) (CSCD, PKU Core), 2001, No. 8, pp. 732-738 (7 pages)
With the establishment of the MPEG-4 compression standard, research on segmenting and tracking moving video objects has become extremely important. In the MPEG-4 video coding standard, to enable content-based interaction, every frame of a video sequence is represented by video object planes (VOPs). To generate VOPs, moving objects in the video sequence must be segmented effectively and tracked as they change over time. To this end, a joint spatio-temporal method for segmenting and tracking moving video objects is proposed and implemented. The method first applies a fourth-order statistic hypothesis test on consecutive frame differences to locate moving objects and automatically separate motion regions from background regions. Within the motion regions, the watershed algorithm from mathematical morphology is used to accurately extract object contours. Finally, the extracted moving object is used as a template, and the Hausdorff distance is used to track and extract the moving objects in subsequent frames of the sequence. Experimental results show that the method can segment and track moving video objects effectively, reduce computational complexity, and requires few tuning parameters.
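A minimal sketch of the Hausdorff-distance matching used for template-based tracking: the symmetric Hausdorff distance between the template point set and each candidate contour picks the best match in the next frame. The candidate-enumeration strategy and function names are illustrative; only the distance measure itself is standard.

```python
import numpy as np

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (e.g., object contours)."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def track_by_hausdorff(template_pts, candidate_sets):
    """Pick the candidate contour whose Hausdorff distance to the template is smallest."""
    dists = [hausdorff_distance(template_pts, c) for c in candidate_sets]
    return int(np.argmin(dists)), min(dists)

# Toy usage: the second candidate is the template shifted by one pixel.
template = np.array([[0, 0], [0, 5], [5, 5], [5, 0]], float)
candidates = [template + 10.0, template + 1.0]
best_idx, best_dist = track_by_hausdorff(template, candidates)
```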
Keywords: video segmentation, image segmentation, moving object tracking, watershed algorithm, Hausdorff distance, MPEG-4, image compression, automatic segmentation