Abstract
In traditional content-based video retrieval, the breadth of the video domain leaves a wide semantic gap between low-level visual features and high-level concepts, which often degrades retrieval performance. This paper argues that a more practical approach is to take event-relevant story units, which carry more semantic information than shots, as the retrieval unit: textual information is extracted from event-relevant media, and machine learning methods automatically build models of event classes, giving users a conceptualized way to query for story units. The paper presents a combined feature selection method and a two-stage pruning K-Nearest Neighbor algorithm, TSP-KNN. The combined feature selection method outperforms Mutual Information (MI) for retrieving event-relevant story units. TSP-KNN first prunes the training set and then trains a KNN classifier on the pruned set, which addresses the sample-overlap and multi-center distribution problems. Experimental results show that the proposed methods are effective and clearly improve retrieval performance for event-relevant story units.
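The abstract names the two stages of TSP-KNN (prune the training set, then classify with KNN) but does not specify the pruning rule. The sketch below is a minimal illustration, assuming a Wilson-style editing rule for stage one: a training sample is dropped when the majority label of its k nearest other samples disagrees with its own label, which removes points lying in the overlap region between classes. The function names and the choice of Euclidean distance are illustrative, not taken from the paper.

```python
import numpy as np

def prune_training_set(X, y, k=3):
    """Stage 1 (sketch): Wilson-style editing. Drop samples whose k
    nearest *other* neighbors mostly carry a different label, i.e.
    samples sitting in the class-overlap region."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # exclude the sample itself from its neighborhood
        idx = np.argsort(d)[:k]
        labels, counts = np.unique(y[idx], return_counts=True)
        if labels[np.argmax(counts)] == y[i]:
            keep.append(i)
    return X[keep], y[keep]

def knn_predict(X_train, y_train, X_query, k=3):
    """Stage 2: plain KNN majority vote on the pruned training set."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)
        idx = np.argsort(d)[:k]
        labels, counts = np.unique(y_train[idx], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)
```

For example, a point labeled 1 that lies inside a cluster of class-0 samples is removed in stage one, so it can no longer pull nearby queries toward the wrong class in stage two.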
Source
Journal of Signal Processing (《信号处理》), CSCD, PKU Core Journal, 2006, No. 5, pp. 755-760 (6 pages)
Funding
National Natural Science Foundation of China (Grant No. 60473117)
National "863" High-Tech Research and Development Program of China (Grant No. 2001AA115123)