Abstract
To address the unstructured nature and large volume of video data, a split-based decoding method built on the Hadoop parallel processing framework is proposed. The framework uses FFmpeg as the video decoder and OpenCV as the image processing engine; the FFmpeg source code is modified and extended to support split decoding in the Hadoop environment, enabling rapid processing of massive video. MapReduce implementation details are further exploited to find the optimal split size so that processing performance is maximized. Face detection experiments show that a Hadoop cluster of eight 4-core machines using the split decoding technique processes video about 45% faster than a traditional distributed cluster built from the same machines.
Given the large size and unstructured nature of digital video data, an extensible video processing framework based on Hadoop is presented to parallelize video processing tasks in a cloud environment. The framework uses FFmpeg as the video decoder and OpenCV as the image processing engine, modifying and extending the FFmpeg source code so that it supports split decoding in the Hadoop environment and thereby enables rapid processing of massive video. It exploits MapReduce implementation details to minimize video image copying and optimize processing performance. A face detection system implemented on top of the framework serves as a demonstration: on eight 4-core machines, a Hadoop cluster using split decoding achieves roughly a 45% speedup over a traditional distributed scheduling environment.
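The paper does not publish its source code, but the described architecture (each map task decodes one input split with the modified FFmpeg, then runs OpenCV face detection on the decoded frames) can be illustrated with a minimal Java sketch. Here, SplitDecoder is a hypothetical JNI wrapper standing in for the authors' modified FFmpeg, and the input format is assumed to supply (byte offset, video path) pairs per split; the OpenCV Java classes (CascadeClassifier, Mat) are real APIs.

    import java.io.IOException;
    import java.util.List;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.objdetect.CascadeClassifier;

    public class FaceDetectMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {

        private CascadeClassifier detector;

        @Override
        protected void setup(Context context) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            // Haar cascade file shipped to each node, e.g. via the distributed cache.
            detector = new CascadeClassifier("haarcascade_frontalface_default.xml");
        }

        @Override
        protected void map(LongWritable splitOffset, Text videoPath, Context context)
                throws IOException, InterruptedException {
            // Hypothetical call into the modified FFmpeg: decode only the frames
            // that belong to this input split instead of the whole file.
            long splitBytes = context.getConfiguration()
                    .getLong("video.split.bytes", 64L * 1024 * 1024);
            List<Mat> frames = SplitDecoder.decodeSplit(
                    videoPath.toString(), splitOffset.get(), splitBytes);

            long faces = 0;
            MatOfRect hits = new MatOfRect();
            for (Mat frame : frames) {
                detector.detectMultiScale(frame, hits);  // OpenCV face detection
                faces += hits.toArray().length;
                frame.release();
            }
            context.write(videoPath, new LongWritable(faces));
        }
    }

Decoding inside the map task is what lets the split size act as the tuning knob the abstract mentions: smaller splits increase parallelism but add per-split seek and decode overhead, so the optimum is found empirically.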
Source
《指挥信息系统与技术》 (Command Information System and Technology)
2015, Issue 2, pp. 57-60 (4 pages)
Funding
Partially supported by the Collaborative Innovation Center of Novel Software Technology and Industrialization