Abstract
In recent years, academia and industry have widely used big data processing systems to handle inference workloads based on deep neural networks (DNNs) in fields such as video analytics. In such scenarios, multiple parallel inference tasks in the big data system repeatedly load the same read-only DNN model, so the system cannot fully utilize GPU resources, which becomes a bottleneck for inference performance. This paper presents a model sharing technique for a single GPU card that shares one copy of the model data among DNN inference tasks. On this basis, to apply model sharing to every GPU in a distributed environment, the paper also designs an allocator that supports model sharing across multiple GPU cards. These optimizations were integrated into Spark running on a GPU platform to implement a distributed prototype system that supports large-scale inference workloads. Experimental results show that, for a traffic video processing workload based on YOLO-v3, the model sharing technique reduces GPU memory overhead and improves system throughput by up to 136% compared with a system that does not use model sharing.
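To make the idea concrete, below is a minimal, hypothetical Python sketch of the mechanism the abstract describes: concurrent inference tasks in one worker process obtain a single shared, read-only model per GPU instead of each loading its own copy, and a simple allocator spreads tasks across the node's GPU cards. This is not the authors' implementation (which is integrated into Spark on a GPU platform); the names load_model, pick_gpu, and get_shared_model, and the 4-GPU assumption, are illustrative only.

```python
# Illustrative sketch of per-GPU model sharing (assumed names; not the paper's code).
import itertools
import threading

_models = {}                             # (model_path, gpu_id) -> shared model object
_lock = threading.Lock()
_gpu_cycle = itertools.cycle(range(4))   # assumption: 4 GPU cards per node

def load_model(path, gpu_id):
    # Hypothetical framework-specific loader; in practice this would read the
    # DNN weights (e.g., YOLO-v3) into the memory of the given GPU.
    return {"path": path, "gpu": gpu_id}

def pick_gpu():
    # Toy allocator: round-robin tasks over the node's GPUs so that each
    # card ends up holding exactly one shared copy of the model.
    with _lock:
        return next(_gpu_cycle)

def get_shared_model(path, gpu_id):
    key = (path, gpu_id)
    # Double-checked locking: the model is loaded at most once per GPU;
    # every later task on that GPU reuses the cached, read-only instance.
    if key not in _models:
        with _lock:
            if key not in _models:
                _models[key] = load_model(path, gpu_id)
    return _models[key]

# Usage inside a parallel inference task (run_inference is hypothetical):
#   model = get_shared_model("yolov3.weights", pick_gpu())
#   run_inference(model, frame)
```

The sketch only conveys the caching and allocation logic; in the paper's prototype the sharing happens among inference tasks inside the distributed system's executors.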
Authors
DING Guangyao, CHEN Qihang, XU Chen, QIAN Weining, ZHOU Aoying (School of Data Science and Engineering, East China Normal University, Shanghai 200333, China)
Source
Journal of Tsinghua University (Science and Technology), 2022, No. 9, pp. 1435-1441 (7 pages)
Indexed in: EI, CAS, CSCD, Peking University Core Journals (北大核心)
Funding
Supported by the National Natural Science Foundation of China (Grant No. 61902128).