Abstract
Deep learning programs have achieved tremendous success in a wide range of fields. However, internal errors in these programs can waste substantial resources and even cause catastrophic failures. This paper analyzes the typical defects that cause task execution failures when programs run in practice, together with their key influencing factors, and proposes a memory prediction method for deep learning programs based on static analysis and a self-attention network. The method achieves a mean relative prediction error of 8.38% on program memory estimation tasks, effectively preventing out-of-memory failures and supporting rational hardware resource allocation.
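The abstract reports a mean relative prediction error of 8.38%; the paper itself does not give the metric's formula here, but the standard definition and a toy scaled dot-product self-attention step (the mechanism the method is built on) can be sketched as follows. The feature layout and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a set of feature vectors.

    X: (n_ops, d) matrix, one row per operator feature vector (a
    hypothetical layout for features extracted by static analysis).
    Projections are omitted (identity) purely for illustration.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                 # pairwise attention scores
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # row-wise softmax
    return weights @ X                            # attended representations

def mean_relative_error(pred, true):
    """Mean relative prediction error: mean(|pred - true| / true)."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean(np.abs(pred - true) / true))

# Toy usage: three operator feature vectors, then the error metric.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
H = self_attention(X)                              # shape (3, 2)
err = mean_relative_error([9.2, 10.5], [10.0, 10.0])  # 0.065, i.e. 6.5%
```

Each output row of `self_attention` is a convex combination of the input rows, which is why a set of per-operator features can be aggregated into a fixed representation for the downstream memory regressor.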
Authors
LIU Chen; LU Jie; LI Lian (State Key Laboratory of Processors, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190; University of Chinese Academy of Sciences, Beijing 100049)
Source
Chinese High Technology Letters (《高技术通讯》), indexed in CAS and the Peking University Core Journal list; 2024, Issue 10, pp. 1036-1045 (10 pages)
Funding
Supported by the National Natural Science Foundation of China (Grant No. 62132020).
Keywords
deep learning
static analysis
memory estimation