Abstract
The computing resources of Mobile Edge Computing (MEC) servers are limited, while computing tasks are subject to delay constraints. To shorten task completion time and reduce terminal energy consumption, a joint optimization method for offloading decisions and resource allocation is proposed. In a multi-user, multi-server MEC environment, a new objective function is designed to build the mathematical model. Based on this model and deep reinforcement learning theory, an improved Nature Deep Q-learning algorithm (Based DQN) is proposed. Experimental results show that, across the different objective functions, the Based DQN algorithm outperforms the all-local offloading algorithm, the random offloading and allocation algorithm, the Minimum Complete Time (MCT) algorithm, and the multi-platform offloading intelligent resource allocation algorithm, and its advantage is most pronounced under the new objective function, verifying the effectiveness of the proposed optimization method.
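The record above only summarizes the method; the Based DQN algorithm itself is not reproduced here. Purely as a hypothetical illustration of the Q-learning idea behind an offloading decision (a tabular toy with invented cost numbers, not the paper's deep network or its actual objective function), a minimal sketch might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVERS = 3                 # actions: 0 = execute locally, 1..3 = offload to server i
N_STATES = 4                  # toy discretized task-size levels (hypothetical)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table over (task-size level, offloading decision)
Q = np.zeros((N_STATES, N_SERVERS + 1))

def cost(state, action):
    # Invented stand-in for a completion-time-plus-energy objective:
    # local execution grows quickly with task size; offloading pays a
    # fixed transmission overhead but scales better.
    if action == 0:
        return 1.0 + state
    return 0.5 + 0.2 * state

def choose_action(state):
    # epsilon-greedy over Q-values (reward = negative cost)
    if rng.random() < EPS:
        return int(rng.integers(N_SERVERS + 1))
    return int(np.argmax(Q[state]))

for episode in range(2000):
    state = int(rng.integers(N_STATES))
    action = choose_action(state)
    reward = -cost(state, action)
    next_state = int(rng.integers(N_STATES))   # tasks arrive independently
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])

# Under these toy costs, the largest task should be offloaded, not run locally.
best = int(np.argmax(Q[N_STATES - 1]))
```

The paper's Based DQN replaces the table with a neural network and optimizes its own joint delay/energy objective over multiple users and servers; this sketch only shows the decision-learning loop in miniature.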
Authors
YANG Tian (杨天)
YANG Jun (杨军)
School of Information Engineering, Ningxia University, Yinchuan, Ningxia 750021, China
Source
《计算机工程》 (Computer Engineering)
CAS
CSCD
Peking University Core Journals (北大核心)
2021, No. 8, pp. 37-44 (8 pages)
Funding
Natural Science Foundation of Ningxia, "Research on Key Technologies of Large-Scale Wireless Sensor Networks Based on Edge Computing and Their Application in Characteristic Agriculture" (2020AAC03036).
Keywords
Mobile Edge Computing (MEC)
computing resource
delay constraint
offloading decision
resource allocation
Deep Reinforcement Learning (DRL)