Abstract
Mobile edge computing (MEC) can provide users with data processing services, but the computing resources of MEC servers are limited. How users offload tasks to MEC servers, and how the servers allocate resources to users according to task requirements, are therefore key factors in improving user-side energy efficiency. To address this problem, a deep deterministic policy gradient-based energy efficiency optimization (DDPG-EEO) algorithm is proposed. Under the premise of meeting the delay requirements, an optimization problem is formulated that maximizes energy efficiency over the task offloading rate and the resource allocation strategy. The problem is then modeled as a Markov decision process (MDP) and solved with the deep deterministic policy gradient method. Simulation results show that the DDPG-EEO algorithm reduces the energy consumption of user terminals (UTs) and improves the task completion rate.
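The trade-off described in the abstract can be illustrated with a toy partial-offloading model. This is a minimal sketch under common MEC modeling assumptions (quadratic-in-frequency local CPU energy, parallel local and remote execution); the function name, parameter values, and constants below are illustrative, not the paper's actual system model:

```python
# Toy partial-offloading energy/delay model (illustrative assumptions only).
def evaluate(offload_rate, f_local, f_mec, tx_rate, tx_power,
             task_bits, cycles_per_bit, kappa=1e-27, deadline=0.2):
    """Return (energy_J, delay_s, feasible) for one task.

    offload_rate  : fraction of the task sent to the MEC server (0..1)
    f_local, f_mec: CPU frequency (Hz) of the UT / the MEC resource share
    tx_rate       : uplink rate in bit/s; tx_power in W
    kappa         : effective switched-capacitance constant of the local CPU
    """
    local_bits = (1.0 - offload_rate) * task_bits
    mec_bits = offload_rate * task_bits
    # Local computing: energy kappa * f^2 per CPU cycle.
    t_local = local_bits * cycles_per_bit / f_local
    e_local = kappa * (f_local ** 2) * local_bits * cycles_per_bit
    # Offloading: uplink transmission, then remote execution.
    t_tx = mec_bits / tx_rate
    e_tx = tx_power * t_tx
    t_mec = mec_bits * cycles_per_bit / f_mec
    # Local execution and (transmit + remote execute) run in parallel.
    delay = max(t_local, t_tx + t_mec)
    energy = e_local + e_tx  # only UT-side energy counts toward efficiency
    return energy, delay, delay <= deadline

# Energy efficiency: completed task bits per joule of UT-side energy.
e, d, ok = evaluate(offload_rate=0.6, f_local=1e9, f_mec=5e9,
                    tx_rate=5e6, tx_power=0.5,
                    task_bits=1e6, cycles_per_bit=500)
eff = 1e6 / e if ok else 0.0
```

In the paper's setting, the offloading rate and the MEC resource share (`f_mec` here) are exactly the quantities the DDPG agent learns to choose so as to maximize this kind of bits-per-joule objective while the delay stays within the deadline.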
Author
陈卡
CHEN Ka (Zhumadian Vocational and Technical College, Zhumadian 463000, China)
Source
《火力与指挥控制》
Indexed in CSCD and the Peking University Chinese Core Journals list
2024, No. 7, pp. 44-49 (6 pages)
Fire Control & Command Control
Funding
Henan Province Science and Technology Research Project (212102210516)
Henan Province Soft Science Research Program Project (182400410608)
Henan Province Higher Education Teaching Reform Research and Practice Project (2021SJGLX865)
Keywords
mobile edge computing
task offloading
resource allocation
reinforcement learning
deep deterministic policy gradient