
Joint Optimization of Computation Offloading and Resource Allocation in Multi-constraint Edge Computing
Abstract: Mobile Edge Computing (MEC) deploys computation and storage resources at the network edge, so users can offload tasks from their mobile devices to nearby edge servers and obtain a low-latency, high-reliability service experience. However, due to dynamic system states and variable user demands, computation offloading and resource allocation in MEC environments face great challenges. Existing solutions typically depend on prior knowledge of the system and cannot adapt to dynamic MEC environments with multiple constraints, which leads to excessive delay and energy consumption. To address these challenges, we propose a novel method for Joint computation Offloading and resource Allocation with deep Reinforcement Learning (JOA-RL). For multi-user sequential tasks, JOA-RL generates appropriate computation offloading and resource allocation schemes according to the available computational resources and network conditions, improving the task execution success rate while reducing task execution delay and energy consumption. Meanwhile, JOA-RL incorporates a task-priority preprocessing mechanism that assigns priorities to tasks according to their data volume and the performance of the mobile devices. Extensive simulation experiments verify the feasibility and effectiveness of the proposed JOA-RL method. Compared with other benchmark methods, JOA-RL achieves a better balance between delay and energy consumption under the constraints of maximum tolerable task delay and device battery level, and shows a higher task execution success rate.
Authors: XIONG Bing; ZHANG Junjie; HUANG Sijin; CHEN Zheyi; YU Zhengxin; CHEN Xing (College of Computer and Data Science, Fuzhou University, Fuzhou 350116, China; Fujian Provincial Key Laboratory of Networking Computing and Intelligent Information Processing, Fuzhou 350116, China; School of Computing and Communications, Lancaster University, Lancaster LA1 4YW, UK)
Source: Journal of Chinese Computer Systems (CSCD, PKU Core), 2024, No. 2, pp. 405-412 (8 pages)
Funding: Central Government Guided Local Science and Technology Development Fund (2022L3004); National Natural Science Foundation of China (62072108); Fujian Provincial Natural Science Foundation for Distinguished Young Scholars (2020J06014); Fujian Provincial Department of Finance Special Research Fund (83021094)
Keywords: mobile edge computing; computation offloading; resource allocation; multi-constraint optimization; deep reinforcement learning
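The abstract mentions a task-priority preprocessing mechanism that ranks tasks by their data volume and the performance of the originating mobile device. The sketch below is only an illustrative assumption of how such a ranking could work; the scoring formula, field names, and units are hypothetical and are not taken from the paper itself.

```python
# Hypothetical sketch of priority preprocessing: tasks with more input data
# on weaker devices are assumed to be more urgent. The heuristic
# (data volume / device CPU frequency) is an illustrative assumption only.
from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    data_volume_mb: float   # input data size of the task (assumed unit: MB)
    device_cpu_ghz: float   # CPU frequency of the originating mobile device

def priority_score(task: Task) -> float:
    """Larger data on a weaker device -> higher priority (assumed heuristic)."""
    return task.data_volume_mb / task.device_cpu_ghz

def prioritize(tasks: list[Task]) -> list[Task]:
    # Sort so that the most constrained tasks are scheduled first.
    return sorted(tasks, key=priority_score, reverse=True)

tasks = [Task(1, 50.0, 2.0), Task(2, 10.0, 1.0), Task(3, 80.0, 2.5)]
print([t.task_id for t in prioritize(tasks)])  # -> [3, 1, 2]
```

A real implementation would feed this ordering into the offloading scheduler so that high-priority tasks are matched to edge resources before their deadlines expire.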