Abstract
To meet the real-time requirements of edge computing applications, software-defined networking (SDN) and network function virtualization (NFV) technologies are introduced to reconstruct the edge computing network. On this basis, we consider the online allocation of computing and communication resources, aiming to maximize the long-term average success rate of real-time task processing. By formulating the problem as a Markov decision process, an online resource allocation method based on Q-learning is proposed. However, Q-learning consumes a large amount of memory when the state-action space is large and suffers from the curse of dimensionality; therefore, a DQN-based online resource allocation method is further proposed. Simulation results show that both proposed algorithms converge quickly, and the DQN algorithm achieves a higher real-time task processing success rate than Q-learning and the other baseline methods.
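The abstract refers to a tabular Q-learning update over an MDP whose exact state, action, and reward definitions are given only in the full paper. As a minimal, hedged sketch of that update, the code below assumes a discretized state space (e.g., queue and resource occupancy levels) and a finite set of candidate allocation actions; the space sizes, hyperparameters, and the deadline-based reward are all illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

# Hypothetical sizes: the paper does not specify the state/action
# space dimensions, so these values are purely illustrative.
N_STATES = 64    # e.g., discretized queue/resource occupancy levels
N_ACTIONS = 8    # e.g., candidate (CPU, bandwidth) allocation pairs

ALPHA = 0.1      # learning rate
GAMMA = 0.9      # discount factor
EPSILON = 0.1    # exploration probability

Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(state: int) -> int:
    """Epsilon-greedy selection over allocation actions."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard tabular Q-learning update. In this setting the reward
    would encode whether the real-time task met its deadline."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])
```

The DQN variant described in the abstract would replace the table `Q` with a neural network approximating Q-values, which is what removes the memory blow-up when the state-action space grows large.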
Authors
LI Yan-jun, JIANG Hua-tong, GAO Mei-hui (School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China)
Source
Control and Decision, 2022, No. 11, pp. 2880-2886 (7 pages)
Indexed in: EI, CSCD, Peking University Core Journals
Funding
National Natural Science Foundation of China (61772472); Zhejiang Provincial Natural Science Foundation (LZ21F020005); Fundamental Research Funds for the Provincial Universities of Zhejiang (RF-A2019002)
Keywords
edge computing
resource allocation
real-time task
Markov decision process
Q-learning
deep reinforcement learning