
Micro Base Station Sleeping Cycle Determination Strategy Based on Partially Observed Markov Decision Process Traffic Aware (Cited by 2)
Abstract: To address the difficulty of determining base station sleeping cycles caused by traffic uncertainty in dense network scenarios, this paper proposes a micro base station sleeping cycle determination strategy based on Partially Observed Markov Decision Process (POMDP) traffic awareness. The strategy divides sleeping cycles into long cycles and short cycles, each consisting of a light stage and a deep stage. The traffic state arriving at the base station is sensed through the POMDP, and the cycle length is adjusted dynamically so that a length suited to the current traffic conditions is selected. Simulation results show that the strategy can determine the micro base station switch-off duration in advance based on the sensed traffic, and achieves better energy savings than a switch-off mechanism based on traffic threshold values.
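The core mechanism the abstract describes, maintaining a belief over the unobserved traffic state and picking a long or short sleeping cycle accordingly, can be sketched as follows. This is an illustrative sketch only: the paper's actual state space, observation model, transition probabilities, and decision rule are not given in the abstract, so all names and numbers below are assumptions.

```python
import numpy as np

# Hidden traffic states: 0 = light traffic, 1 = heavy traffic (assumed two-state model).
# T[s, s'] = P(next state s' | current state s) — illustrative values.
T = np.array([[0.8, 0.2],
              [0.3, 0.7]])
# O[s, o] = P(observation o | state s); o = 0 means few arrivals seen, o = 1 many.
O = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def belief_update(b, o):
    """One-step POMDP belief update: predict with T, then correct with observation o."""
    predicted = b @ T                  # prior over the next traffic state
    posterior = predicted * O[:, o]    # weight by observation likelihood
    return posterior / posterior.sum()

def choose_cycle(b, threshold=0.5):
    """Pick a long sleeping cycle when belief in light traffic is high, else short."""
    return "long" if b[0] >= threshold else "short"

b = np.array([0.5, 0.5])               # start from an uninformative belief
for o in [0, 0, 0]:                    # three consecutive 'few arrivals' observations
    b = belief_update(b, o)
print(choose_cycle(b))                 # belief shifts toward light traffic -> "long"
```

In this toy version, repeated low-traffic observations sharpen the belief that traffic is light, which justifies committing to a longer switch-off in advance; the paper's strategy additionally structures each cycle into light and deep stages, which this sketch does not model.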
Source: Journal of Electronics & Information Technology (《电子与信息学报》; EI, CSCD, Peking University core journal), 2018, No. 1, pp. 130-136.
Funding: National High Technology Research and Development Program of China (2014AA01A701); National Natural Science Foundation of China (61571073).
Keywords: Dense network; Sleeping strategy; Partially Observed Markov Decision Process (POMDP); Traffic awareness; Long/short sleeping cycle; Dynamic adjustment

