An Intelligent Cloud Service Model for the Tactical Edge (一种面向战术边缘的智能云服务模型)
Cited by: 1
Abstract  When combat missions are carried out in a tactical edge environment, forces face scarce resources, weak data processing capability, and long command and communication delays. To address these problems, emerging edge computing technology and artificial intelligence algorithms are introduced and an intelligent cloud service model for the tactical edge is proposed, extending powerful cloud service capabilities into the tactical edge environment and providing battlefield end users with fast, stable, and efficient information services and data processing. The model is described in terms of its framework, functional services, command and control, and related technologies, and is then verified and analyzed through a simulation example.
Authors  ZHENG Huiji; QIU Xinyuan; YU Sicong; CUI Xiaolong (Engineering University of PAP, Xi’an 710086, China; Counter-Terrorism Command Information Engineering Research Team, Engineering University of PAP, Xi’an 710086, China)
Source  Fire Control & Command Control (《火力与指挥控制》), CSCD, Peking University Core Journal, 2023, No. 6, pp. 7-13 (7 pages)
Funding  National Natural Science Foundation of China (U1603261); Network-Information Integration Fund project (网信融合基金, LXJH-10(A)-09)
Keywords  tactical edge; cloud service; edge computing; artificial intelligence
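
The record itself carries no implementation detail beyond the abstract. As a rough illustration of the task-placement idea the abstract points to (extending cloud service capability toward the tactical edge so that battlefield terminals get low-latency processing), below is a minimal Python sketch. Every site name, number, and the latency-only scoring rule is an illustrative assumption, not the authors' model.

# Hypothetical sketch (not from the paper): choose where a battlefield
# terminal should run a task -- locally, on a tactical edge node, or in the
# rear cloud -- by estimated completion time. All figures are made up.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    cpu_ops_per_s: float  # compute available at this site
    uplink_mbps: float    # link bandwidth from the terminal to this site
    rtt_ms: float         # round-trip propagation delay to this site

@dataclass
class Task:
    ops: float            # compute demand, in operations
    input_mb: float       # input data that must be shipped to the site

def estimated_latency_ms(task: Task, site: Site) -> float:
    """Transfer time + queue-free compute time + propagation delay."""
    transfer_ms = task.input_mb * 8.0 / site.uplink_mbps * 1000.0
    compute_ms = task.ops / site.cpu_ops_per_s * 1000.0
    return transfer_ms + compute_ms + site.rtt_ms

def place(task: Task, sites: list[Site]) -> Site:
    """Pick the site with the lowest estimated completion time."""
    return min(sites, key=lambda s: estimated_latency_ms(task, s))

if __name__ == "__main__":
    terminal = Site("terminal", cpu_ops_per_s=1e9,  uplink_mbps=float("inf"), rtt_ms=0.0)
    edge     = Site("edge",     cpu_ops_per_s=2e10, uplink_mbps=50.0,         rtt_ms=10.0)
    cloud    = Site("cloud",    cpu_ops_per_s=1e12, uplink_mbps=5.0,          rtt_ms=120.0)
    task = Task(ops=5e10, input_mb=20.0)
    for s in (terminal, edge, cloud):
        print(f"{s.name:8s} ~{estimated_latency_ms(task, s):9.1f} ms")
    print("chosen:", place(task, [terminal, edge, cloud]).name)

Under these assumed numbers the edge node wins at roughly 5.7 s: shipping 20 MB over a 50 Mb/s tactical link plus 2.5 s of edge compute beats both 50 s of local compute on the terminal and the roughly 32 s needed to push the same data back to the rear cloud, which is the latency argument the abstract makes for moving cloud service capability to the edge.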
