Abstract
Analyzing intrusion intentions and penetration behaviors from the attacker's perspective is of great significance for guiding network security defense. However, most existing penetration paths are constructed from an instantaneous snapshot of the network environment, which reduces their reference value. To address this problem, this paper proposes an optimal penetration path generation method based on maximum entropy reinforcement learning, which can capture near-optimal behaviors of multiple modes through exploration in a dynamically changing network environment. First, the penetration process is modeled from the attack graph and vulnerability scores, and the threat degree of penetration behavior is characterized by quantifying the attack benefit. Then, considering the complexity of intrusion behavior, a soft Q-learning method based on the maximum entropy model is developed; the stability of the penetration path solving process is ensured by controlling the relative importance of the entropy term and the reward. Finally, the method is applied to a dynamically changing test environment to generate highly available penetration paths. Simulation results show that, compared with existing baseline methods based on reinforcement learning, the proposed method has stronger environmental adaptability and can generate higher-benefit penetration paths at lower cost.
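The core mechanism named in the abstract, soft Q-learning under the maximum entropy model, can be sketched in tabular form. The toy 3-node attack graph, its transitions, and the reward values below are hypothetical illustrations (not the paper's actual environment or benefit model); the temperature `alpha` plays the role described in the abstract of trading off the entropy term against the reward.

```python
import numpy as np

def soft_value(q_row, alpha):
    # Soft state value V(s) = alpha * log sum_a exp(Q(s,a)/alpha),
    # computed with the numerically stable log-sum-exp trick.
    z = q_row / alpha
    m = np.max(z)
    return alpha * (m + np.log(np.sum(np.exp(z - m))))

def soft_policy(q_row, alpha):
    # Maximum-entropy policy: pi(a|s) proportional to exp(Q(s,a)/alpha).
    z = q_row / alpha
    z = z - np.max(z)
    p = np.exp(z)
    return p / p.sum()

# Hypothetical attack graph with 3 host states; state 2 is the goal
# (absorbing). transition[s][a] gives the next state reached by action a.
n_states, n_actions = 3, 2
transition = {0: {0: 1, 1: 2}, 1: {0: 2, 1: 0}, 2: {0: 2, 1: 2}}
# Assumed "attack benefit" rewards for each (state, action) pair.
reward = {(0, 0): 1.0, (0, 1): 5.0, (1, 0): 3.0, (1, 1): 0.0,
          (2, 0): 0.0, (2, 1): 0.0}

gamma, alpha = 0.9, 0.5  # discount factor and entropy temperature
Q = np.zeros((n_states, n_actions))
for _ in range(200):  # soft Bellman backups: Q(s,a) <- r + gamma * V(s')
    Q_new = np.empty_like(Q)
    for s in range(n_states):
        for a in range(n_actions):
            Q_new[s, a] = reward[(s, a)] + gamma * soft_value(Q[transition[s][a]], alpha)
    Q = Q_new

pi0 = soft_policy(Q[0], alpha)
print(pi0)  # stochastic policy at the entry state: both paths keep nonzero mass
```

Because the policy is a softmax rather than an argmax, lower-benefit penetration paths retain nonzero probability, which is what lets the method keep multiple behavior modes alive as the environment changes; a smaller `alpha` makes the policy greedier, a larger one more exploratory.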
Authors
WANG Yan; WANG Tianjing; SHEN Hang; BAI Guangwei (College of Computer Science and Technology, Nanjing Tech University, Nanjing 211816, China)
Source
Computer Science (《计算机科学》)
CSCD
Peking University Core Journal (北大核心)
2024, No. 3, pp. 360-367 (8 pages)
Funding
National Natural Science Foundation of China (61502230, 61501224)
Natural Science Foundation of Jiangsu Province (BK20201357)
Jiangsu Province "Six Talent Peaks" High-Level Talent Project (RJFW-020)
Keywords
Maximum entropy reinforcement learning
Attack graph
Soft Q-learning
Penetration path