
Adversarial sample generation technology of malicious code based on LIME
Abstract: Based on research and analysis of machine-learning-based malicious code detection, a black-box adversarial sample generation method built on local interpretable model-agnostic explanations (LIME) is proposed. The method can generate adversarial samples against any black-box malicious code classifier and bypass detection by machine learning models. It uses a simple model to approximate the target classifier's local behavior and obtain feature weights, generates perturbations with a perturbation algorithm, and modifies the original malicious code according to these perturbations to produce adversarial samples. The method was evaluated on the common malicious sample dataset released by Microsoft in 2015 and on benign samples collected from more than 50 vendors: 18 target classifiers based on different algorithms or features were implemented with reference to common malicious code classifiers, and attacking them with the proposed method reduced their true positive rates to approximately zero. In addition, two state-of-the-art black-box adversarial sample generation methods, MalGAN and ZOO, were reproduced for comparison. The experimental results show that the proposed method effectively generates adversarial samples and offers broad applicability, flexible control of perturbations, and soundness.
Authors: HUANG Tianbo (黄天波), LI Chengyang (李成扬), LIU Yongzhi (刘永志), LI Denghui (李燈辉), WEN Weiping (文伟平), School of Software & Microelectronics, Peking University, Beijing 102600, China
Source: Journal of Beijing University of Aeronautics and Astronautics (《北京航空航天大学学报》), indexed in EI, CAS, CSCD, Peking University Core, 2022, Issue 2, pp. 331-338 (8 pages)
Funding: National Natural Science Foundation of China (61872011).
Keywords: adversarial samples; malicious code; machine learning; local interpretable model-agnostic explanations (LIME); target classifiers
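
The workflow described in the abstract (fitting a simple local surrogate, extracting feature weights, and perturbing the original sample) can be illustrated with a minimal sketch. It assumes a hypothetical black-box scoring function predict_proba that returns the malicious probability for a batch of binary feature vectors (for example, API-import indicators), and it uses an additive-only perturbation constraint as a stand-in for the paper's perturbation algorithm, which the abstract does not detail; the function names, thresholds, and budget parameter are illustrative, not the authors' implementation.

import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_weights(predict_proba, x, n_samples=2000, flip_prob=0.1, seed=0):
    # LIME-style step: fit a weighted linear surrogate around the binary
    # feature vector x to approximate the black-box malicious score locally.
    rng = np.random.default_rng(seed)
    mask = rng.random((n_samples, x.size)) < flip_prob   # which bits to flip
    Z = np.where(mask, 1 - x, x)                          # perturbed neighbours of x
    y = predict_proba(Z)                                  # black-box malicious scores
    dist = mask.sum(axis=1) / x.size                      # fraction of bits flipped
    weights = np.exp(-(dist ** 2) / 0.25)                 # closer neighbours count more
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_                                # per-feature local importance

def additive_adversarial(predict_proba, x, budget=10):
    # Perturbation step (illustrative constraint): only ADD features (0 -> 1)
    # whose local weight pushes the score toward "benign", so the original
    # malicious functionality is preserved.
    coef = local_surrogate_weights(predict_proba, x)
    adv = x.copy()
    for idx in np.argsort(coef):                          # most benign-leaning features first
        if budget == 0 or coef[idx] >= 0:
            break
        if adv[idx] == 0:
            adv[idx] = 1
            budget -= 1
            if predict_proba(adv[None, :])[0] < 0.5:      # classifier now labels it benign
                break
    return adv

In this sketch the surrogate's negative coefficients identify benign-leaning features, and setting a few of them to 1 mirrors the idea of modifying the malicious sample until the target classifier's score falls below its decision threshold.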