Robustness Verification Method for Artificial Intelligence Systems Based on Source Code Processing

Cited by: 1
Abstract  The development of artificial intelligence (AI) technology provides strong support for AI systems based on source code processing. Compared with natural language, source code occupies a distinctive semantic space, so machine learning tasks related to source code processing usually employ abstract syntax trees, data dependency graphs, and control flow graphs to obtain structured information about the code and extract features. Through in-depth analysis of source code structures and flexible application of classifiers, existing studies have achieved excellent results in experimental settings. However, in real-world applications where source code structures are more complex, the performance of most source-code-processing AI systems degrades sharply, making them difficult to deploy in industry, which has prompted practitioners to reconsider the robustness of such AI systems. Because systems built on AI technology are generally data-driven black boxes, it is difficult to measure their robustness directly. With the rise of adversarial attack techniques, researchers in natural language processing have designed task-specific adversarial attacks to verify model robustness and have conducted large-scale empirical studies. To address the instability of source-code-processing AI systems on complex code, this study proposes a robustness verification method based on the Metropolis-Hastings attack (robustness verification by Metropolis-Hastings attack method, RVMHM). First, a code preprocessing tool based on abstract syntax trees extracts the model's variable pool; the MHM source code attack algorithm then substitutes variables to perturb the model's predictions. By interfering with the interaction between data and model, the robustness of the AI system is measured as the change in the robustness verification metric before and after the attack. Taking vulnerability prediction as a typical binary classification scenario for source code processing, this study verifies the robustness of 12 groups of AI vulnerability prediction models on datasets from three open source projects, demonstrating the effectiveness of RVMHM for robustness verification of AI systems in source code processing scenarios.
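The two-step procedure in the abstract (extract a variable pool from the abstract syntax tree, then run a Metropolis-Hastings-style variable-substitution attack and watch the classifier's score move) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the stand-in `score` classifier, the candidate-name list, and the acceptance rule are assumptions, and the toy operates on Python source via the standard `ast` module rather than on the C/Java vulnerability-prediction models studied in the paper.

```python
import ast
import math
import random

def variable_pool(source: str) -> set:
    """Collect identifier names appearing in the AST
    (a rough stand-in for the paper's variable pool)."""
    pool = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            pool.add(node.id)
    return pool

def rename(source: str, old: str, new: str) -> str:
    """Rename one identifier everywhere it occurs, via the AST."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id == old:
            node.id = new
    return ast.unparse(tree)

def mh_attack(source, score, candidates, steps=30, temperature=1.0, seed=0):
    """Metropolis-Hastings loop: propose a renaming and accept it with
    probability min(1, exp((old_score - new_score) / T)), so proposals
    that lower the classifier's confidence are always kept and proposals
    that raise it are kept only occasionally."""
    rng = random.Random(seed)
    current, cur_score = source, score(source)
    for _ in range(steps):
        pool = sorted(variable_pool(current))
        if not pool:
            break
        old = rng.choice(pool)
        new = rng.choice(candidates)
        if new in pool:          # avoid capturing an existing name
            continue
        proposal = rename(current, old, new)
        new_score = score(proposal)
        accept = math.exp(min(0.0, (cur_score - new_score) / temperature))
        if rng.random() < accept:
            current, cur_score = proposal, new_score
    return current, cur_score
```

The robustness measurement then reduces to comparing the model's verification metric on the original inputs against the same metric on the perturbed outputs of `mh_attack`: a robust model's predictions should be largely insensitive to semantics-preserving renamings.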
Authors  杨焱景 (YANG Yan-Jing); 毛润丰 (MAO Run-Feng); 谭睿 (TAN Rui); 沈海峰 (SHEN Hai-Feng); 荣国平 (RONG Guo-Ping) (Software Institute, Nanjing University, Nanjing 210093, China; Discipline of Information Technology, Peter Faber Business School, Australian Catholic University, Sydney NSW 2060, Australia)
Source  Journal of Software (《软件学报》; indexed in EI, CSCD, PKU Core), 2023, No. 9: 4018-4036 (19 pages)
Funding  National Natural Science Foundation of China (62072227, 62202219); National Key R&D Program of China (2019YFE0105500); Key R&D Program of Jiangsu Province (BE2021002-2); Innovation Project of the State Key Laboratory for Novel Software Technology, Nanjing University (ZZKT2022A25); Overseas Open Research Fund (KFKT2022A09)
Keywords  source code structure analysis; source code adversarial attack; AI system robustness verification