Abstract
The proliferation of artificial intelligence models has exposed them to a range of security threats. The large-scale deployment of deep learning models on edge devices introduces new security challenges. Because deep neural networks share similar structural characteristics, adversaries can employ decompilation techniques to extract a model's structure and parameters and thereby reconstruct the model. This compromises the model's intellectual property and exposes it to white-box attacks. Targeting the way model decompilers locate and identify model operators, acquire parameters, and parse network topology, an obfuscation framework embedded in the model compilation process was proposed to defend against model extraction attacks. During the frontend optimization phase of deep learning compilers, three obfuscation techniques were designed and implemented: operator obfuscation, parameter obfuscation, and network topology obfuscation. The framework constructs opaque predicates, inserts fake control flow, and adds redundant memory accesses to disrupt the reverse engineering performed by model decompilers. Experimental results show that the proposed framework, DNNobfus, reduces the accuracy of state-of-the-art model decompilation tools in identifying operator types and network connections to 21.63% and 48.24%, respectively. In addition, DNNobfus achieves an average time efficiency of 67.93% and an average space efficiency of 88.37%, outperforming the obfuscation tool Obfuscator-LLVM in both respects.
Authors
SONG Feiyang, ZHAO Xinmiao, YAN Fei, CHENG Binlin, ZHANG Liqiang, YANG Xiaolin, WANG Yang
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China
School of Cyber Science and Technology, Shandong University, Qingdao 266237, China
Inspur Intelligent Technology Company Limited, Jinan 250101, China
Inspur Academy of Science and Technology, Jinan 250101, China
Source
Chinese Journal of Network and Information Security (《网络与信息安全学报》)
2024, No. 2, pp. 143-153 (11 pages)
Funding
Major Research Plan of Hubei Province (No. 2023BAA027)
National Natural Science Foundation of China (No. 62172144)
National Key Research and Development Program of China (No. 2022YFB3103804)
Keywords
artificial intelligence security
code obfuscation
reverse engineering
model protection