
The Safety Evaluation and Defense Reinforcement of the AI System (AI系统的安全测评和防御加固方案)

Cited by: 2
Abstract: Deep learning models have performed well on many AI tasks, but carefully crafted adversarial examples can trick well-trained models into making false judgments. The success of adversarial attacks calls the usability of AI systems into question. To improve security and robustness, this paper follows the security development lifecycle and proposes a security evaluation and defense reinforcement scheme for AI systems. Through measures such as accurate detection and interception of adversarial attacks, scientific evaluation of model robustness, and real-time monitoring of new adversarial attacks, the scheme improves a system's ability to resist adversarial attacks and helps developers build more secure AI systems.
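The abstract's central premise is that a small, deliberate perturbation can flip a trained model's decision. The paper does not specify an attack method here, so the following is only a minimal sketch of that idea using the well-known Fast Gradient Sign Method (FGSM) on a toy linear classifier; the model, weights, and `epsilon` value are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "trained" linear classifier: score = w.x + b, predict class 1 if score > 0.
w = rng.normal(size=16)
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input the model confidently classifies as class 1
# (aligned with w, so the score equals ||w|| + b > 0).
x = w / np.linalg.norm(w)

# FGSM step: for a linear model the gradient of the score w.r.t. x is w,
# so subtracting epsilon * sign(w) lowers the score as fast as possible
# under a max-perturbation-per-feature budget of epsilon.
epsilon = 1.0
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the perturbed input flips the decision
```

The perturbation is bounded coordinate-wise by `epsilon`, yet it moves the score by `epsilon * sum(|w_i|)`, which for a dense weight vector far exceeds the clean score; this asymmetry is exactly why the abstract argues that detection, interception, and robustness evaluation must be built into the development lifecycle rather than bolted on afterward.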
Authors: WANG Wenhua; HAO Xin; LIU Yan; WANG Yang (Baidu Security Department, Beijing 100085, China)
Affiliation: Baidu Security
Source: Netinfo Security (《信息网络安全》), CSCD, Peking University Core Journal, 2020, Issue 9, pp. 87-91 (5 pages)
Keywords: deep learning; adversarial attack; security development lifecycle

Co-cited references: 11
Citing articles: 2
Second-level citing articles: 7
