
Black-box Adversarial Attack Toward Speech Recognition System (cited by: 10)
Abstract: With the wide application of deep learning in speech recognition systems, especially in high-security settings such as autonomous driving and identity authentication, the security of these systems is crucial. While deep learning brings more convenient training procedures and higher recognition accuracy, it also introduces potential security risks. Recent studies have shown that deep neural networks are vulnerable to adversarial attacks, in which subtle perturbations added to the input data cause the model to output incorrect predictions. If a deep-learning-based speech recognition system is attacked with such subtle perturbations, an autonomous vehicle could be induced by malicious audio to perform dangerous operations, posing a serious safety hazard to the autonomous driving system. Targeting the security of speech recognition systems, this paper proposes a black-box adversarial attack method that uses the cuckoo search algorithm to automatically generate adversarial speech examples and achieve targeted attacks. The generated adversarial examples are then used to attack speech recognition systems, exposing security vulnerabilities even in current state-of-the-art systems. Extensive experiments on a public voice dataset, the Google Speech Commands dataset, the GTZAN dataset, and the LibriSpeech dataset verify the effectiveness of the proposed black-box attack. Furthermore, the generated adversarial examples are applied to attack other speech recognition systems to verify their strong attack transferability, and a subjective evaluation is conducted to explore their imperceptibility.
Authors: CHEN Jin-yin (陈晋音), YE Lin-hui (叶林辉), ZHENG Hai-bin (郑海斌), YANG Yi-tao (杨奕涛), YU Shan-qing (俞山青) (College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China)
Source: Journal of Chinese Computer Systems (《小型微型计算机系统》), indexed in CSCD and the Peking University Core Journals list, 2020, No. 5, pp. 1019-1029 (11 pages)
Funding: Supported by the Zhejiang Provincial Natural Science Foundation (LY19F020025), the Ningbo "Science and Technology Innovation 2025" Major Project (2018B10063), and the Zhejiang Engineering Research Center for Cognitive Healthcare (2018KFJJ07).
Keywords: speech recognition; deep learning; cuckoo search algorithm; adversarial attack; black-box attack
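The abstract gives no implementation details, but to make the query-based, cuckoo-search-driven attack it describes concrete, the following is a minimal, hypothetical sketch of a cuckoo-search-style targeted black-box attack on an audio classifier. Every name and setting here (query_fn, the L-infinity budget eps, the nest and iteration counts, the fitness definition) is an illustrative assumption, not the authors' actual method or parameters.

```python
# Minimal sketch of a cuckoo-search-style targeted black-box attack on an
# audio classifier. Hypothetical: query_fn, eps, and all hyper-parameters
# are illustrative assumptions, not the paper's actual settings.
import math
import numpy as np

def levy_flight(shape, beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, size=shape)
    v = np.random.normal(0.0, 1.0, size=shape)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_attack(audio, target_label, query_fn,
                  n_nests=20, n_iters=300, eps=0.01, pa=0.25, alpha=0.01):
    """audio: waveform in [-1, 1]; query_fn(x) -> class-probability vector
    returned by the black-box model (the only access the attacker has)."""
    # Each "nest" is a candidate perturbation kept inside an L-inf budget eps.
    nests = np.random.uniform(-eps, eps, size=(n_nests,) + audio.shape)

    def fitness(delta):
        probs = query_fn(np.clip(audio + delta, -1.0, 1.0))
        return probs[target_label]          # maximize target-class confidence

    scores = np.array([fitness(d) for d in nests])
    for _ in range(n_iters):
        best = nests[np.argmax(scores)]
        # Levy-flight moves biased toward the current best nest
        for i in range(n_nests):
            step = alpha * levy_flight(audio.shape) * (nests[i] - best)
            cand = np.clip(nests[i] + step, -eps, eps)
            s = fitness(cand)
            if s > scores[i]:
                nests[i], scores[i] = cand, s
        # Abandon a fraction pa of the worst nests and re-seed them randomly
        worst = np.argsort(scores)[: max(1, int(pa * n_nests))]
        nests[worst] = np.random.uniform(-eps, eps, size=(len(worst),) + audio.shape)
        scores[worst] = [fitness(d) for d in nests[worst]]
        if scores.max() > 0.9:              # stop once the target class dominates
            break
    return np.clip(audio + nests[np.argmax(scores)], -1.0, 1.0)
```

In the setting the abstract describes, query_fn would stand in for the target speech recognition system being queried, and the resulting adversarial audio could then be replayed against other recognizers to probe the transferability the paper evaluates.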