
Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors

Abstract: Artificial neural networks can achieve impressive performance, and even outperform humans on some specific tasks. Nevertheless, unlike biological brains, artificial neural networks suffer from tiny perturbations of the sensory input under various kinds of adversarial attacks. It is therefore necessary to study the origin of this adversarial vulnerability. Here, we establish a fundamental relationship between the geometry of hidden representations (the manifold perspective) and the generalization capability of deep networks. For this purpose, we choose a deep neural network trained by local errors, and then analyze emergent properties of the trained network through its manifold dimensionality, manifold smoothness, and generalization capability. To explore the effects of adversarial examples, we consider independent Gaussian noise attacks and fast-gradient-sign-method (FGSM) attacks. Our study reveals that high generalization accuracy requires a relatively fast power-law decay of the eigen-spectrum of hidden representations. Under Gaussian attacks, the relationship between generalization accuracy and the power-law exponent is monotonic, while a non-monotonic behavior is observed for FGSM attacks. Our empirical study provides a route towards a mechanistic interpretation of adversarial vulnerability under adversarial attacks.
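The abstract rests on two measurable quantities: the power-law exponent of the eigen-spectrum of hidden representations, and adversarial inputs produced by Gaussian noise or FGSM. The sketch below illustrates how such quantities are commonly computed. It is a minimal illustration under our own assumptions (toy linear model, fit over the full eigenvalue range, logits used as "hidden" activations), not the authors' code.

```python
# Minimal sketch (assumed toy setup, not the authors' code): estimating the
# power-law exponent of the hidden-representation eigen-spectrum, and crafting
# Gaussian-noise and FGSM adversarial inputs.
import numpy as np
import torch
import torch.nn.functional as F

def spectral_exponent(hidden):
    """Fit lambda_i ~ i^(-alpha) to the covariance eigen-spectrum of a
    (num_samples, num_units) array of hidden activations; return alpha."""
    h = hidden - hidden.mean(axis=0)
    cov = h.T @ h / h.shape[0]
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    eig = eig[eig > 1e-12]                        # drop numerically-zero modes
    ranks = np.arange(1, eig.size + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(eig), 1)
    return -slope                                 # larger alpha = faster decay

def gaussian_attack(x, sigma):
    """Independent Gaussian noise attack on the input."""
    return x + sigma * torch.randn_like(x)

def fgsm_attack(model, x, y, eps):
    """One-step fast gradient sign method: x_adv = x + eps * sign(dL/dx)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Toy usage; the model, input shapes, and eps/sigma values are assumptions.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
x_adv = fgsm_attack(model, x, y, eps=0.1)
hidden = model(x).detach().numpy()  # any layer's activations would do here
print("power-law exponent:", spectral_exponent(hidden))
```

Per the abstract, a larger fitted exponent (faster eigenvalue decay with rank) accompanies higher generalization accuracy under Gaussian attacks, while the relationship is non-monotonic under FGSM attacks.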
Authors: Zijian Jiang, Jianwen Zhou, Haiping Huang (PMI Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, China)
Affiliation: PMI Laboratory
Source: Chinese Physics B (SCIE, EI, CAS, CSCD), 2021, Issue 4, pp. 25-32 (8 pages)
Funding: Project supported by the National Key R&D Program of China (Grant No. 2019YFA0706302), the start-up budget 74130-18831109 of the 100-talent program of Sun Yat-sen University, and the National Natural Science Foundation of China (Grant No. 11805284).