
Towards training time attacks for federated machine learning systems
Abstract: Federated machine learning systems have gained increasing attention and adoption in both academia and industry because they can train a shared model among multiple parties without requiring the parties to share their training data. Such systems are believed to have good potential for protecting data privacy compared with traditional machine learning frameworks. On the other hand, a training time attack is a procedure that purposefully perturbs training data in the hope of manipulating the prediction behavior of the corresponding trained system at test time. DeepConfuse, for instance, is a recent method for efficiently generating adversarial training data, and it demonstrates the vulnerability of the conventional supervised learning paradigm to such attacks. In this work, the authors extend the DeepConfuse framework so that it applies to federated machine learning. This is the first training time attack against a federated learning system. Empirical results show that, measured by δ-accuracy loss, the federated learning system is even more vulnerable to the DeepConfuse attack than the traditional machine learning framework.
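For context on the metric named above: δ-accuracy loss is the drop in test accuracy caused by training on perturbed data instead of clean data. Below is a minimal, hypothetical Python sketch of that measurement; the toy dataset, the logistic-regression learner, and the bounded-noise stand-in for the DeepConfuse generator are all illustrative assumptions, not the paper's actual attack or experimental setup.

```python
# Hypothetical sketch: measure delta-accuracy loss by training one model on
# clean data and one on perturbed data, then comparing test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def delta_accuracy_loss(clean_acc: float, attacked_acc: float) -> float:
    """Accuracy drop caused by training on the perturbed data."""
    return clean_acc - attacked_acc


# Toy data standing in for a party's local training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for a DeepConfuse-style perturbation: the real method trains a
# noise generator; here we simply inject bounded random noise for illustration.
rng = np.random.default_rng(0)
X_poisoned = X_train + 0.5 * rng.standard_normal(X_train.shape)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
attacked_model = LogisticRegression(max_iter=1000).fit(X_poisoned, y_train)

clean_acc = clean_model.score(X_test, y_test)
attacked_acc = attacked_model.score(X_test, y_test)
print(f"delta-accuracy loss: {delta_accuracy_loss(clean_acc, attacked_acc):.3f}")
```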
Authors: Ji FENG, Qi-Zhi CAI, Yuan JIANG (National Key Lab for Novel Software Technology, Nanjing University, Nanjing 210023, China; Sinovation Ventures AI Institute, Beijing 100080, China)
Source: Scientia Sinica (Informationis) (《中国科学:信息科学》), 2021, No. 6, pp. 900-911 (12 pages). Indexed by CSCD and the Peking University Core Journals list.
Keywords: federated learning; learnware; representation learning

