Abstract
Federated machine learning systems have gained increasing attention and adoption in both academia and industry because they can train a shared model among multiple parties without requiring those parties to share their training data. Such systems are believed to have good potential for protecting data privacy compared with traditional machine learning frameworks. On the other hand, training-time attacks deliberately perturb the training data in the hope of manipulating the prediction behavior of the resulting learned system at test time. DeepConfuse, for instance, is a recent method for efficiently generating adversarial training data, and it demonstrates the vulnerability of the traditional supervised learning paradigm to such attacks. In this work, we extend the DeepConfuse framework so that it can be applied to federated machine learning. This is the first training-time attack on a federated learning system. Empirical results show that, measured by δ-accuracy loss, federated learning systems are even more vulnerable to the DeepConfuse attack than traditional machine learning frameworks.
Authors
Ji FENG; Qi-Zhi CAI; Yuan JIANG (National Key Lab for Novel Software Technology, Nanjing University, Nanjing 210023, China; Sinovation Ventures AI Institute, Beijing 100080, China)
Source
Scientia Sinica (Informationis) (《中国科学:信息科学》)
Indexed in CSCD and the Peking University Core Journals list
2021, Issue 6, pp. 900-911 (12 pages)
Keywords
federated learning
learnware
representation learning