Adversarial examples for poisoning attacks against federated learning
【Abstract】 Federated learning emerged to address the data privacy and data silo problems of traditional machine learning. Existing federated learning methods let multiple participants who do not share their private data jointly train a better global model. However, research shows that federated learning still suffers from many security problems; typically, malicious participants attack the system during training, causing the global model to fail and the participants' private data to leak. This paper studies how effectively adversarial examples can poison a federated learning system during training, in order to expose potential security problems in such systems. Although adversarial examples are usually used to attack machine learning models at test time, in this paper malicious participants use them to train their local models, so that the local models learn confused classification features and thereby produce malicious local model parameters. To let the malicious participants dominate the federated training process, we further apply a "learning rate amplification" strategy. Experiments show that, compared with the Fed-Deepconfuse attack, our attacks achieve better attack performance on both the CIFAR10 and MNIST datasets.
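A minimal sketch of the two mechanisms the abstract describes, written in PyTorch as an illustrative assumption rather than the authors' implementation: the malicious participant trains its local model on FGSM adversarial examples so the resulting update encodes confused class features, and then applies "learning rate amplification" by scaling the update before submitting it for FedAvg aggregation. The model, data, and the amplification factor `boost` are hypothetical choices for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Tiny MNIST-scale classifier used only to keep the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x.flatten(1))))

def fgsm_examples(model, x, y, eps=0.25):
    """Craft FGSM adversarial examples against the current global model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def malicious_client_update(global_model, x, y, boost=10.0, lr=0.1, epochs=1):
    """Train locally on adversarial examples, then amplify the update by `boost`."""
    local = SmallNet()
    local.load_state_dict(global_model.state_dict())
    x_adv = fgsm_examples(global_model, x, y)        # poisoned local training data
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(local(x_adv), y).backward()  # fit confused class features
        opt.step()
    # "Learning rate amplification": scale (local - global) so that this client's
    # contribution dominates the server-side FedAvg average.
    return {k: g + boost * (l - g)
            for (k, g), l in zip(global_model.state_dict().items(),
                                 local.state_dict().values())}

if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = SmallNet()
    x, y = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))
    poisoned = malicious_client_update(global_model, x, y)
    # A server would average `poisoned` with benign client updates via FedAvg.
    print({k: v.shape for k, v in list(poisoned.items())[:2]})
```

Under this sketch, the amplification factor plays the role of the paper's "learning rate amplification": without it, a single poisoned update is largely washed out by averaging with benign participants.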
- 【Source】 Scientia Sinica Informationis (中国科学:信息科学), 2023, No. 03
- 【CLC Number】 TP181; TP309