面向联邦学习的对抗样本投毒攻击

Adversarial examples for poisoning attacks against federated learning

【Authors】 Bo WANG (王波); Xiaorui DAI (代晓蕊); Wei WANG (王伟); Fei YU (于菲); Fei WEI (魏飞); Mengnan ZHAO (赵梦楠)

【Corresponding Author】 Wei WANG (王伟)

【Affiliations】 School of Information and Communication Engineering, Dalian University of Technology; Intelligent Perception and Computing Research Center, Institute of Automation, Chinese Academy of Sciences; Department of Electrical Engineering, Arizona State University

【Abstract】 Federated learning emerged to address the data privacy and data silo problems of traditional machine learning: multiple participants who do not share their private data jointly train a better global model. However, research shows that federated learning still has many security problems. Typically, it can be attacked by malicious participants during training, causing the global model to fail and the participants' private data to leak. This paper studies the effectiveness of poisoning attacks that use adversarial examples against federated learning during the training stage, in order to expose potential security problems in federated learning systems. Although adversarial examples are usually used to attack machine learning models at test time, in this paper malicious participants use adversarial examples to train their local models, aiming to make the local models learn chaotic classification features and thereby produce malicious local model parameters. To let the malicious participants dominate the federated training process, we further use a "learning rate amplification" strategy. Experiments show that, compared with the Fed-Deepconfuse attack method, our attack achieves better attack performance on both the CIFAR10 and MNIST datasets.
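The abstract describes the attack only at a high level. As an illustration of the two ideas it mentions (training a malicious participant's local model on adversarial examples, then amplifying the resulting update so it dominates server-side aggregation), below is a minimal PyTorch-style sketch. The perturbation method (FGSM), the use of the original labels during poisoned training, the FedAvg-style update format, and names such as fgsm_examples, epsilon, and amplification are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an adversarial-example poisoning client in federated learning.
# Assumptions (not from the paper): FGSM perturbations, original labels during
# poisoned training, FedAvg-style aggregation, and a scalar amplification factor.
import copy

import torch
import torch.nn.functional as F


def fgsm_examples(model, x, y, epsilon=0.1):
    """Craft adversarial versions of a clean batch against the current model."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


def malicious_local_update(global_model, loader, lr=0.01, amplification=10.0):
    """Train a local copy on adversarial examples, then scale the update."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for x, y in loader:
        x_adv = fgsm_examples(model, x, y)            # poisoned training inputs
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()   # learn misleading features
        optimizer.step()
    # "Learning rate amplification": exaggerate the malicious update so that it
    # dominates the server's averaging of participant updates.
    return {
        name: amplification * (p_mal.detach() - p_glob.detach())
        for (name, p_mal), (_, p_glob) in zip(
            model.named_parameters(), global_model.named_parameters()
        )
    }
```

A benign participant would return the unscaled parameter difference; in this sketch the amplification factor is what lets a single malicious participant outweigh the other clients during averaging.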

【Funding】 Supported by the National Natural Science Foundation of China (Grant Nos. U1936117, 62106037, 62076052), the Dalian Science and Technology Innovation Fund Applied Basic Research Project (Grant No. 2021JJ12GX018), the Open Research Fund of the National Laboratory of Pattern Recognition (Grant No. 202100032), and the Fundamental Research Funds for the Central Universities (Grant No. DUT21GF303)
  • 【Source】 Scientia Sinica Informationis (中国科学:信息科学), 2023, No. 03
  • 【CLC Number】 TP181; TP309
  • 【Downloads】 213