
Adaptive Incentive Mechanism for Privacy Computing in Federated Compute First Networks (Cited by: 1)
Abstract: In view of the practical demands arising from "human-machine-thing" super-fusion and the vision of ubiquitous intelligent interconnection in the era of the Internet of Everything, federated compute first networks are regarded as a promising solution: they jointly leverage the data aggregation advantages of distributed intelligent technologies such as federated learning and the collaborative computing advantages of the "information high-speed rail" (i.e., the low-entropy compute first network). Federated compute first networks efficiently utilize the massive data and computing resources deployed ubiquitously and in fragmented form across the network, maximizing the fulfillment of diverse high-performance, intelligent computing task demands. At the same time, to establish full-life-cycle security guarantees for users engaged in ubiquitous collaborative computing, and to build mutual trust between distributed clients and aggregation servers within the federated compute first network, introducing privacy-preserving computing technologies such as differential privacy has become an essential requirement. Therefore, on the premise that users' security and privacy are protected against newly emerging threats such as model inversion and gradient leakage attacks, effectively incentivizing a large number of heterogeneous participants to take part actively and to share their local data and computing power truthfully is a key step toward the real-world deployment of federated computing tasks. However, current incentive mechanisms for federated compute first networks focus mainly on performance-oriented factors such as data quality evaluation and fairness, while paying little attention to user privacy requirements, and therefore cannot effectively regulate the privacy noise injection process during collaborative training and information sharing. Moreover, edge computing nodes, acting in their own interest, often exaggerate their local privacy budget demands; the resulting conflict between the unsuspecting aggregator and self-interested participants can cause severe redundant accuracy loss. To address this problem, this paper proposes an adaptive incentive approach for privacy computing in federated compute first networks based on an improved Stackelberg leader-follower game model. The method employs a two-stage dynamic game to offer differentiated pricing incentives according to the scale of privacy noise injected during distributed computing. Using backward induction, participating users first reach a game equilibrium that yields the optimal local privacy noise budget strategy, after which the federated parameter server determines the optimal privacy payment strategy. Theoretical analysis shows that the proposed scheme attains the optimal solution under Nash equilibrium. Furthermore, the paper discusses the constraints on participating users and derives an upper bound on their privacy cost requirements. Experimental results on public standard datasets such as EMNIST and CIFAR demonstrate that, compared with existing privacy incentive mechanisms based on contract theory or three-party games, the proposed method significantly improves the average utility of all parties in distributed intelligent collaborative computing tasks, enhancing computational performance while satisfying user privacy requirements and substantially reducing redundant loss.
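The two-stage Stackelberg pricing idea summarized in the abstract can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual formulation: the quadratic privacy-cost model, the logarithmic accuracy-gain term, and the grid search over prices are all illustrative assumptions. The server (leader) posts a unit price for contributed privacy budget; each user (follower) best-responds with the budget that maximizes its own utility; backward induction substitutes those best responses into the leader's objective to pick the price.

```python
import numpy as np

def follower_best_response(price, cost_coeff):
    # Follower utility: u_i(eps) = price * eps - c_i * eps^2 (assumed quadratic cost).
    # Maximizing over eps gives the closed-form best response eps_i* = price / (2 c_i).
    return price / (2.0 * cost_coeff)

def leader_utility(price, cost_coeffs, accuracy_weight=1.0):
    # Backward induction: plug every follower's best response into the
    # leader's objective (assumed log accuracy gain minus total payment).
    eps = follower_best_response(price, cost_coeffs)
    return accuracy_weight * np.sum(np.log1p(eps)) - price * np.sum(eps)

# Heterogeneous (personalized) privacy cost coefficients for three users.
c = np.array([0.5, 1.0, 2.0])

# Leader's first-stage move: search candidate unit prices for the maximizer.
prices = np.linspace(0.01, 2.0, 2000)
best_p = max(prices, key=lambda p: leader_utility(p, c))
best_eps = follower_best_response(best_p, c)

print("optimal unit price:", best_p)
print("equilibrium privacy budgets:", best_eps)
```

Note how the sketch reproduces the differentiated-pricing effect described above: users with lower privacy cost coefficients contribute larger privacy budgets (less noise) at the equilibrium price, while high-cost users inject more noise, rather than every user exaggerating a uniform budget demand.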
Authors: ZHOU Zan; ZHANG Xiao-Yan; YANG Shu-Jie; LI Hong-Jing; KUANG Xiao-Hui; YE He-Liang; XU Chang-Qiao (School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876; State Key Laboratory of Networking and Switching Technology, Beijing 100876; National Key Laboratory of Science and Technology on Information System Security, Beijing 100101; Research Institute of China Telecom Co., Ltd., Guangzhou 510630)
Source: Chinese Journal of Computers (《计算机学报》, indexed in EI, CAS, CSCD, PKU Core), 2023, No. 12, pp. 2705-2725 (21 pages)
Funding: Supported by the National Natural Science Foundation of China (Nos. 62225105, 62001057).
Keywords: federated compute first network; privacy-preserving computation; privacy pricing; personalized privacy; dynamic game; compute first network