Funding: supported by the National Natural Science Foundation of China (60674069, 60574056).
Abstract: To improve the agility, dynamics, composability, reusability, and development efficiency restricted by the monolithic federation object model (FOM), a modular FOM has been proposed by the high level architecture (HLA) Evolved product development group. This paper reviews the state of the art of the HLA Evolved modular FOM. In particular, related concepts, the overall impact on the HLA standards, extension principles, and merging processes are discussed. Permitted and restricted combinations and the merging rules are also provided, and the influence on the HLA interface specification is described. A comparison between the modular FOM and the base object model (BOM) is performed to illustrate the importance of their combination. Applications of the modular FOM are summarized. Finally, the significance for facilitating composable simulation in both academia and practice is presented, and future directions are pointed out.
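To make the idea of FOM module merging and restricted combinations concrete, the following is a minimal sketch, not the standard's algorithm: it assumes a simplified dictionary representation in which each module maps object class names to attribute definitions, and the `merge_fom_modules` helper and the example module contents are hypothetical. The full IEEE 1516-2010 OMT merging rules cover far more tables and cases than this.

```python
# Minimal sketch of merging FOM modules, assuming a simplified
# representation: each module maps object class names to a dict of
# attribute definitions. It only illustrates the basic idea that
# classes may be combined across modules, while conflicting
# redefinitions of the same attribute are a restricted combination.

def merge_fom_modules(*modules):
    merged = {}
    for module in modules:
        for class_name, attributes in module.items():
            target = merged.setdefault(class_name, {})
            for attr_name, definition in attributes.items():
                if attr_name in target and target[attr_name] != definition:
                    # Restricted combination: the same attribute is
                    # defined differently in two modules.
                    raise ValueError(
                        f"conflicting definition of {class_name}.{attr_name}")
                target[attr_name] = definition
    return merged


# Hypothetical modules: a base module, and an extension module that
# adds attributes to an existing object class plus a new class.
base_module = {"Vehicle": {"Position": "LocationStruct"}}
extension_module = {"Vehicle": {"Velocity": "VelocityStruct"},
                    "Sensor": {"Range": "HLAfloat64BE"}}

print(merge_fom_modules(base_module, extension_module))
```

Running the sketch yields a single merged class table; swapping the extension module's `Position` definition for a different data type would trigger the restricted-combination error, which is the behavior the merging rules are meant to police.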
Abstract: Training a high-accuracy federated learning model consumes a large amount of the participants' local resources, and a participant can obtain illegal profit by privately selling the jointly trained model. To protect the ownership of federated learning models, this paper exploits the property of deep learning backdoors, which leave main-task accuracy unaffected while causing misclassification only on a small trigger set, to construct a federated learning watermark based on backdoor (FLWB) scheme. The scheme allows each participating user to embed a private watermark in its local model; the cloud-side model aggregation then maps these private backdoor watermarks into the global model as the global watermark of the federated learning system. A stepwise training method is further proposed to strengthen the expression of each private backdoor watermark in the global model, so that the FLWB scheme can accommodate the private watermarks of all participants without degrading the accuracy of the global model. Theoretical analysis proves the security of the FLWB scheme, and experiments verify that the stepwise training method lets the global model effectively accommodate the participants' private watermarks at the cost of only a 1% loss in main-task accuracy. Finally, model compression and model fine-tuning attacks are applied to test the FLWB scheme. The results show that more than 80% of the watermark is retained when the model is compressed to 30%, and more than 90% is retained under four different fine-tuning attacks, demonstrating good robustness.
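To illustrate the embed-then-aggregate idea described above, here is a minimal NumPy sketch, not the authors' implementation: each client trains a softmax-regression model on its main-task data plus a private trigger set relabeled to a chosen target class, the server averages the local weights (FedAvg), and watermark verification measures accuracy on each trigger set. The model, synthetic data, trigger construction, and round/epoch counts are all hypothetical simplifications; in particular, the paper's stepwise training procedure is not reproduced here.

```python
# Sketch of a backdoor-style federated watermark: local backdoor
# embedding, FedAvg aggregation, then trigger-set verification.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, DIM = 3, 20
CENTROIDS = 2.0 * rng.normal(size=(NUM_CLASSES, DIM))  # main-task class centers


def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


def train_local(w, X, y, lr=0.5, epochs=50):
    """Gradient descent for softmax regression; w has shape (DIM, NUM_CLASSES)."""
    onehot = np.eye(NUM_CLASSES)[y]
    for _ in range(epochs):
        grad = X.T @ (softmax(X @ w) - onehot) / len(X)
        w = w - lr * grad
    return w


def accuracy(w, X, y):
    return float(((X @ w).argmax(axis=1) == y).mean())


def make_main_data(n):
    y = rng.integers(0, NUM_CLASSES, n)
    return CENTROIDS[y] + rng.normal(size=(n, DIM)), y


def make_trigger_set(target_class, n=20):
    # Private watermark key: out-of-distribution inputs, all relabeled
    # to one target class chosen by the client.
    return 3.0 * rng.normal(size=(n, DIM)), np.full(n, target_class)


# Each client holds main-task data plus its own private trigger set.
clients = []
for target in range(NUM_CLASSES):
    Xm, ym = make_main_data(300)
    Xt, yt = make_trigger_set(target)
    clients.append((np.vstack([Xm, Xt]), np.concatenate([ym, yt]), Xt, yt))

# Federated rounds: local training on (main data + trigger set), then FedAvg.
global_w = np.zeros((DIM, NUM_CLASSES))
for _ in range(15):
    local_ws = [train_local(global_w.copy(), Xtr, ytr)
                for Xtr, ytr, _, _ in clients]
    global_w = np.mean(local_ws, axis=0)

# Verification: main-task accuracy plus each client's trigger-set accuracy.
Xtest, ytest = make_main_data(500)
print("main-task accuracy:", accuracy(global_w, Xtest, ytest))
for i, (_, _, Xt, yt) in enumerate(clients):
    print(f"client {i} trigger-set accuracy:", accuracy(global_w, Xt, yt))
```

The trigger-set accuracies printed at the end play the role of watermark retention: a client proves ownership by showing that the global model classifies its private trigger inputs into the target class it chose, while the held-out main-task accuracy shows the watermark does not disturb normal behavior.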