In the realm of Intelligent Railway Transportation Systems, effective multi-party collaboration is crucial due to concerns over privacy and data silos. Vertical Federated Learning (VFL) has emerged as a promising approach to facilitate such collaboration, allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data. However, existing works have highlighted VFL's susceptibility to privacy inference attacks, where an honest-but-curious server could potentially reconstruct a client's raw data from the embeddings uploaded by that client. This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems. In this paper, we introduce SensFL, a novel privacy-enhancing method to defend against privacy inference attacks in VFL. Specifically, SensFL integrates regularization of the sensitivity of embeddings with respect to the original data into the model training process, effectively limiting the information contained in shared embeddings. By reducing the sensitivity of embeddings to the original data, SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings. Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL. The experimental results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task. These results underscore SensFL's potential to advance privacy protection technologies within VFL-based intelligent railway systems, addressing critical security concerns in collaborative learning environments.
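A minimal sketch of the core idea, a sensitivity penalty on the client-side embedding, assuming a PyTorch client encoder `client_model`, a server-side `top_model`, and a penalty weight `lam` (all names and the one-backward gradient proxy are illustrative assumptions, not the paper's exact formulation):

```python
import torch

def sensitivity_penalty(client_model, x):
    """Cheap proxy for the embedding's sensitivity to the input: the squared
    norm of the gradient of the summed embedding with respect to x."""
    x = x.clone().requires_grad_(True)
    z = client_model(x)  # embedding that would be shared with the server
    grad = torch.autograd.grad(z.sum(), x, create_graph=True)[0]
    return grad.pow(2).sum() / x.shape[0]

def training_step(client_model, top_model, x, y, optimizer, lam=0.1):
    optimizer.zero_grad()
    z = client_model(x)
    loss = torch.nn.functional.cross_entropy(top_model(z), y)
    # task loss plus the sensitivity regularizer that limits leaked information
    loss = loss + lam * sensitivity_penalty(client_model, x)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Larger `lam` makes the embedding flatter with respect to the raw input, which is what makes reconstruction harder, at some cost to task accuracy.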
Mental health is a significant issue worldwide, and the use of technology to support mental health is a growing trend, aiming to alleviate the workload on healthcare professionals and to aid individuals. Numerous applications have been developed to address the challenges of intelligent healthcare systems. However, because mental health data is sensitive, privacy concerns have emerged, and federated learning has therefore attracted attention. This research reviews studies on federated learning and mental health in the context of intelligent healthcare systems. It explores various dimensions of federated learning in mental health, such as datasets (their types and sources), applications categorized by mental health symptoms, federated mental health frameworks, federated machine learning, federated deep learning, and the benefits of federated learning in mental health applications. The survey evaluates the current state of mental health applications, focusing on the role of Federated Learning (FL) and the related privacy and data security concerns, and provides valuable insights into how these applications are emerging and evolving, with particular emphasis on FL's impact.
Benefiting from the development of Federated Learning (FL) and distributed communication systems, large-scale intelligent applications have become possible. Distributed devices not only provide adequate training data, but also introduce privacy leakage and energy consumption. How to optimize the energy consumption of distributed communication systems while ensuring user privacy and model accuracy has become an urgent challenge. In this paper, we define FL as a three-layer architecture comprising users, agents, and a server. To find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we model the FL training process as a game. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in a single game, and then derive, through a repeated game, an incentive mechanism that meets social norms. The experimental results show that the Nash equilibrium we obtain is consistent with real-world behavior, and that the proposed incentive mechanism encourages users to submit high-quality data in FL. Over multiple rounds of play, the incentive mechanism helps all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.
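A toy illustration of the kind of utility trade-off such a game formalizes, with a hypothetical payoff that rewards contributed data quality while charging energy and privacy costs (all coefficients and the functional form are illustrative assumptions, not the paper's model):

```python
import numpy as np

def user_utility(quality, reward_rate=2.0, energy_cost=0.8, privacy_cost=0.5):
    """Single-round payoff: reward grows with data quality with diminishing
    returns, while energy and privacy losses grow roughly linearly."""
    return reward_rate * np.log1p(quality) - (energy_cost + privacy_cost) * quality

# Best response of a user to the server's reward rate, found by grid search.
qualities = np.linspace(0.0, 1.0, 101)
best_q = qualities[np.argmax([user_utility(q) for q in qualities])]
print(f"best contributed quality under these payoffs: {best_q:.2f}")
```

Changing `reward_rate` shifts the user's best response, which is the lever an incentive mechanism tunes over repeated rounds.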
Although Federated Deep Learning (FDL) enables distributed machine learning in the Internet of Vehicles (IoV), it requires multiple clients to upload model parameters, which still incurs unavoidable communication overhead and data privacy risks. The recently proposed Swarm Learning (SL) provides a decentralized machine learning approach that combines edge computing with blockchain-based coordination. This paper proposes a Swarm-Federated Deep Learning framework for the IoV system (IoV-SFDL) that integrates SL into the FDL framework. The IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on blockchain-empowered SL, and then aggregates the global FDL model among different SL groups with a credibility-weights prediction algorithm. Extensive experimental results show that, compared with the baseline frameworks, the proposed IoV-SFDL framework reduces client-to-server communication overhead by 16.72%, while model performance improves by about 5.02% for the same number of training iterations.
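A minimal sketch of credibility-weighted aggregation across SL groups, assuming each group supplies a PyTorch state dict of identical shapes and a scalar credibility score (the normalization rule is an assumption, not the paper's exact prediction algorithm):

```python
import torch

def credibility_weighted_average(state_dicts, credibilities):
    """Aggregate group models, weighting each by its normalized credibility."""
    weights = torch.tensor(credibilities, dtype=torch.float32)
    weights = weights / weights.sum()
    global_state = {}
    for key in state_dicts[0]:
        stacked = torch.stack([sd[key].float() for sd in state_dicts], dim=0)
        # broadcast the per-group weight over the parameter dimensions
        shaped = weights.view(-1, *([1] * (stacked.dim() - 1)))
        global_state[key] = (shaped * stacked).sum(dim=0)
    return global_state
```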
High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to keep local data-learning models efficient while preventing privacy leakage in a high-mobility environment. To protect data privacy and improve learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as the local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. To improve resource scheduling efficiency in FBL, a double Davidon–Fletcher–Powell (DDFP) algorithm is presented to solve the time slot allocation and RIS configuration problem. Based on the resource scheduling results, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. Simulation results show that the proposed FBL framework outperforms the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
The application of artificial intelligence technology in the Internet of Vehicles (IoV) has attracted great research interest, with the goal of enabling smart transportation and traffic management. Meanwhile, concerns have been raised over the security and privacy of the vast amounts of traffic and vehicle data. In this regard, Federated Learning (FL), with its privacy protection features, is considered a highly promising solution. However, in the FL process, the server side may exploit its dominant role in model aggregation to steal sensitive user information, while the client side may upload malicious data to compromise the training of the global model. Most existing privacy-preserving FL schemes in the IoV fail to deal with threats from both sides at the same time. In this paper, we propose a blockchain-based privacy-preserving federated learning scheme named BPFL, which uses blockchain as the underlying distributed framework of FL. We improve the Multi-Krum technique and combine it with homomorphic encryption to achieve ciphertext-level model aggregation and model filtering, enabling verifiability of the local models while preserving privacy. Additionally, we develop a reputation-based incentive mechanism to encourage users in the IoV to participate actively and honestly in federated learning. The security analysis and performance evaluations show that the proposed scheme meets the security requirements and improves the performance of the FL model.
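For reference, a plaintext sketch of the standard Multi-Krum selection rule that BPFL builds on (the paper performs this filtering over homomorphically encrypted updates; this simplified version operates on raw flattened parameter vectors):

```python
import numpy as np

def multi_krum(updates, n_byzantine, n_selected):
    """Select the n_selected updates with the smallest Krum scores and average them.
    Each update is a flattened parameter vector; n_byzantine is the assumed
    number of malicious clients f."""
    n = len(updates)
    dists = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    k = n - n_byzantine - 2  # number of closest neighbours counted in each score
    scores = []
    for i in range(n):
        nearest = np.sort(dists[i][np.arange(n) != i])[:k]
        scores.append(nearest.sum())
    chosen = np.argsort(scores)[:n_selected]
    return np.mean([updates[i] for i in chosen], axis=0), chosen
```

An update far from every honest cluster accumulates a large score and is filtered out before aggregation.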
Federated Learning (FL), a burgeoning technology, has received increasing attention due to its privacy protection capability. However, the base algorithm FedAvg is vulnerable to so-called backdoor attacks. Prior researchers have proposed several robust aggregation methods. Unfortunately, due to the hidden nature of backdoor attacks, many of these aggregation methods are unable to defend against them. Moreover, attackers have recently proposed hiding methods that further improve the stealthiness of backdoor attacks, causing all existing robust aggregation methods to fail. To tackle the threat of backdoor attacks, we propose a new aggregation method, X-raying Models with A Matrix (XMAM), to reveal malicious local model updates submitted by backdoor attackers. We observe that the output of the Softmax layer exhibits distinguishable patterns between malicious and benign updates; unlike existing aggregation algorithms, we therefore focus on the Softmax layer's output, in which backdoor attackers find it difficult to hide their malicious behavior. Specifically, like a medical X-ray examination, we probe the collected local model updates by feeding a matrix as input and collecting their Softmax-layer outputs. We then exclude updates whose outputs are abnormal by clustering. Without requiring any training dataset on the server, extensive evaluations show that XMAM can effectively distinguish malicious local model updates from benign ones. For instance, when other methods fail to defend against backdoor attacks with no more than 20% malicious clients, our method can tolerate 45% malicious clients in the black-box mode and about 30% in the Projected Gradient Descent (PGD) mode. Moreover, under adaptive attacks, the results demonstrate that XMAM can still complete the global model training task even with 40% malicious clients. Finally, we analyze the screening complexity of our method and compare its real screening time with other methods; the results show that XMAM is about 10 to 10,000 times faster than existing methods.
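A minimal sketch of the probing step, assuming PyTorch classifiers reconstructed from each client's update and a random probe matrix; the two-cluster majority rule here is a simple illustration, not the paper's exact screening procedure:

```python
import torch
import numpy as np
from sklearn.cluster import KMeans

def xray_softmax_signatures(models, input_shape, n_probes=8, seed=0):
    """Feed the same random probe batch through every candidate model and
    record the Softmax-layer outputs as that model's signature."""
    torch.manual_seed(seed)
    probe = torch.rand(n_probes, *input_shape)
    sigs = []
    for m in models:
        m.eval()
        with torch.no_grad():
            sigs.append(torch.softmax(m(probe), dim=1).flatten().numpy())
    return np.stack(sigs)

def filter_by_clustering(models, input_shape):
    """Keep the majority cluster of signatures; treat the rest as suspicious."""
    sigs = xray_softmax_signatures(models, input_shape)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(sigs)
    majority = np.bincount(labels).argmax()
    return [m for m, label in zip(models, labels) if label == majority]
```

No server-side training data is needed; the probe is an arbitrary matrix, which is the point of the "X-ray" analogy.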
In vehicular edge computing (VEC), asynchronous federated learning (AFL) is used, where the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Because vehicles differ in the amount of local data, computing capability, and location, renewing the global model with the same weight for every vehicle is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also suffer Byzantine attacks, leading to the deterioration of its data. Based on deep reinforcement learning (DRL), we can consider these factors comprehensively, eliminate vehicles with poor performance as much as possible, and exclude vehicles that have suffered Byzantine attacks before AFL. At the same time, when aggregating in AFL, we can focus on vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a DRL-based vehicle selection scheme for VEC. The scheme takes into account vehicle mobility, time-varying channel conditions, time-varying computational resources, differing data amounts, the transmission channel status of vehicles, and Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
In the assessment of car insurance claims, the claim rate presents a highly skewed probability distribution, which is typically modeled using the Tweedie distribution. The traditional approach to obtaining a Tweedie regression model involves training on a centralized dataset; when the data is provided by multiple parties, training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge. To address this issue, this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting across data silos. The algorithm keeps sensitive data local and uses privacy-preserving techniques to perform intersection operations between the two data-holding parties. After determining which entities are shared, the participants train the model locally using the shared entity data to obtain the intermediate parameters of the local generalized linear model. Homomorphic encryption is introduced to exchange and update these intermediate parameters in order to collaboratively complete the joint training of the car insurance rate-setting model. Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of data from both parties without exchanging data. The assessment results of the scheme approach those of a Tweedie regression model learned from centralized data, and outperform a Tweedie regression model learned independently by a single party.
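For intuition, a centralized (non-federated) Tweedie GLM baseline of the kind the federated algorithm approximates, using scikit-learn; the power parameter 1.5 and the synthetic claim data are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # policyholder features
mu = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 2])     # underlying mean claim rate
# crude compound Poisson-gamma style target: many exact zeros, skewed positives
y = np.where(rng.poisson(0.3 * mu) > 0, rng.gamma(2.0, mu), 0.0)

# power in (1, 2) selects a compound Poisson-gamma Tweedie family
model = TweedieRegressor(power=1.5, alpha=0.1, link="log", max_iter=1000)
model.fit(X, y)
print("coefficients:", model.coef_)
```

In the vertical setting, each party would hold a subset of the columns of X, and the equivalent of this fit is assembled from locally computed GLM intermediate quantities exchanged under homomorphic encryption.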
Federated learning is an innovative machine learning technique that deals with centralized data storage issues while maintaining privacy and security. It involves constructing machine learning models using datasets spread across several data centers, including medical facilities, clinical research facilities, Internet of Things devices, and even mobile devices. The main goal of federated learning is to build robust models that benefit from the collective knowledge of these disparate datasets without centralizing sensitive information, reducing the risk of data loss, privacy breaches, or data exposure. The application of federated learning in the healthcare industry holds significant promise due to the wealth of data generated from sources such as patient records, medical imaging, wearable devices, and clinical research surveys. This research conducts a systematic evaluation and highlights essential issues for the selection and implementation of federated learning approaches in healthcare, assessing the effectiveness of federated learning strategies in this field. It offers a systematic analysis of federated learning in the healthcare domain, including the evaluation metrics employed. In addition, this study highlights the increasing interest in federated learning applications in healthcare among scholars and provides foundations for further studies.
With the rapid development of the Internet, network security and data privacy are increasingly valued. Although classical Network Intrusion Detection Systems (NIDS) based on Deep Learning (DL) models can provide good detection accuracy, collecting samples for centralized training brings a huge risk of data privacy leakage. Furthermore, training supervised deep learning models requires a large number of labeled samples, which is usually cumbersome, and the "black-box" problem makes DL-based NIDS models hard to trust. In this paper, we propose a trusted Federated Learning (FL) traffic intrusion detection method called FL-TIDS to address these problems. In FL-TIDS, we design an unsupervised intrusion detection model based on autoencoders that alleviates the reliance on labeled samples. At the same time, we use FL for model training to protect data privacy. In addition, we design an improved SHAP interpretability method based on the chi-square test to perform interpretable analysis of the trained model. We conducted several experiments to evaluate the proposed FL-TIDS. We first determine experimentally the structure and the number of neurons of the unsupervised autoencoder model. Second, we evaluated the proposed method using the UNSW-NB15 and CICIDS2017 datasets. The experimental results show that the unsupervised autoencoder model outperforms the seven other intrusion detection models in terms of precision, recall, and F1-score. Federated learning is then used to train the intrusion detection model, and the results indicate that the federated model is more accurate than the locally trained model. Finally, we use the improved chi-square-based SHAP explainability method to analyze the model; the analysis shows that the identification characteristics of the model are consistent with the attack characteristics, and that the model is reliable.
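A minimal sketch of autoencoder-based intrusion detection of the kind described above, assuming preprocessed flow features as a float tensor; the layer sizes and the quantile threshold rule are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FlowAutoencoder(nn.Module):
    def __init__(self, n_features, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit_and_threshold(model, benign_x, epochs=20, lr=1e-3, quantile=0.99):
    """Train on benign traffic only; flag flows whose reconstruction error
    exceeds a high quantile of the training errors."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(benign_x), benign_x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        errors = ((model(benign_x) - benign_x) ** 2).mean(dim=1)
    return torch.quantile(errors, quantile).item()
```

In the federated setting, each site trains such a model locally and only the autoencoder weights are aggregated, never the traffic samples.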
Hybrid precoding is considered a promising low-cost technique for millimeter wave (mm-wave) massive Multi-Input Multi-Output (MIMO) systems. In this work, targeting time-varying propagation circumstances, we propose an online hybrid beamforming scheme based on semi-supervised Incremental Learning (IL). First, given the constant-modulus constraint on the analog beamformer and combiner, we propose a new broad-network-based structure for the hybrid beamforming design model. Compared with the existing network structure, the proposed structure achieves better transmission performance and lower complexity. Moreover, to further enhance the efficiency of IL, we combine a semi-supervised graph with IL and propose a hybrid beamforming scheme based on chunk-by-chunk semi-supervised learning, in which only a few transmissions are required to calculate labels and all other unlabelled transmissions are also placed into a training data chunk. Unlike the existing single-by-single approach, where transmissions that occur during the model update are not taken into consideration, in the proposed method all transmissions, even those made during the model update, contribute to the update. Because the amount of unlabelled transmissions during the model update is very large and they also carry information, these unlabelled channel data enhance the prediction performance to some extent. Simulation results demonstrate that the spectral efficiency of the proposed method outperforms that of the existing single-by-single approach. In addition, we prove that the general complexity of the proposed method is lower than that of the existing approach and give the condition under which its absolute complexity is also lower.
Federated learning has been explored as a promising solution for training machine learning models at the network edge without sharing private user data. With limited resources at the edge, new solutions must be developed to leverage the available software and hardware resources, as existing solutions did not focus on resource management for the network edge, especially for federated learning. In this paper, we describe recent work on resource management at the edge and explore the challenges and future directions for enabling the execution of federated learning at the edge. Problems such as resource discovery, deployment, load balancing, migration, and energy efficiency are discussed.
Scalability and the privacy of personal information are vital for training and deploying large-scale deep learning models. Federated learning trains models on private, device-resident data by aggregating weights from various devices and taking advantage of the device-agnostic environment of web browsers. Nevertheless, relying on a central server for browser-based federated systems can limit scalability and interfere with the training process as the number of clients grows. Additionally, information about the training dataset can potentially be extracted from the distributed weights, reducing the privacy of the local data used for training. In this research paper, we investigate the challenges of scalability and data privacy in order to increase the efficiency of distributed training models. As a result, we propose a Web-Federated Learning exchange (WebFLex) framework, which aims to improve the decentralization of the federated learning process. WebFLex is designed for distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices. Furthermore, WebFLex utilizes peer-to-peer interactions and secure weight exchanges over browser-to-browser Web Real-Time Communication (WebRTC), effectively eliminating the need for a central server. WebFLex has been evaluated in various setups using the MNIST dataset. Experimental results show WebFLex's ability to improve the scalability of federated learning systems, allowing a smooth increase in the number of participating devices without central data aggregation. In addition, WebFLex can maintain a robust federated learning procedure even when faced with device disconnections and network variability. It also improves data privacy by adding artificial noise, which achieves an appropriate balance between accuracy and privacy preservation.
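A minimal sketch of the artificial-noise idea mentioned above, perturbing shared weights before a peer-to-peer exchange; the clip-then-Gaussian mechanism and the scale parameter are illustrative assumptions, not WebFLex's exact scheme:

```python
import numpy as np

def perturb_weights(weights, clip_norm=1.0, noise_scale=0.05, rng=None):
    """Clip the weight vector to a norm bound, then add Gaussian noise before
    sharing it with peers. Larger noise_scale gives more privacy and,
    typically, lower accuracy."""
    rng = rng or np.random.default_rng()
    w = np.asarray(weights, dtype=np.float64)
    norm = np.linalg.norm(w)
    if norm > clip_norm:
        w = w * (clip_norm / norm)
    return w + rng.normal(0.0, noise_scale * clip_norm, size=w.shape)
```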
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, the tutorial provides exemplary cases of study from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary cases of study, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary cases of study with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to serve as a reference for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks that include sensors, actuators, appliances, and cyber services. The complexity and heterogeneity of smart cities have made them vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning. Federated Learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Although the development of privacy-preserving FL has drawn great research interest, current research concentrates mainly on FL with independent and identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, in which an adversary can pose as a contributor participating in the training process in order to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which achieves data protection against privacy-related GAN attacks along with high classification rates on non-i.i.d. data. PP-FDL enables fog nodes to cooperate in training the FDL model in a way that ensures contributors have no access to each other's data, with class probabilities protected by a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL achieves data protection and that the framework outperforms the other three state-of-the-art models with accuracy improvements of 3%–8%.
As a representative emerging machine learning technique, federated learning (FL) has gained considerable popularity for its special feature of "making data available but not visible". However, potential problems remain, including privacy breaches, imbalances in payment, and inequitable distribution. These shortcomings make devices reluctant to contribute relevant data to FL, or even refuse to participate in it. Therefore, in the application of FL, an important but challenging issue is to motivate as many participants as possible to provide high-quality data. In this paper, we propose an incentive mechanism for FL based on continuous zero-determinant (CZD) strategies from the perspective of game theory. We first model the interaction between the server and the devices during the FL process as a continuous iterative game. We then apply the CZD strategies, first for two players and then for multiple players, to optimize the social welfare of FL, and we prove that the server can keep social welfare at a high and stable level. Subsequently, we design an incentive mechanism based on the CZD strategies to attract devices to contribute all of their high-accuracy data to FL. Finally, we perform simulations to demonstrate that the proposed CZD-based incentive mechanism can indeed generate high and stable social welfare in FL.
As a distributed machine learning method, federated learning (FL) has the advantage of naturally protecting data privacy. It keeps data local and trains local models on that data, thereby protecting the privacy of local data and effectively alleviating the problems of data islands and privacy protection. However, existing research shows that attackers may still steal user information by analyzing the parameters exchanged during federated learning training and the aggregation parameters on the server side. To solve this problem, differential privacy (DP) techniques are widely used for privacy protection in federated learning. However, adding Gaussian noise perturbations to the data degrades model learning performance. To address these issues, this paper proposes a differential privacy federated learning scheme based on adaptive Gaussian noise (DPFL-AGN). To protect the data privacy and security of the federated learning training process, adaptive Gaussian noise is added during training to hide the real parameters uploaded by the clients. In addition, this paper proposes an adaptive noise reduction method: as the model converges, the Gaussian noise in the later stages of training is reduced adaptively. A series of simulation experiments on the real MNIST and CIFAR-10 datasets shows that the DPFL-AGN algorithm performs better than the other algorithms.
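A minimal sketch of adaptive noise injection on a client update, where the noise multiplier decays across rounds as a stand-in for the paper's convergence-driven schedule; the decay rule and constants are assumptions, not DPFL-AGN's exact mechanism:

```python
import numpy as np

def noisy_update(update, round_idx, clip_norm=1.0, sigma0=1.2, decay=0.05, rng=None):
    """Clip the local update, then add Gaussian noise whose multiplier shrinks
    as training progresses, so later (near-convergence) rounds are perturbed less."""
    rng = rng or np.random.default_rng()
    u = np.asarray(update, dtype=np.float64)
    norm = np.linalg.norm(u)
    if norm > clip_norm:
        u = u * (clip_norm / norm)
    sigma = sigma0 / (1.0 + decay * round_idx)   # adaptive noise multiplier
    return u + rng.normal(0.0, sigma * clip_norm, size=u.shape)
```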
Exploring open fields with coordinated unmanned vehicles is popular in academia and industry. One of the most impressive applicable approaches is the Internet of Vehicles (IoV), which connects vehicles, road infrastructure, and communication facilities to support exploration tasks. However, coordinating the acquisition of information from multiple vehicles may put data privacy at risk. To this end, sharing high-quality experiences instead of raw data has become an urgent demand. This paper employs a Deep Reinforcement Learning (DRL) method to enable IoVs to generate training data with prioritized experiences and states, which can support the IoV in exploring the environment more efficiently. Moreover, a Federated Learning (FL) experience-sharing model is established to guarantee the vehicles' privacy. The numerical results show that the proposed method achieves a better successful sharing rate and more stable convergence than the baseline methods. The experiments also suggest that the proposed method can support agents without full information in accomplishing their tasks.
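A minimal sketch of prioritized experience selection of the kind used to decide which transitions are worth sharing, with sampling proportional to TD error; this is a standard construction, and the exponent and tuple layout are illustrative assumptions:

```python
import random

class PrioritizedBuffer:
    """Store transitions with priorities proportional to their TD error, so
    higher-error (more informative) experiences are shared/replayed more often."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        return random.choices(self.data, weights=self.priorities,
                              k=min(batch_size, len(self.data)))
```

Vehicles would share sampled high-priority experiences (or models trained on them) through the FL layer rather than raw sensor data.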
Data sharing and privacy protection are made possible by federated learning, which allows continuous model parameter sharing between several clients and a central server. Multiple reliable, high-quality clients must participate for the federated learning global model to be accurate in practical applications, but because the clients are independent, the central server cannot fully control their behavior. The central server has no way of knowing whether the model parameters provided by each client in a given round are correct, so clients may purposefully or unwittingly submit anomalous data, leading to abnormal behavior, such as becoming malicious attackers or defective clients. To reduce their negative consequences, it is crucial to quickly detect these abnormalities and to incentivize clients appropriately. In this paper, we propose a Federated Learning framework for Detecting and Incentivizing Abnormal Clients (FL-DIAC) to accomplish efficient and secure federated learning. For the abnormal-client detection problem in particular, we build a detector based on an autoencoder for anomaly detection and use it to identify anomalies and prevent the involvement of abnormal clients. Before the model parameters are fed into the detector, we propose a Fourier transform-based anomaly data detection method for dimensionality reduction, in order to reduce the computational complexity. Additionally, we create a credit-score-based incentive structure to encourage clients to participate actively in training. Three training models (CNN, MLP, and ResNet-18) and three datasets (MNIST, Fashion-MNIST, and CIFAR-10) are used in the experiments. According to the theoretical analysis and experimental findings, FL-DIAC is superior to other federated learning schemes of the same type in terms of effectiveness.
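A minimal sketch of the Fourier-transform dimensionality reduction step that precedes the detector, keeping only the lowest-frequency coefficients of a flattened client update; the number of retained coefficients is an illustrative assumption:

```python
import numpy as np

def fft_reduce(update, n_coeffs=64):
    """Flatten a client update, take its real FFT, and keep the magnitudes of
    the first n_coeffs frequencies as a compact feature vector for the
    autoencoder-based anomaly detector."""
    flat = np.asarray(update, dtype=np.float64).ravel()
    spectrum = np.fft.rfft(flat)
    feats = np.abs(spectrum[:n_coeffs])
    if feats.size < n_coeffs:                      # pad very short updates
        feats = np.pad(feats, (0, n_coeffs - feats.size))
    return feats
```

The detector then only has to score a fixed-length 64-dimensional vector per client instead of the full parameter vector, which is what reduces the screening cost.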
基金supported by Systematic Major Project of Shuohuang Railway Development Co.,Ltd.,National Energy Group(Grant Number:SHTL-23-31)Beijing Natural Science Foundation(U22B2027).
文摘In the realm of Intelligent Railway Transportation Systems,effective multi-party collaboration is crucial due to concerns over privacy and data silos.Vertical Federated Learning(VFL)has emerged as a promising approach to facilitate such collaboration,allowing diverse entities to collectively enhance machine learning models without the need to share sensitive training data.However,existing works have highlighted VFL’s susceptibility to privacy inference attacks,where an honest but curious server could potentially reconstruct a client’s raw data from embeddings uploaded by the client.This vulnerability poses a significant threat to VFL-based intelligent railway transportation systems.In this paper,we introduce SensFL,a novel privacy-enhancing method to against privacy inference attacks in VFL.Specifically,SensFL integrates regularization of the sensitivity of embeddings to the original data into the model training process,effectively limiting the information contained in shared embeddings.By reducing the sensitivity of embeddings to the original data,SensFL can effectively resist reverse privacy attacks and prevent the reconstruction of the original data from the embeddings.Extensive experiments were conducted on four distinct datasets and three different models to demonstrate the efficacy of SensFL.Experiment results show that SensFL can effectively mitigate privacy inference attacks while maintaining the accuracy of the primary learning task.These results underscore SensFL’s potential to advance privacy protection technologies within VFL-based intelligent railway systems,addressing critical security concerns in collaborative learning environments.
文摘Mental health is a significant issue worldwide,and the utilization of technology to assist mental health has seen a growing trend.This aims to alleviate the workload on healthcare professionals and aid individuals.Numerous applications have been developed to support the challenges in intelligent healthcare systems.However,because mental health data is sensitive,privacy concerns have emerged.Federated learning has gotten some attention.This research reviews the studies on federated learning and mental health related to solving the issue of intelligent healthcare systems.It explores various dimensions of federated learning in mental health,such as datasets(their types and sources),applications categorized based on mental health symptoms,federated mental health frameworks,federated machine learning,federated deep learning,and the benefits of federated learning in mental health applications.This research conducts surveys to evaluate the current state of mental health applications,mainly focusing on the role of Federated Learning(FL)and related privacy and data security concerns.The survey provides valuable insights into how these applications are emerging and evolving,specifically emphasizing FL’s impact.
基金sponsored by the National Key R&D Program of China(No.2018YFB2100400)the National Natural Science Foundation of China(No.62002077,61872100)+4 种基金the Major Research Plan of the National Natural Science Foundation of China(92167203)the Guangdong Basic and Applied Basic Research Foundation(No.2020A1515110385)the China Postdoctoral Science Foundation(No.2022M710860)the Zhejiang Lab(No.2020NF0AB01)Guangzhou Science and Technology Plan Project(202102010440).
文摘Benefiting from the development of Federated Learning(FL)and distributed communication systems,large-scale intelligent applications become possible.Distributed devices not only provide adequate training data,but also cause privacy leakage and energy consumption.How to optimize the energy consumption in distributed communication systems,while ensuring the privacy of users and model accuracy,has become an urgent challenge.In this paper,we define the FL as a 3-layer architecture including users,agents and server.In order to find a balance among model training accuracy,privacy-preserving effect,and energy consumption,we design the training process of FL as game models.We use an extensive game tree to analyze the key elements that influence the players’decisions in the single game,and then find the incentive mechanism that meet the social norms through the repeated game.The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality,and the proposed incentive mechanism can also promote users to submit high-quality data in FL.Following the multiple rounds of play,the incentive mechanism can help all players find the optimal strategies for energy,privacy,and accuracy of FL in distributed communication systems.
基金supported by the National Natural Science Foundation of China(NSFC)under Grant 62071179.
文摘Although Federated Deep Learning(FDL)enables distributed machine learning in the Internet of Vehicles(IoV),it requires multiple clients to upload model parameters,thus still existing unavoidable communication overhead and data privacy risks.The recently proposed Swarm Learning(SL)provides a decentralized machine learning approach for unit edge computing and blockchain-based coordination.A Swarm-Federated Deep Learning framework in the IoV system(IoV-SFDL)that integrates SL into the FDL framework is proposed in this paper.The IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on the blockchain empowered SL,then aggregates the global FDL model among different SL groups with a credibility weights prediction algorithm.Extensive experimental results show that compared with the baseline frameworks,the proposed IoV-SFDL framework reduces the overhead of client-to-server communication by 16.72%,while the model performance improves by about 5.02%for the same training iterations.
基金supported in part by the National Natural Science Foundation of China(62371116 and 62231020)in part by the Science and Technology Project of Hebei Province Education Department(ZD2022164)+2 种基金in part by the Fundamental Research Funds for the Central Universities(N2223031)in part by the Open Research Project of Xidian University(ISN24-08)Key Laboratory of Cognitive Radio and Information Processing,Ministry of Education(Guilin University of Electronic Technology,China,CRKL210203)。
文摘High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles(IoVs).However,it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high mobility environment.In order to protect data privacy and improve data learning efficiency in knowledge sharing,we propose an asynchronous federated broad learning(FBL)framework that integrates broad learning(BL)into federated learning(FL).In FBL,we design a broad fully connected model(BFCM)as a local model for training client data.To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients,we construct a joint resource allocation and reconfigurable intelligent surface(RIS)configuration optimization framework for FBL.The problem is decoupled into two convex subproblems.Aiming to improve the resource scheduling efficiency in FBL,a double Davidon–Fletcher–Powell(DDFP)algorithm is presented to solve the time slot allocation and RIS configuration problem.Based on the results of resource scheduling,we design a reward-allocation algorithm based on federated incentive learning(FIL)in FBL to compensate clients for their costs.The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency,accuracy,and cost for knowledge sharing in the IoV.
基金supported by the National Natural Science Foundation of China under Grant 61972148.
文摘The application of artificial intelligence technology in Internet of Vehicles(lov)has attracted great research interests with the goal of enabling smart transportation and traffic management.Meanwhile,concerns have been raised over the security and privacy of the tons of traffic and vehicle data.In this regard,Federated Learning(FL)with privacy protection features is considered a highly promising solution.However,in the FL process,the server side may take advantage of its dominant role in model aggregation to steal sensitive information of users,while the client side may also upload malicious data to compromise the training of the global model.Most existing privacy-preserving FL schemes in IoV fail to deal with threats from both of these two sides at the same time.In this paper,we propose a Blockchain based Privacy-preserving Federated Learning scheme named BPFL,which uses blockchain as the underlying distributed framework of FL.We improve the Multi-Krum technology and combine it with the homomorphic encryption to achieve ciphertext-level model aggregation and model filtering,which can enable the verifiability of the local models while achieving privacy-preservation.Additionally,we develop a reputation-based incentive mechanism to encourage users in IoV to actively participate in the federated learning and to practice honesty.The security analysis and performance evaluations are conducted to show that the proposed scheme can meet the security requirements and improve the performance of the FL model.
基金Supported by the Fundamental Research Funds for the Central Universities(328202204)。
文摘Federated Learning(FL),a burgeoning technology,has received increasing attention due to its privacy protection capability.However,the base algorithm FedAvg is vulnerable when it suffers from so-called backdoor attacks.Former researchers proposed several robust aggregation methods.Unfortunately,due to the hidden characteristic of backdoor attacks,many of these aggregation methods are unable to defend against backdoor attacks.What's more,the attackers recently have proposed some hiding methods that further improve backdoor attacks'stealthiness,making all the existing robust aggregation methods fail.To tackle the threat of backdoor attacks,we propose a new aggregation method,X-raying Models with A Matrix(XMAM),to reveal the malicious local model updates submitted by the backdoor attackers.Since we observe that the output of the Softmax layer exhibits distinguishable patterns between malicious and benign updates,unlike the existing aggregation algorithms,we focus on the Softmax layer's output in which the backdoor attackers are difficult to hide their malicious behavior.Specifically,like medical X-ray examinations,we investigate the collected local model updates by using a matrix as an input to get their Softmax layer's outputs.Then,we preclude updates whose outputs are abnormal by clustering.Without any training dataset in the server,the extensive evaluations show that our XMAM can effectively distinguish malicious local model updates from benign ones.For instance,when other methods fail to defend against the backdoor attacks at no more than 20%malicious clients,our method can tolerate 45%malicious clients in the black-box mode and about 30%in Projected Gradient Descent(PGD)mode.Besides,under adaptive attacks,the results demonstrate that XMAM can still complete the global model training task even when there are 40%malicious clients.Finally,we analyze our method's screening complexity and compare the real screening time with other methods.The results show that XMAM is about 10–10000 times faster than the existing methods.
基金supported in part by the National Natural Science Foundation of China(No.61701197)in part by the National Key Research and Development Program of China(No.2021YFA1000500(4))in part by the 111 Project(No.B23008).
文摘In vehicle edge computing(VEC),asynchronous federated learning(AFL)is used,where the edge receives a local model and updates the global model,effectively reducing the global aggregation latency.Due to different amounts of local data,computing capabilities and locations of the vehicles,renewing the global model with same weight is inappropriate.The above factors will affect the local calculation time and upload time of the local model,and the vehicle may also be affected by Byzantine attacks,leading to the deterioration of the vehicle data.However,based on deep reinforcement learning(DRL),we can consider these factors comprehensively to eliminate vehicles with poor performance as much as possible and exclude vehicles that have suffered Byzantine attacks before AFL.At the same time,when aggregating AFL,we can focus on those vehicles with better performance to improve the accuracy and safety of the system.In this paper,we proposed a vehicle selection scheme based on DRL in VEC.In this scheme,vehicle’s mobility,channel conditions with temporal variations,computational resources with temporal variations,different data amount,transmission channel status of vehicles as well as Byzantine attacks were taken into account.Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
基金This research was funded by the National Natural Science Foundation of China(No.62272124)the National Key Research and Development Program of China(No.2022YFB2701401)+3 种基金Guizhou Province Science and Technology Plan Project(Grant Nos.Qiankehe Paltform Talent[2020]5017)The Research Project of Guizhou University for Talent Introduction(No.[2020]61)the Cultivation Project of Guizhou University(No.[2019]56)the Open Fund of Key Laboratory of Advanced Manufacturing Technology,Ministry of Education(GZUAMT2021KF[01]).
文摘In the assessment of car insurance claims,the claim rate for car insurance presents a highly skewed probability distribution,which is typically modeled using Tweedie distribution.The traditional approach to obtaining the Tweedie regression model involves training on a centralized dataset,when the data is provided by multiple parties,training a privacy-preserving Tweedie regression model without exchanging raw data becomes a challenge.To address this issue,this study introduces a novel vertical federated learning-based Tweedie regression algorithm for multi-party auto insurance rate setting in data silos.The algorithm can keep sensitive data locally and uses privacy-preserving techniques to achieve intersection operations between the two parties holding the data.After determining which entities are shared,the participants train the model locally using the shared entity data to obtain the local generalized linear model intermediate parameters.The homomorphic encryption algorithms are introduced to interact with and update the model intermediate parameters to collaboratively complete the joint training of the car insurance rate-setting model.Performance tests on two publicly available datasets show that the proposed federated Tweedie regression algorithm can effectively generate Tweedie regression models that leverage the value of data fromboth partieswithout exchanging data.The assessment results of the scheme approach those of the Tweedie regressionmodel learned fromcentralized data,and outperformthe Tweedie regressionmodel learned independently by a single party.
基金This work was supported by a research fund from Chosun University,2023。
文摘Federated learning is an innovative machine learning technique that deals with centralized data storage issues while maintaining privacy and security.It involves constructing machine learning models using datasets spread across several data centers,including medical facilities,clinical research facilities,Internet of Things devices,and even mobile devices.The main goal of federated learning is to improve robust models that benefit from the collective knowledge of these disparate datasets without centralizing sensitive information,reducing the risk of data loss,privacy breaches,or data exposure.The application of federated learning in the healthcare industry holds significant promise due to the wealth of data generated from various sources,such as patient records,medical imaging,wearable devices,and clinical research surveys.This research conducts a systematic evaluation and highlights essential issues for the selection and implementation of federated learning approaches in healthcare.It evaluates the effectiveness of federated learning strategies in the field of healthcare.It offers a systematic analysis of federated learning in the healthcare domain,encompassing the evaluation metrics employed.In addition,this study highlights the increasing interest in federated learning applications in healthcare among scholars and provides foundations for further studies.
基金supported by National Natural Science Fundation of China under Grant 61972208National Natural Science Fundation(General Program)of China under Grant 61972211+2 种基金National Key Research and Development Project of China under Grant 2020YFB1804700Future Network Innovation Research and Application Projects under Grant No.2021FNA020062021 Jiangsu Postgraduate Research Innovation Plan under Grant No.KYCX210794.
文摘With the rapid development of the Internet,network security and data privacy are increasingly valued.Although classical Network Intrusion Detection System(NIDS)based on Deep Learning(DL)models can provide good detection accuracy,but collecting samples for centralized training brings the huge risk of data privacy leakage.Furthermore,the training of supervised deep learning models requires a large number of labeled samples,which is usually cumbersome.The“black-box”problem also makes the DL models of NIDS untrustworthy.In this paper,we propose a trusted Federated Learning(FL)Traffic IDS method called FL-TIDS to address the above-mentioned problems.In FL-TIDS,we design an unsupervised intrusion detection model based on autoencoders that alleviates the reliance on marked samples.At the same time,we use FL for model training to protect data privacy.In addition,we design an improved SHAP interpretable method based on chi-square test to perform interpretable analysis of the trained model.We conducted several experiments to evaluate the proposed FL-TIDS.We first determine experimentally the structure and the number of neurons of the unsupervised AE model.Secondly,we evaluated the proposed method using the UNSW-NB15 and CICIDS2017 datasets.The exper-imental results show that the unsupervised AE model has better performance than the other 7 intrusion detection models in terms of precision,recall and f1-score.Then,federated learning is used to train the intrusion detection model.The experimental results indicate that the model is more accurate than the local learning model.Finally,we use an improved SHAP explainability method based on Chi-square test to analyze the explainability.The analysis results show that the identification characteristics of the model are consistent with the attack characteristics,and the model is reliable.
基金supported by the National Science Foundation of China under Grant No.62101467.
文摘Hybrid precoding is considered as a promising low-cost technique for millimeter wave(mm-wave)massive Multi-Input Multi-Output(MIMO)systems.In this work,referring to the time-varying propagation circumstances,with semi-supervised Incremental Learning(IL),we propose an online hybrid beamforming scheme.Firstly,given the constraint of constant modulus on analog beamformer and combiner,we propose a new broadnetwork-based structure for the design model of hybrid beamforming.Compared with the existing network structure,the proposed network structure can achieve better transmission performance and lower complexity.Moreover,to enhance the efficiency of IL further,by combining the semi-supervised graph with IL,we propose a hybrid beamforming scheme based on chunk-by-chunk semi-supervised learning,where only few transmissions are required to calculate the label and all other unlabelled transmissions would also be put into a training data chunk.Unlike the existing single-by-single approach where transmissions during the model update are not taken into the consideration of model update,all transmissions,even the ones during the model update,would make contributions to model update in the proposed method.During the model update,the amount of unlabelled transmissions is very large and they also carry some information,the prediction performance can be enhanced to some extent by these unlabelled channel data.Simulation results demonstrate the spectral efficiency of the proposed method outperforms that of the existing single-by-single approach.Besides,we prove the general complexity of the proposed method is lower than that of the existing approach and give the condition under which its absolute complexity outperforms that of the existing approach.
基金supported by CAPES,CNPq,and grant 15/24494-8,Sao Paulo Research Foundation(FAPESP).
文摘Federated learning has been explored as a promising solution for training machine learning models at the network edge,without sharing private user data.With limited resources at the edge,new solutions must be developed to leverage the software and hardware resources as the existing solutions did not focus on resource management for network edge,specially for federated learning.In this paper,we describe the recent work on resource manage-ment at the edge and explore the challenges and future directions to allow the execution of federated learning at the edge.Problems such as the discovery of resources,deployment,load balancing,migration,and energy effi-ciency are discussed in the paper.
基金This work has been funded by King Saud University,Riyadh,Saudi Arabia,through Researchers Supporting Project Number(RSPD2024R857).
文摘Scalability and information personal privacy are vital for training and deploying large-scale deep learning models.Federated learning trains models on exclusive information by aggregating weights from various devices and taking advantage of the device-agnostic environment of web browsers.Nevertheless,relying on a main central server for internet browser-based federated systems can prohibit scalability and interfere with the training process as a result of growing client numbers.Additionally,information relating to the training dataset can possibly be extracted from the distributed weights,potentially reducing the privacy of the local data used for training.In this research paper,we aim to investigate the challenges of scalability and data privacy to increase the efficiency of distributed training models.As a result,we propose a web-federated learning exchange(WebFLex)framework,which intends to improve the decentralization of the federated learning process.WebFLex is additionally developed to secure distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices.Furthermore,WebFLex utilizes peer-to-peer interactions and secure weight exchanges utilizing browser-to-browser web real-time communication(WebRTC),efficiently preventing the need for a main central server.WebFLex has actually been measured in various setups using the MNIST dataset.Experimental results show WebFLex’s ability to improve the scalability of federated learning systems,allowing a smooth increase in the number of participating devices without central data aggregation.In addition,WebFLex can maintain a durable federated learning procedure even when faced with device disconnections and network variability.Additionally,it improves data privacy by utilizing artificial noise,which accomplishes an appropriate balance between accuracy and privacy preservation.
Funding: Supported by the R&D&I, Spain grants PID2020-119478GB-I00 and PID2020-115832GB-I00 funded by MCIN/AEI/10.13039/501100011033. N. Rodríguez-Barroso was supported by the grant FPU18/04475 funded by MCIN/AEI/10.13039/501100011033 and by "ESF Investing in your future", Spain. J. Moyano was supported by a postdoctoral Juan de la Cierva Formación grant FJC2020-043823-I funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR. J. Del Ser acknowledges funding support from the Spanish Centro para el Desarrollo Tecnológico Industrial (CDTI) through the AI4ES project and from the Department of Education of the Basque Government (consolidated research group MATHMODE, IT1456-22).
Abstract: When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. Furthermore, the tutorial provides exemplary case studies from three complementary perspectives: i) foundations of FL, describing the main components of FL, from key elements to FL categories; ii) implementation guidelines and exemplary case studies, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing exemplary case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to serve as a reference for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
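To make the basic workflow concrete, here is a minimal FedAvg-style round in plain NumPy: each client fits a linear model locally and the server averages the parameters, weighted by client dataset size. This is a generic sketch of the canonical algorithm, not the source code shipped with the tutorial; all names and hyperparameters are illustrative.

```python
import numpy as np

def local_train(w, X, y, lr=0.1, epochs=5):
    # Local gradient descent on a least-squares objective.
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    # Server-side aggregation weighted by the number of samples per client.
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for n in (30, 60, 90):                       # three clients with different data volumes
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w + 0.05 * rng.normal(size=n)))

w_global = np.zeros(3)
for _ in range(20):                          # communication rounds
    local_models = [local_train(w_global, X, y) for X, y in clients]
    w_global = fed_avg(local_models, [len(y) for _, y in clients])
print(np.round(w_global, 2))                 # close to true_w without pooling raw data
```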
Abstract: With the prevalence of Internet of Things (IoT) systems, smart cities comprise complex networks including sensors, actuators, appliances, and cyber services. The complexity and heterogeneity of smart cities have made them vulnerable to sophisticated cyber-attacks, especially privacy-related attacks such as inference and data poisoning. Federated Learning (FL) has been regarded as a promising method to enable distributed learning with privacy-preserved intelligence in IoT applications. Even though developing privacy-preserving FL has drawn great research interest, current research concentrates mainly on FL with independent and identically distributed (i.i.d.) data, and few studies have addressed the non-i.i.d. setting. FL is known to be vulnerable to Generative Adversarial Network (GAN) attacks, where an adversary can pose as a contributor participating in the training process to acquire the private data of other contributors. This paper proposes an innovative Privacy Protection-based Federated Deep Learning (PP-FDL) framework, which achieves protection against privacy-related GAN attacks along with high classification rates on non-i.i.d. data. PP-FDL enables fog nodes to cooperate in training the FDL model in a way that ensures contributors have no access to each other's data, where class probabilities are protected using a private identifier generated for each class. The PP-FDL framework is evaluated for image classification using simple convolutional networks trained on the MNIST and CIFAR-10 datasets. The empirical results reveal that PP-FDL can achieve data protection and that the framework outperforms the other three state-of-the-art models with 3%–8% accuracy improvements.
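The per-class private identifier can be pictured with a toy helper: before class probabilities leave a contributor, the real class names are replaced by opaque random identifiers known only to that contributor. This is a hypothetical sketch of the general idea, not the PP-FDL protocol itself, and the function names are invented for illustration.

```python
import secrets

def make_class_identifiers(class_names):
    # One opaque identifier per class, kept private by the contributor.
    return {name: secrets.token_hex(8) for name in class_names}

def protect_probabilities(prob_by_class, identifiers):
    # Other parties see probability mass but not which class it belongs to.
    return {identifiers[name]: p for name, p in prob_by_class.items()}

classes = ["cat", "dog", "truck"]
ids = make_class_identifiers(classes)
probs = {"cat": 0.7, "dog": 0.2, "truck": 0.1}
print(protect_probabilities(probs, ids))   # identifiers instead of class names
```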
Funding: Partially supported by the National Natural Science Foundation of China (62173308), the Natural Science Foundation of Zhejiang Province of China (LR20F030001), and the Jinhua Science and Technology Project (2022-1-042).
Abstract: As a representative emerging machine learning technique, federated learning (FL) has gained considerable popularity for its special feature of "making data available but not visible". However, potential problems remain, including privacy breaches, imbalances in payment, and inequitable distribution. These shortcomings make devices reluctant to contribute relevant data to, or even refuse to participate in, FL. Therefore, an important but challenging issue in the application of FL is to motivate as many participants as possible to provide high-quality data. In this paper, we propose an incentive mechanism for FL based on continuous zero-determinant (CZD) strategies from the perspective of game theory. We first model the interaction between the server and the devices during the FL process as a continuous iterative game. We then apply the CZD strategies for two players, and subsequently for multiple players, to optimize the social welfare of FL, for which we prove that the server can keep social welfare at a high and stable level. Subsequently, we design an incentive mechanism based on the CZD strategies to attract devices to contribute all of their high-accuracy data to FL. Finally, we perform simulations to demonstrate that our proposed CZD-based incentive mechanism can indeed generate high and stable social welfare in FL.
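The incentive intuition can be toy-simulated as a repeated game in which the server's reward is conditioned on the data quality a device contributed, so that higher-quality contributions become more profitable for the device. The rule below is a deliberately simplified conditional-reward strategy with made-up payoffs, not the paper's continuous zero-determinant strategy.

```python
import numpy as np

def device_payoff(quality, reward, cost_per_quality=0.4):
    # Reward received minus the device's cost of producing data of this quality.
    return reward - cost_per_quality * quality

def server_payoff(quality, reward):
    # Model improvement from the data minus the reward paid out.
    return quality - reward

def reward_rule(quality):
    # Conditional incentive: pay more when higher-quality data was contributed.
    return 0.6 * quality

rng = np.random.default_rng(0)
quality, welfare = 0.2, []
for _ in range(200):
    # The device greedily tries a nearby quality level and keeps it if it pays off.
    trial = float(np.clip(quality + rng.normal(scale=0.05), 0.0, 1.0))
    if device_payoff(trial, reward_rule(trial)) >= device_payoff(quality, reward_rule(quality)):
        quality = trial
    r = reward_rule(quality)
    welfare.append(device_payoff(quality, r) + server_payoff(quality, r))

print(round(quality, 2), round(float(np.mean(welfare[-50:])), 2))  # quality climbs, welfare stays high
```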
Funding: Supported by the Sichuan Provincial Science and Technology Department Project under Grant 2019YFN0104, the Yibin Science and Technology Plan Project under Grant 2021GY008, and the Sichuan University of Science and Engineering Postgraduate Innovation Fund Project under Grant Y2022154.
Abstract: As a distributed machine learning method, federated learning (FL) has the advantage of naturally protecting data privacy. It keeps data local and trains local models on local data, thereby protecting the privacy of that data. Federated learning effectively alleviates the problems of data silos and privacy protection. However, existing research shows that attackers may still steal user information by analyzing the parameters exchanged during federated learning training and the aggregation parameters on the server side. To solve this problem, differential privacy (DP) techniques are widely used for privacy protection in federated learning. However, adding Gaussian noise perturbations to the data degrades the model's learning performance. To address these issues, this paper proposes a differential privacy federated learning scheme based on adaptive Gaussian noise (DPFL-AGN). To protect the data privacy and security of the federated learning training process, adaptive Gaussian noise is added during training to hide the real parameters uploaded by the client. In addition, this paper proposes an adaptive noise reduction method: as the model converges, the Gaussian noise in the later stages of training is reduced adaptively. A series of simulation experiments on the real MNIST and CIFAR-10 datasets shows that the DPFL-AGN algorithm performs better than the other algorithms.
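A minimal sketch of the adaptive-noise idea follows: Gaussian noise is added to the parameters a client uploads, and the noise scale is decayed in later rounds as a stand-in for "as the model converges". The exponential schedule, its constants, and the function names are assumptions for illustration, not the DPFL-AGN schedule from the paper.

```python
import numpy as np

def adaptive_sigma(round_idx, sigma0=0.5, decay=0.05, sigma_min=0.05):
    # Noise shrinks in later rounds, when the model is closer to convergence.
    return max(sigma_min, sigma0 * np.exp(-decay * round_idx))

def perturb(params, sigma, rng):
    # Hide the real parameters before they are uploaded to the server.
    return {k: v + rng.normal(scale=sigma, size=v.shape) for k, v in params.items()}

rng = np.random.default_rng(7)
params = {"W": rng.normal(size=(3, 3)), "b": np.zeros(3)}
for rnd in (0, 10, 50):
    sigma = adaptive_sigma(rnd)
    noisy = perturb(params, sigma, rng)
    distortion = float(np.abs(noisy["W"] - params["W"]).mean())
    print(rnd, round(float(sigma), 3), round(distortion, 3))  # later rounds are perturbed less
```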
基金supported by NSFC(No.61972230)NSFShandong(No.ZR2021LZH006).
Abstract: Exploring open fields with coordinated unmanned vehicles is popular in both academia and industry. One of the most promising applicable approaches is the Internet of Vehicles (IoV). The IoV connects vehicles, road infrastructure, and communication facilities to provide solutions for exploration tasks. However, coordinating the acquisition of information from multiple vehicles may put data privacy at risk. To this end, sharing high-quality experiences instead of raw data has become an urgent demand. This paper employs a Deep Reinforcement Learning (DRL) method to enable IoVs to generate training data with prioritized experiences and states, which supports the IoV in exploring the environment more efficiently. Moreover, a Federated Learning (FL) experience sharing model is established to guarantee the vehicles' privacy. The numerical results show that the proposed method achieves a better successful sharing rate and more stable convergence than the baseline methods. The experiments also suggest that the proposed method can support agents without full information in accomplishing their tasks.
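Prioritized experience sharing can be pictured with a small buffer in which each transition carries a priority (for example, the magnitude of its TD-error) and only the top-priority transitions leave the vehicle instead of raw sensor data. The buffer, priorities, and quantities below are illustrative assumptions, not the paper's DRL or FL design.

```python
import heapq
import numpy as np

class ExperienceBuffer:
    def __init__(self):
        self._heap = []          # entries: (negative priority, insertion counter, experience)
        self._count = 0

    def add(self, experience, priority):
        heapq.heappush(self._heap, (-priority, self._count, experience))
        self._count += 1

    def top_k(self, k):
        # The k most informative transitions, i.e. the ones worth sharing.
        return [exp for _, _, exp in heapq.nsmallest(k, self._heap)]

rng = np.random.default_rng(3)
buf = ExperienceBuffer()
for i in range(100):
    transition = (f"state_{i}", "action", float(rng.normal()), f"state_{i + 1}")
    td_error = abs(float(rng.normal()))          # stand-in for a learned TD-error
    buf.add(transition, priority=td_error)
shared = buf.top_k(5)                            # only these summaries are shared
print(len(shared))
```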
Funding: Supported by the Key Research and Development Program of China (No. 2022YFC3005401), the Key Research and Development Program of Yunnan Province, China (Nos. 202203AA080009, 202202AF080003), the Science and Technology Achievement Transformation Program of Jiangsu Province, China (BA2021002), and the Fundamental Research Funds for the Central Universities (Nos. B220203006, B210203024).
Abstract: Data sharing and privacy protection are made possible by federated learning, which allows continuous sharing of model parameters between several clients and a central server. Multiple reliable and high-quality clients must participate in practical applications for the federated learning global model to be accurate, but because the clients are independent, the central server cannot fully control their behavior. The central server has no way of knowing whether the model parameters provided by each client in a given round are correct, so clients may purposefully or unwittingly submit anomalous data, leading to abnormal behavior such as acting as malicious attackers or defective clients. To reduce the negative consequences, it is crucial to detect these abnormalities quickly and to incentivize clients appropriately. In this paper, we propose a Federated Learning framework for Detecting and Incentivizing Abnormal Clients (FL-DIAC) to achieve efficient and secure federated learning. For the abnormal client detection problem, we build a detector that introduces an autoencoder for anomaly detection and use it to identify anomalies and prevent the involvement of abnormal clients. Before the model parameters are fed to the detector, we apply a Fourier transform-based anomaly data detection method for dimensionality reduction in order to reduce the computational complexity. Additionally, we create a credit score-based incentive structure to encourage clients to participate actively in training. Three training models (CNN, MLP, and ResNet-18) and three datasets (MNIST, Fashion MNIST, and CIFAR-10) are used in the experiments. According to theoretical analysis and experimental findings, FL-DIAC is superior to other federated learning schemes of the same type in terms of effectiveness.
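The shape of the detection pipeline can be sketched compactly: flatten each client's update, reduce it with an FFT (keeping a handful of low-frequency magnitudes), and score how far each reduced vector lies from the cohort. For brevity this sketch replaces the autoencoder with a simple distance-to-median score, and the FFT size, threshold, and data are invented for illustration.

```python
import numpy as np

def fft_reduce(update, keep=16):
    # Dimensionality reduction: keep only the low-frequency magnitudes of the flattened update.
    spectrum = np.fft.rfft(update.ravel())
    return np.abs(spectrum[:keep])

def anomaly_scores(reduced_updates):
    # Distance from the cohort's median reduced vector; larger means more abnormal.
    R = np.stack(reduced_updates)
    center = np.median(R, axis=0)
    return np.linalg.norm(R - center, axis=1)

rng = np.random.default_rng(5)
updates = [rng.normal(scale=1.0, size=(10, 10)) for _ in range(9)]
updates.append(rng.normal(loc=5.0, scale=3.0, size=(10, 10)))   # one abnormal client
scores = anomaly_scores([fft_reduce(u) for u in updates])
flagged = np.where(scores > scores.mean() + 2 * scores.std())[0]
print(flagged)                                  # the abnormal client stands out
```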