Scalability and data privacy are vital for training and deploying large-scale deep learning models. Federated learning trains models on private data by aggregating weights from various devices, and it can take advantage of the device-agnostic environment of web browsers. Nevertheless, relying on a central server in browser-based federated systems can limit scalability and interfere with the training process as the number of clients grows. Additionally, information about the training dataset can potentially be extracted from the distributed weights, reducing the privacy of the local data used for training. In this paper, we investigate the challenges of scalability and data privacy in order to increase the efficiency of distributed model training. We propose a web-federated learning exchange (WebFLex) framework, which aims to improve the decentralization of the federated learning process. WebFLex is also designed to secure distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices. Furthermore, WebFLex uses peer-to-peer interactions and secure weight exchanges via browser-to-browser web real-time communication (WebRTC), effectively eliminating the need for a central server. WebFLex has been evaluated in various setups using the MNIST dataset. Experimental results show WebFLex's ability to improve the scalability of federated learning systems, allowing a smooth increase in the number of participating devices without central data aggregation. In addition, WebFLex can maintain a robust federated learning procedure even when faced with device disconnections and network variability. Finally, it improves data privacy by adding artificial noise, which achieves an appropriate balance between accuracy and privacy preservation.
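The abstract does not specify WebFLex's exact noise mechanism, so the following is only an illustrative sketch: Gaussian noise is added to model weights before a peer-to-peer exchange, with the noise scale `sigma` and the toy weight shapes chosen purely for demonstration.

```python
import numpy as np

def add_artificial_noise(weights, sigma=0.05, rng=None):
    """Perturb each weight tensor with zero-mean Gaussian noise before sharing.

    A larger sigma strengthens privacy but costs accuracy; the default here is
    illustrative, not a value taken from the paper.
    """
    rng = rng or np.random.default_rng()
    return [w + rng.normal(0.0, sigma, size=w.shape) for w in weights]

# Toy two-layer model: these arrays would be serialized and sent over a WebRTC data channel.
local_weights = [np.ones((4, 3)), np.zeros(3)]
weights_to_share = add_artificial_noise(local_weights)
```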
The increasing data pool in the finance sector pushes machine learning (ML) into new complications. Banking data has significant financial implications and is confidential. Combining user data from several organizations for various banking services may result in intrusions and privacy leakages. As a result, this study employs federated learning (FL) using the Flower framework to preserve each organization's privacy while collaborating to build a robust shared global model. However, diverse data distributions in the collaborative training process might result in inadequate model learning and a lack of privacy. To address this issue, the present paper implements the Federated Averaging (FedAvg) and Federated Proximal (FedProx) methods in the Flower framework, which take advantage of data locality during training while guaranteeing global convergence, thereby improving the privacy of the local models. The analysis uses the credit card and Canadian Institute for Cybersecurity Intrusion Detection Evaluation (CICIDS) datasets, with precision, recall, and accuracy as performance indicators to show the efficacy of the proposed strategy with FedAvg and FedProx. The experimental findings suggest that the proposed approach helps to safely use banking data from diverse sources to enhance customer banking services, obtaining accuracies of 99.55% and 83.72% for FedAvg and 99.57% and 84.63% for FedProx on the two datasets.
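As a rough sketch of the two methods named above, the snippet below contrasts FedAvg's size-weighted averaging with FedProx's proximal local step; the flattened weight vectors, learning rate, and proximal coefficient `mu` are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client weights, weighted by local dataset size."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def fedprox_local_step(w_local, grad, w_global, lr=0.01, mu=0.1):
    """One FedProx local step: ordinary gradient descent plus a proximal term
    mu * (w_local - w_global) that keeps the local model near the global one."""
    return w_local - lr * (grad + mu * (w_local - w_global))

# Three clients holding flattened model weights of a shared shape.
clients = [np.array([1.0, 2.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]
w_global = fedavg(clients, sizes)
```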
As the scale of federated learning expands, solving the non-IID data problem of federated learning has become a key challenge. Most existing solutions aim to improve the overall performance of all clients; however, this overall improvement often sacrifices the performance of certain clients, such as clients with less data. Ignoring fairness may greatly reduce the willingness of some clients to participate in federated learning. To solve this problem, the authors propose Ada-FFL, an adaptive fairness federated aggregation learning algorithm, which can dynamically adjust the fairness coefficient according to the updates of the local models, ensuring both the convergence performance of the global model and fairness between federated learning clients. By integrating coarse-grained and fine-grained equity solutions, the authors evaluate the deviation of local models by considering both global equity and individual equity; the weight ratio is then dynamically allocated for each client based on the evaluated deviation, which ensures that the update differences of local models are fully considered in each round of training. Finally, by adding a regularisation term that keeps the local model update closer to the global model, the sensitivity of the model to input perturbations is reduced and the generalisation ability of the global model is improved. Through extensive experiments on several federated datasets, the authors show that the method outperforms existing baselines in both convergence and fairness.
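The abstract does not give Ada-FFL's exact update rule, so the sketch below only illustrates the general idea of weighting clients by how far their local models deviate from the global model, with the fairness coefficient `alpha` as an assumed knob rather than the paper's formulation.

```python
import numpy as np

def deviation_based_weights(local_models, global_model, alpha=1.0):
    """Give clients whose updates deviate more from the global model a larger
    aggregation weight, scaled by a fairness coefficient alpha.
    This is an illustrative stand-in for Ada-FFL's adaptive rule."""
    deviations = np.array([np.linalg.norm(w - global_model) for w in local_models])
    scores = deviations ** alpha
    return scores / scores.sum()

local = [np.array([1.0, 1.0]), np.array([3.0, 0.0]), np.array([0.5, 0.9])]
global_w = np.array([1.0, 1.0])
weights = deviation_based_weights(local, global_w, alpha=1.0)
new_global = sum(p * w for p, w in zip(weights, local))
```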
The past decades have witnessed a wide application of federated learning in crowd sensing, to handle the numerous data collected by sensors and provide users with precise and customized services. Meanwhile, how to protect the private information of users in federated learning has become an important research topic. Compared with the differential privacy (DP) technique and the secure multiparty computation (SMC) strategy, the covert communication mechanism in federated learning is more efficient and energy-saving when training machine learning models. In this paper, we study the covert communication problem for federated learning in crowd-sensing Internet-of-Things networks. Different from previous works on covert communication in federated learning, most of which consider a centralized framework and are experimental, we first propose a centralized covert communication mechanism for federated learning among n learning agents, whose time complexity is O(log n), approximating the optimal solution. Secondly, for federated learning without a parameter server, which is a harder case, we show that the problem is NP-hard and prove the existence of a distributed covert communication mechanism that runs in O(log log Δ log n) time, approximating the optimal solution, where Δ is the maximum distance between any pair of learning agents. Theoretical analysis and numerical simulations are presented to show the performance of our covert communication mechanisms. We hope that this work can shed some light on how to protect the privacy of federated learning in crowd sensing from the perspective of communications.
Federated learning ensures data privacy and security by sharing models among multiple computing nodes instead of plaintext data. However, there is still a potential risk of privacy leakage; for example, attackers can obtain the original data through model inference attacks. Therefore, safeguarding the privacy of model parameters becomes crucial. One proposed solution is to incorporate homomorphic encryption algorithms into the federated learning process. However, existing federated learning privacy-protection schemes based on homomorphic encryption suffer greatly reduced efficiency and robustness when there are performance differences between parties or abnormal nodes. To solve these problems, this paper proposes a privacy-protection scheme named Federated Learning-Elastic Averaging Stochastic Gradient Descent (FL-EASGD) based on a fully homomorphic encryption algorithm. First, the homomorphic encryption algorithm is introduced into the FL-EASGD scheme to prevent model plaintext leakage and realize privacy security during model aggregation. Second, a robust model aggregation algorithm is designed by adding time variables and constraint coefficients, which ensures the accuracy of model prediction while tolerating performance differences such as computation speed and node anomalies such as participant downtime. In addition, the scheme preserves each party's independent exploration of its local model, making the model more applicable to the local data distribution. Finally, experimental analysis shows that when participants behave abnormally, the efficiency and accuracy of the whole protocol are not significantly affected.
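FL-EASGD builds on elastic averaging SGD; since the abstract does not spell out the update rule, the following is a generic EASGD-style sketch in which each party's model is pulled toward a shared center model by an elastic coefficient `rho`, with the encryption layer omitted and all constants chosen for illustration.

```python
import numpy as np

def easgd_round(local_models, grads, center, lr=0.05, rho=0.1):
    """One elastic-averaging round (plaintext sketch; FL-EASGD would perform the
    aggregation on homomorphically encrypted parameters instead)."""
    new_locals = []
    for w, g in zip(local_models, grads):
        elastic = rho * (w - center)            # pull each party toward the center model
        new_locals.append(w - lr * (g + elastic))
    # The center model moves toward the average of the local models.
    center = center + lr * rho * sum(w - center for w in local_models)
    return new_locals, center
```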
As an emerging joint learning model, federated learning is a promising way to combine the model parameters of different users for training and inference without collecting users' original data. However, a practical and efficient solution has not been established in previous work due to the absence of efficient matrix computation and cryptography schemes in privacy-preserving federated learning, especially for partially homomorphic cryptosystems. In this paper, we propose a Practical and Efficient Privacy-preserving Federated Learning (PEPFL) framework. First, we present a lifted distributed ElGamal cryptosystem for federated learning, which can solve the multi-key problem in federated learning. Secondly, we develop a Practical Partially Single Instruction Multiple Data (PSIMD) parallelism scheme that can encode a plaintext matrix into a single plaintext for encryption, improving encryption efficiency and reducing communication cost in the partially homomorphic cryptosystem. In addition, based on a Convolutional Neural Network (CNN) and the designed cryptosystem, a novel privacy-preserving federated learning framework is designed using Momentum Gradient Descent (MGD). Finally, we evaluate the security and performance of PEPFL. The experimental results demonstrate that the scheme is practical, effective, and secure, with low communication and computation costs.
For the spectrum resource allocation problem in 5G New Radio-Vehicle to Everything (NR-V2X) scenarios, where Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V) links share the uplink, a Federated Learning-Multi-Agent Deep Q Network (FL-MADQN) algorithm is proposed. In this distributed algorithm, each vehicle user acts as an agent that uses the DQN algorithm to train a local network model from its locally acquired channel state information, with the network channel capacity as the objective function. Federated learning is adopted to accelerate and stabilize the convergence of each agent's model training: the agents' local models are uploaded to the base station and aggregated into a global model, which is then distributed back to the agents to update their local models. Simulation results show that, compared with the traditional distributed multi-agent DQN algorithm, the proposed scheme converges faster and still guarantees the communication efficiency of the V2V links and the channel capacity of the V2I links as the number of vehicle users increases.
With the arrival of 5G, latency-sensitive applications are becoming increasingly diverse. Mobile Edge Computing (MEC) technology has the characteristics of high bandwidth, low latency, and low energy consumption, and has attracted much attention among researchers. To improve the Quality of Service (QoS), this study focuses on computation offloading in MEC. We consider QoS from the perspectives of computational cost, the curse of dimensionality, user privacy, and catastrophic forgetting for new users. The QoS model is established based on delay and energy consumption, and the offloading policy is built on DDQN and a Federated Learning (FL) adaptive task offloading algorithm in MEC. The proposed algorithm combines the QoS model and a deep reinforcement learning algorithm to obtain an optimal offloading policy according to the local link and node state information within the channel coherence time, addressing the problem of time-varying transmission channels and reducing computing energy consumption and task processing delay. To solve the problems of privacy and catastrophic forgetting, we use FL to make distributed use of multiple users' data to obtain the decision model, protect data privacy, and improve model universality. During FL iterations, the communication delay of individual devices can be too large, which affects the overall delay cost. Therefore, we adopt a communication delay optimization algorithm based on a unary outlier detection mechanism to reduce the communication delay of FL. The simulation results indicate that, compared with existing schemes, the proposed method significantly reduces the computation cost on a device and improves the QoS when handling complex tasks.
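As a point of reference for the DDQN component named above, the sketch below shows the standard double-DQN target, in which the online network selects the next action while the target network evaluates it; the action set, reward, and discount factor are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def double_dqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    """Double-DQN target: the online network picks the next action,
    the target network supplies that action's value."""
    if done:
        return reward
    best_action = int(np.argmax(q_online_next))   # action chosen by the online net
    return reward + gamma * q_target_next[best_action]

# Example with three candidate offloading actions.
target = double_dqn_target(reward=1.0,
                           q_online_next=np.array([0.2, 0.8, 0.5]),
                           q_target_next=np.array([0.3, 0.6, 0.4]))
```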
Federated learning (FL), which allows multiple mobile devices to cooperatively train a machine learning model without sharing their data with the central server, has received widespread attention. However, the process of FL involves frequent communication between the server and mobile devices, which incurs a long latency. The intelligent reflecting surface (IRS) provides a promising technology to address this issue, thanks to its capacity to reconfigure the wireless propagation environment. In this paper, we exploit the advantage of IRS to reduce the latency of FL. Specifically, we formulate a latency minimization problem for the IRS-assisted FL system by optimizing the communication resource allocations, including the devices' transmit powers, the uploading time, the downloading time, the multi-user decomposition matrix, and the phase-shift matrix of the IRS. To solve this non-convex problem, we propose an efficient algorithm based on Block Coordinate Descent (BCD) and the penalty difference-of-convex (DC) algorithm to compute the solution. Numerical results are provided to validate the efficiency of our proposed algorithm and demonstrate the benefit of deploying an IRS for reducing the latency of FL. In particular, the results show that our algorithm can outperform the Majorization-Minimization (MM) baseline with fixed transmit power by up to 30%.
Federated learning for edge computing is a promising solution in the data-booming era, which leverages the computation ability of each edge device to train local models and only shares the model gradients with the central server. However, the frequently transmitted local gradients could also leak the participants' private data. To protect the privacy of local training data, many cryptographic Privacy-Preserving Federated Learning (PPFL) schemes have been proposed. However, due to the resource-constrained nature of mobile devices and complex cryptographic operations, traditional PPFL schemes fail to provide efficient data confidentiality and lightweight integrity verification simultaneously. To tackle this problem, we propose a Verifiable Privacy-preserving Federated Learning scheme (VPFL) for edge computing systems to prevent local gradients from leaking during the transmission stage. Firstly, we combine the Distributed Selective Stochastic Gradient Descent (DSSGD) method with the Paillier homomorphic cryptosystem to achieve distributed encryption functionality and reduce the computation cost of the complex cryptosystem. Secondly, we present an online/offline signature method to realize lightweight gradient integrity verification, where the offline part can be securely outsourced to the edge server. Comprehensive security analysis demonstrates that the proposed VPFL can achieve data confidentiality, authentication, and integrity. Finally, we evaluate both the communication overhead and the computation cost of the proposed VPFL scheme; the experimental results show that VPFL has low computation costs and communication overheads while maintaining high training accuracy.
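VPFL's full construction (DSSGD gradient selection plus the online/offline signatures) goes beyond an abstract-level sketch, but the additively homomorphic aggregation at its core can be illustrated with the python-paillier (`phe`) package; the gradient values and key length below are arbitrary examples, not the paper's parameters.

```python
# Requires the python-paillier package:  pip install phe
from functools import reduce
from operator import add
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its (flattened) gradient entries under the shared public key.
client_grads = [[0.10, -0.20], [0.05, 0.15], [-0.01, 0.02]]
encrypted = [[public_key.encrypt(g) for g in grads] for grads in client_grads]

# The aggregator adds ciphertexts element-wise without ever seeing a plaintext gradient.
encrypted_sum = [reduce(add, column) for column in zip(*encrypted)]

# Only a holder of the private key can recover the aggregated gradient.
aggregated = [private_key.decrypt(c) for c in encrypted_sum]
print(aggregated)   # roughly [0.14, -0.03], up to floating-point rounding
```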
A cataract is one of the most significant eye problems worldwide; it does not immediately impair vision but progressively worsens over time. Automatic cataract prediction based on various imaging technologies has been addressed recently, for example in smartphone apps used for remote health monitoring and eye treatment. In recent years, advances in diagnosis, prediction, and clinical decision support using Artificial Intelligence (AI) in medicine and ophthalmology have been exponential. Due to privacy concerns, a lack of data makes applying artificial intelligence models in the medical field challenging. To address this issue, a federated learning framework named CDFL, based on a VGG16 deep neural network model, is proposed in this research. The study collects data from the Ocular Disease Intelligent Recognition (ODIR) database containing 5,000 patient records. The significant features are extracted and normalized using the min-max normalization technique. In the federated learning-based technique, the VGG16 model is trained on the dataset individually after receiving model updates from two clients. Before transferring the attributes to the global model, the suggested method trains the local model; the global model then improves after integrating the new parameters. Every client analyses the results over three rounds to reduce the over-fitting problem. The experimental results show the effectiveness of the federated learning-based technique on a Deep Neural Network (DNN), reaching 95.28% accuracy while also preserving the privacy of patient data. The experiment demonstrated that the suggested federated learning model outperforms other traditional methods, achieving an accuracy of 95.0% for client 1 and 96.0% for client 2.
With the increasing number of smart devices and the development of machine learning technology, the value of users' personal data is becoming more and more important. On the premise of protecting users' personal privacy, federated learning (FL) uses data stored on edge devices to realize training tasks by contributing trained model parameters without revealing the original data. However, FL can still leak a user's original data through the exchanged gradient information, and existing privacy-protection strategies increase the uplink time due to encryption measures, which is a huge challenge in terms of communication; when there are a large number of devices, the privacy-protection cost of the system is even higher. To address these issues, we propose a privacy-preserving scheme of user-based group collaborative federated learning (GrCol-PPFL). Our scheme divides participants into several groups, and each group communicates through a chained transmission mechanism; all groups work in parallel. The server distributes to each participant a random parameter with the same dimension as the model parameter to serve as a mask for the model parameters. We use the public Modified National Institute of Standards and Technology (MNIST) dataset to test model accuracy. The experimental results show that GrCol-PPFL not only ensures the accuracy of the model but also ensures the security of users' original data even when users collude with each other. Finally, through numerical experiments, we show that by changing the number of groups, we can find the optimal number of groups that minimizes the uplink consumption time.
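The chained, masked transmission described above can be pictured with the toy sketch below, in which each group member adds its parameters plus a server-issued mask to a running sum and the server strips the masks afterwards; the group size, parameter dimension, and mask distribution are assumptions chosen for illustration, not GrCol-PPFL's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
group = [rng.normal(size=dim) for _ in range(3)]        # each member's local model parameters
masks = [rng.normal(size=dim) for _ in group]           # server-issued per-member masks

# Chained transmission: each member adds (params + mask) to the running sum and
# forwards it, so no single member or eavesdropper sees another's raw parameters.
running = np.zeros(dim)
for params, mask in zip(group, masks):
    running = running + params + mask

# The server knows the masks it issued and removes them before averaging.
group_sum = running - sum(masks)
group_average = group_sum / len(group)
assert np.allclose(group_average, np.mean(group, axis=0))
```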
Federated learning (FL) has developed rapidly in recent years as a privacy-preserving machine learning method, and it has gradually been applied to key areas involving privacy and security such as finance, medical care, and government affairs. However, current FL solutions rarely consider the problem of migrating from centralized learning to federated learning, resulting in a high practical threshold for federated learning and low usability. Therefore, we introduce a reliable, efficient, and easy-to-use federated learning framework named Neursafe-FL. Based on a unified application program interface (API), the framework is not only compatible with mainstream machine learning frameworks, such as TensorFlow and PyTorch, but also supports further extensions, preserving the programming style of the original framework to lower the threshold of FL. At the same time, the componentized, modularized, and standardized-interface design makes the framework highly extensible, which meets the needs of customized requirements and future FL evolution. Neursafe-FL is already available on GitHub as an open-source project.
Benefiting from the development of Federated Learning (FL) and distributed communication systems, large-scale intelligent applications have become possible. Distributed devices not only provide adequate training data but also introduce privacy leakage and energy consumption. How to optimize energy consumption in distributed communication systems while ensuring user privacy and model accuracy has become an urgent challenge. In this paper, we define FL as a three-layer architecture including users, agents, and a server. In order to find a balance among model training accuracy, privacy-preserving effect, and energy consumption, we model the training process of FL as a game. We use an extensive-form game tree to analyze the key elements that influence the players' decisions in a single game, and then find the incentive mechanism that meets social norms through the repeated game. The experimental results show that the Nash equilibrium we obtain is consistent with real-world behaviour, and the proposed incentive mechanism can also encourage users to submit high-quality data in FL. Over multiple rounds of play, the incentive mechanism can help all players find the optimal strategies for energy, privacy, and accuracy of FL in distributed communication systems.
Although Federated Deep Learning (FDL) enables distributed machine learning in the Internet of Vehicles (IoV), it requires multiple clients to upload model parameters, and thus still incurs unavoidable communication overhead and data privacy risks. The recently proposed Swarm Learning (SL) provides a decentralized machine learning approach for unit edge computing and blockchain-based coordination. A Swarm-Federated Deep Learning framework for the IoV system (IoV-SFDL) that integrates SL into the FDL framework is proposed in this paper. The IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on blockchain-empowered SL, and then aggregates the global FDL model among different SL groups with a credibility-weights prediction algorithm. Extensive experimental results show that, compared with baseline frameworks, the proposed IoV-SFDL framework reduces the overhead of client-to-server communication by 16.72%, while the model performance improves by about 5.02% for the same number of training iterations.
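The abstract names a credibility-weights prediction algorithm without defining it; as a hedged illustration only, the sketch below weights each SL group's model by a softmax over negative validation loss, which is an assumed stand-in for the paper's predictor rather than its actual method.

```python
import numpy as np

def credibility_weighted_aggregate(group_models, group_losses, temperature=1.0):
    """Aggregate group models with softmax weights over negative validation loss,
    so lower-loss (more credible) groups contribute more. The scoring rule here is
    an illustrative assumption, not IoV-SFDL's exact credibility predictor."""
    scores = -np.asarray(group_losses) / temperature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, group_models))

models = [np.array([1.0, 0.0]), np.array([0.8, 0.2]), np.array([2.0, -1.0])]
losses = [0.3, 0.35, 1.2]   # the last group looks less reliable
global_model = credibility_weighted_aggregate(models, losses)
```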
High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to ensure high efficiency for local data learning models while preventing privacy leakage in a high-mobility environment. In order to protect data privacy and improve data learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as the local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. Aiming to improve resource scheduling efficiency in FBL, a double Davidon-Fletcher-Powell (DDFP) algorithm is presented to solve the time-slot allocation and RIS configuration problem. Based on the results of resource scheduling, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. The simulation results show that the proposed FBL framework achieves better performance than the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
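Broad learning systems typically train only the output weights of a wide, flat network in closed form; the sketch below shows that standard ridge-regression solve, which is the ingredient a broad fully connected model would rely on. The feature-matrix sizes are placeholders, and the paper's BFCM details are not reproduced here.

```python
import numpy as np

def broad_output_weights(A, Y, lam=1e-3):
    """Closed-form ridge solution W = (A^T A + lam I)^(-1) A^T Y,
    where A stacks the mapped-feature and enhancement-node outputs."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ Y)

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))          # mapped + enhancement node outputs (placeholder data)
Y = rng.normal(size=(200, 3))           # target matrix (placeholder data)
W = broad_output_weights(A, Y)
predictions = A @ W
```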
The application of artificial intelligence technology in the Internet of Vehicles (IoV) has attracted great research interest, with the goal of enabling smart transportation and traffic management. Meanwhile, concerns have been raised over the security and privacy of the large volumes of traffic and vehicle data. In this regard, Federated Learning (FL), with its privacy-protection features, is considered a highly promising solution. However, in the FL process, the server side may take advantage of its dominant role in model aggregation to steal sensitive information from users, while the client side may upload malicious data to compromise the training of the global model. Most existing privacy-preserving FL schemes in the IoV fail to deal with threats from both of these sides at the same time. In this paper, we propose a blockchain-based privacy-preserving federated learning scheme named BPFL, which uses blockchain as the underlying distributed framework of FL. We improve the Multi-Krum technique and combine it with homomorphic encryption to achieve ciphertext-level model aggregation and model filtering, which enables the verifiability of the local models while achieving privacy preservation. Additionally, we develop a reputation-based incentive mechanism to encourage users in the IoV to actively participate in federated learning and to practice honesty. The security analysis and performance evaluations show that the proposed scheme can meet the security requirements and improve the performance of the FL model.
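Multi-Krum is a published Byzantine-robust aggregation rule; the plaintext sketch below scores each update by the sum of squared distances to its nearest neighbours and keeps the lowest-scoring ones. BPFL performs this filtering on homomorphically encrypted models, which is not shown, and the toy updates and parameters are illustrative.

```python
import numpy as np

def multi_krum(updates, f, m):
    """Select the m updates with the lowest Krum scores.
    Score(i) = sum of squared distances to its n - f - 2 nearest neighbours.
    (Plaintext sketch; BPFL applies the idea at the ciphertext level.)"""
    n = len(updates)
    assert n - f - 2 > 0, "need n > f + 2"
    U = np.stack(updates)
    dists = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2) ** 2
    scores = []
    for i in range(n):
        d = np.delete(dists[i], i)               # distances to the other updates
        scores.append(np.sort(d)[: n - f - 2].sum())
    selected = np.argsort(scores)[:m]
    return selected, U[selected].mean(axis=0)

updates = [np.array([0.1, 0.1]), np.array([0.12, 0.09]),
           np.array([0.11, 0.1]), np.array([5.0, -4.0])]   # the last update looks malicious
idx, aggregated = multi_krum(updates, f=1, m=2)
```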
Federated Learning (FL), a burgeoning technology, has received increasing attention due to its privacy-protection capability. However, the base algorithm FedAvg is vulnerable to so-called backdoor attacks. Earlier researchers proposed several robust aggregation methods; unfortunately, due to the hidden characteristics of backdoor attacks, many of these aggregation methods are unable to defend against them. Moreover, attackers have recently proposed hiding methods that further improve the stealthiness of backdoor attacks, making all the existing robust aggregation methods fail. To tackle the threat of backdoor attacks, we propose a new aggregation method, X-raying Models with A Matrix (XMAM), to reveal the malicious local model updates submitted by backdoor attackers. Since we observe that the output of the Softmax layer exhibits distinguishable patterns between malicious and benign updates, unlike existing aggregation algorithms we focus on the Softmax layer's output, in which backdoor attackers find it difficult to hide their malicious behavior. Specifically, like a medical X-ray examination, we investigate the collected local model updates by using a matrix as an input to obtain their Softmax-layer outputs. Then, we exclude updates whose outputs are abnormal by clustering. Without any training dataset on the server, extensive evaluations show that XMAM can effectively distinguish malicious local model updates from benign ones. For instance, when other methods fail to defend against backdoor attacks at no more than 20% malicious clients, our method can tolerate 45% malicious clients in the black-box mode and about 30% in Projected Gradient Descent (PGD) mode. Besides, under adaptive attacks, the results demonstrate that XMAM can still complete the global model training task even when there are 40% malicious clients. Finally, we analyze our method's screening complexity and compare the real screening time with other methods; the results show that XMAM is about 10 to 10,000 times faster than existing methods.
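As a toy illustration of the Softmax-layer probing idea (not the authors' implementation), the sketch below replaces full model updates with single weight matrices, feeds every one the same probe matrix, and clusters the resulting Softmax outputs to keep only the majority cluster; scikit-learn's KMeans is used for the clustering step, and all shapes and distributions are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def xmam_style_filter(local_weights, probe):
    """Probe every submitted model with the same matrix, cluster the Softmax
    outputs, and keep only the majority cluster. Real updates would be full
    network forward passes, not a single matrix product."""
    outputs = np.stack([softmax(probe @ W).ravel() for W in local_weights])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(outputs)
    majority = np.bincount(labels).argmax()
    return [i for i, lab in enumerate(labels) if lab == majority]

rng = np.random.default_rng(2)
probe = rng.normal(size=(8, 16))                       # shared probe matrix
benign = [rng.normal(scale=0.1, size=(16, 10)) for _ in range(4)]
suspicious = [rng.normal(loc=3.0, size=(16, 10))]      # a poisoned-looking update
kept_indices = xmam_style_filter(benign + suspicious, probe)
```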
In vehicle edge computing (VEC), asynchronous federated learning (AFL) is used, in which the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Because of the vehicles' different amounts of local data, computing capabilities, and locations, renewing the global model with the same weight for every vehicle is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also be subject to Byzantine attacks, which corrupt its data. However, based on deep reinforcement learning (DRL), we can consider these factors comprehensively to eliminate vehicles with poor performance as much as possible and exclude vehicles that have suffered Byzantine attacks before AFL. At the same time, when performing AFL aggregation, we can focus on the vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a DRL-based vehicle selection scheme for VEC. The scheme takes into account each vehicle's mobility, time-varying channel conditions, time-varying computational resources, different data amounts, transmission channel status, and exposure to Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
基金This work has been funded by King Saud University,Riyadh,Saudi Arabia,through Researchers Supporting Project Number(RSPD2024R857).
文摘Scalability and information personal privacy are vital for training and deploying large-scale deep learning models.Federated learning trains models on exclusive information by aggregating weights from various devices and taking advantage of the device-agnostic environment of web browsers.Nevertheless,relying on a main central server for internet browser-based federated systems can prohibit scalability and interfere with the training process as a result of growing client numbers.Additionally,information relating to the training dataset can possibly be extracted from the distributed weights,potentially reducing the privacy of the local data used for training.In this research paper,we aim to investigate the challenges of scalability and data privacy to increase the efficiency of distributed training models.As a result,we propose a web-federated learning exchange(WebFLex)framework,which intends to improve the decentralization of the federated learning process.WebFLex is additionally developed to secure distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices.Furthermore,WebFLex utilizes peer-to-peer interactions and secure weight exchanges utilizing browser-to-browser web real-time communication(WebRTC),efficiently preventing the need for a main central server.WebFLex has actually been measured in various setups using the MNIST dataset.Experimental results show WebFLex’s ability to improve the scalability of federated learning systems,allowing a smooth increase in the number of participating devices without central data aggregation.In addition,WebFLex can maintain a durable federated learning procedure even when faced with device disconnections and network variability.Additionally,it improves data privacy by utilizing artificial noise,which accomplishes an appropriate balance between accuracy and privacy preservation.
文摘The increasing data pool in finance sectors forces machine learning(ML)to step into new complications.Banking data has significant financial implications and is confidential.Combining users data from several organizations for various banking services may result in various intrusions and privacy leakages.As a result,this study employs federated learning(FL)using a flower paradigm to preserve each organization’s privacy while collaborating to build a robust shared global model.However,diverse data distributions in the collaborative training process might result in inadequate model learning and a lack of privacy.To address this issue,the present paper proposes the imple-mentation of Federated Averaging(FedAvg)and Federated Proximal(FedProx)methods in the flower framework,which take advantage of the data locality while training and guaranteeing global convergence.Resultantly improves the privacy of the local models.This analysis used the credit card and Canadian Institute for Cybersecurity Intrusion Detection Evaluation(CICIDS)datasets.Precision,recall,and accuracy as performance indicators to show the efficacy of the proposed strategy using FedAvg and FedProx.The experimental findings suggest that the proposed approach helps to safely use banking data from diverse sources to enhance customer banking services by obtaining accuracy of 99.55%and 83.72%for FedAvg and 99.57%,and 84.63%for FedProx.
基金National Natural Science Foundation of China,Grant/Award Number:62272114Joint Research Fund of Guangzhou and University,Grant/Award Number:202201020380+3 种基金Guangdong Higher Education Innovation Group,Grant/Award Number:2020KCXTD007Pearl River Scholars Funding Program of Guangdong Universities(2019)National Key R&D Program of China,Grant/Award Number:2022ZD0119602Major Key Project of PCL,Grant/Award Number:PCL2022A03。
文摘As the scale of federated learning expands,solving the Non-IID data problem of federated learning has become a key challenge of interest.Most existing solutions generally aim to solve the overall performance improvement of all clients;however,the overall performance improvement often sacrifices the performance of certain clients,such as clients with less data.Ignoring fairness may greatly reduce the willingness of some clients to participate in federated learning.In order to solve the above problem,the authors propose Ada-FFL,an adaptive fairness federated aggregation learning algorithm,which can dynamically adjust the fairness coefficient according to the update of the local models,ensuring the convergence performance of the global model and the fairness between federated learning clients.By integrating coarse-grained and fine-grained equity solutions,the authors evaluate the deviation of local models by considering both global equity and individual equity,then the weight ratio will be dynamically allocated for each client based on the evaluated deviation value,which can ensure that the update differences of local models are fully considered in each round of training.Finally,by combining a regularisation term to limit the local model update to be closer to the global model,the sensitivity of the model to input perturbations can be reduced,and the generalisation ability of the global model can be improved.Through numerous experiments on several federal data sets,the authors show that our method has more advantages in convergence effect and fairness than the existing baselines.
基金supported in part by the National Key Research and Development Program of China under Grant 2020YFB1005900the National Natural Science Foundation of China(NSFC)under Grant 62102232,62122042,61971269Natural Science Foundation of Shandong province under Grant ZR2021QF064.
文摘The past decades have witnessed a wide application of federated learning in crowd sensing,to handle the numerous data collected by the sensors and provide the users with precise and customized services.Meanwhile,how to protect the private information of users in federated learning has become an important research topic.Compared with the differential privacy(DP)technique and secure multiparty computation(SMC)strategy,the covert communication mechanism in federated learning is more efficient and energy-saving in training the ma-chine learning models.In this paper,we study the covert communication problem for federated learning in crowd sensing Internet-of-Things networks.Different from the previous works about covert communication in federated learning,most of which are considered in a centralized framework and experimental-based,we firstly proposes a centralized covert communication mechanism for federated learning among n learning agents,the time complexity of which is O(log n),approximating to the optimal solution.Secondly,for the federated learning without parameter server,which is a harder case,we show that solving such a problem is NP-hard and prove the existence of a distributed covert communication mechanism with O(log logΔlog n)times,approximating to the optimal solution.Δis the maximum distance between any pair of learning agents.Theoretical analysis and nu-merical simulations are presented to show the performance of our covert communication mechanisms.We hope that our covert communication work can shed some light on how to protect the privacy of federated learning in crowd sensing from the view of communications.
文摘Federated learning ensures data privacy and security by sharing models among multiple computing nodes instead of plaintext data.However,there is still a potential risk of privacy leakage,for example,attackers can obtain the original data through model inference attacks.Therefore,safeguarding the privacy of model parameters becomes crucial.One proposed solution involves incorporating homomorphic encryption algorithms into the federated learning process.However,the existing federated learning privacy protection scheme based on homomorphic encryption will greatly reduce the efficiency and robustness when there are performance differences between parties or abnormal nodes.To solve the above problems,this paper proposes a privacy protection scheme named Federated Learning-Elastic Averaging Stochastic Gradient Descent(FL-EASGD)based on a fully homomorphic encryption algorithm.First,this paper introduces the homomorphic encryption algorithm into the FL-EASGD scheme to preventmodel plaintext leakage and realize privacy security in the process ofmodel aggregation.Second,this paper designs a robust model aggregation algorithm by adding time variables and constraint coefficients,which ensures the accuracy of model prediction while solving performance differences such as computation speed and node anomalies such as downtime of each participant.In addition,the scheme in this paper preserves the independent exploration of the local model by the nodes of each party,making the model more applicable to the local data distribution.Finally,experimental analysis shows that when there are abnormalities in the participants,the efficiency and accuracy of the whole protocol are not significantly affected.
基金supported by the National Natural Science Foundation of China under Grant No.U19B2021the Key Research and Development Program of Shaanxi under Grant No.2020ZDLGY08-04+1 种基金the Key Technologies R&D Program of He’nan Province under Grant No.212102210084the Innovation Scientists and Technicians Troop Construction Projects of Henan Province.
文摘As an emerging joint learning model,federated learning is a promising way to combine model parameters of different users for training and inference without collecting users’original data.However,a practical and efficient solution has not been established in previous work due to the absence of efficient matrix computation and cryptography schemes in the privacy-preserving federated learning model,especially in partially homomorphic cryptosystems.In this paper,we propose a Practical and Efficient Privacy-preserving Federated Learning(PEPFL)framework.First,we present a lifted distributed ElGamal cryptosystem for federated learning,which can solve the multi-key problem in federated learning.Secondly,we develop a Practical Partially Single Instruction Multiple Data(PSIMD)parallelism scheme that can encode a plaintext matrix into single plaintext for encryption,improving the encryption efficiency and reducing the communication cost in partially homomorphic cryptosystem.In addition,based on the Convolutional Neural Network(CNN)and the designed cryptosystem,a novel privacy-preserving federated learning framework is designed by using Momentum Gradient Descent(MGD).Finally,we evaluate the security and performance of PEPFL.The experiment results demonstrate that the scheme is practicable,effective,and secure with low communication and computation costs.
文摘针对5G新空口-车联网(New Radio-Vehicle to Everything,NR-V2X)场景下车对基础设施(Vehicle to Infrastructure,V2I)和车对车(Vehicle to Vehicle,V2V)共享上行通信链路的频谱资源分配问题,提出了一种联邦-多智能体深度Q网络(Federated Learning-Multi-Agent Deep Q Network,FL-MADQN)算法.该分布式算法中,每个车辆用户作为一个智能体,根据获取的本地信道状态信息,以网络信道容量最佳为目标函数,采用DQN算法训练学习本地网络模型.采用联邦学习加快以及稳定各智能体网络模型训练的收敛速度,即将各智能体的本地模型上传至基站进行聚合形成全局模型,再将全局模型下发至各智能体更新本地模型.仿真结果表明:与传统分布式多智能体DQN算法相比,所提出的方案具有更快的模型收敛速度,并且当车辆用户数增大时仍然保证V2V链路的通信效率以及V2I链路的信道容量.
基金supported by the National Natural Science Foundation of China(62032013,62072094Liaoning Province Science and Technology Fund Project(2020MS086)+1 种基金Shenyang Science and Technology Plan Project(20206424)the Fundamental Research Funds for the Central Universities(N2116014,N180101028)CERNET Innovation Project(NGII20190504).
文摘With the arrival of 5G,latency-sensitive applications are becoming increasingly diverse.Mobile Edge Computing(MEC)technology has the characteristics of high bandwidth,low latency and low energy consumption,and has attracted much attention among researchers.To improve the Quality of Service(QoS),this study focuses on computation offloading in MEC.We consider the QoS from the perspective of computational cost,dimensional disaster,user privacy and catastrophic forgetting of new users.The QoS model is established based on the delay and energy consumption and is based on DDQN and a Federated Learning(FL)adaptive task offloading algorithm in MEC.The proposed algorithm combines the QoS model and deep reinforcement learning algorithm to obtain an optimal offloading policy according to the local link and node state information in the channel coherence time to address the problem of time-varying transmission channels and reduce the computing energy consumption and task processing delay.To solve the problems of privacy and catastrophic forgetting,we use FL to make distributed use of multiple users’data to obtain the decision model,protect data privacy and improve the model universality.In the process of FL iteration,the communication delay of individual devices is too large,which affects the overall delay cost.Therefore,we adopt a communication delay optimization algorithm based on the unary outlier detection mechanism to reduce the communication delay of FL.The simulation results indicate that compared with existing schemes,the proposed method significantly reduces the computation cost on a device and improves the QoS when handling complex tasks.
基金supported in part by National Natural Science Foundation of China under Grants 62122069, 62072490, 62071431, and 61871271in part by Science and Technology Development Fund of Macao SAR under Grants 0060/2019/A1 and 0162/2019/A3+5 种基金in part by FDCT-MOST Joint Project under Grant 0066/2019/AMJin part by the Intergovernmental International Cooperation in Science and Technology Innovation Program under Grant 2019YFE0111600in part by FDCT SKL-IOTSC(UM)-2021-2023in part by Zhejiang Provincial Natural Science Foundation of China under Grant LR17F010002in part by the Shenzhen Science and Technology Program under Projects JCYJ20210324093011030 and JCYJ20190808120415286in part by Research Grant of University of Macao under Grants MYRG2020-00107-IOTSC and SRG201900168-IOTSC。
文摘Federated learning(FL), which allows multiple mobile devices to cooperatively train a machine learning model without sharing their data with the central server, has received widespread attention.However, the process of FL involves frequent communications between the server and mobile devices,which incurs a long latency. Intelligent reflecting surface(IRS) provides a promising technology to address this issue, thanks to its capacity to reconfigure the wireless propagation environment. In this paper, we exploit the advantage of IRS to reduce the latency of FL. Specifically, we formulate a latency minimization problem for the IRS assisted FL system, by optimizing the communication resource allocations including the devices’ transmit-powers, the uploading time, the downloading time, the multi-user decomposition matrix and the phase shift matrix of IRS. To solve this non-convex problem, we propose an efficient algorithm which is based on the Block Coordinate Descent(BCD) and the penalty difference of convex(DC) algorithm to compute the solution. Numerical results are provided to validate the efficiency of our proposed algorithm and demonstrate the benefit of deploying IRS for reducing the latency of FL. In particular, the results show that our algorithm can outperform the baseline of Majorization-Minimization(MM) algorithm with the fixed transmit-power by up to 30%.
基金supported by the National Natural Science Foundation of China(No.62206238)the Natural Science Foundation of Jiangsu Province(Grant No.BK20220562)the Natural Science Research Project of Universities in Jiangsu Province(No.22KJB520010).
文摘Federated learning for edge computing is a promising solution in the data booming era,which leverages the computation ability of each edge device to train local models and only shares the model gradients to the central server.However,the frequently transmitted local gradients could also leak the participants’private data.To protect the privacy of local training data,lots of cryptographic-based Privacy-Preserving Federated Learning(PPFL)schemes have been proposed.However,due to the constrained resource nature of mobile devices and complex cryptographic operations,traditional PPFL schemes fail to provide efficient data confidentiality and lightweight integrity verification simultaneously.To tackle this problem,we propose a Verifiable Privacypreserving Federated Learning scheme(VPFL)for edge computing systems to prevent local gradients from leaking over the transmission stage.Firstly,we combine the Distributed Selective Stochastic Gradient Descent(DSSGD)method with Paillier homomorphic cryptosystem to achieve the distributed encryption functionality,so as to reduce the computation cost of the complex cryptosystem.Secondly,we further present an online/offline signature method to realize the lightweight gradients integrity verification,where the offline part can be securely outsourced to the edge server.Comprehensive security analysis demonstrates the proposed VPFL can achieve data confidentiality,authentication,and integrity.At last,we evaluate both communication overhead and computation cost of the proposed VPFL scheme,the experimental results have shown VPFL has low computation costs and communication overheads while maintaining high training accuracy.
基金Deputyship for Research&Innovation,Ministry of Education in Saudi Arabia,for funding this research work through Project Number 959.
文摘A cataract is one of the most significant eye problems worldwide that does not immediately impair vision and progressively worsens over time.Automatic cataract prediction based on various imaging technologies has been addressed recently,such as smartphone apps used for remote health monitoring and eye treatment.In recent years,advances in diagnosis,prediction,and clinical decision support using Artificial Intelligence(AI)in medicine and ophthalmology have been exponential.Due to privacy concerns,a lack of data makes applying artificial intelligence models in the medical field challenging.To address this issue,a federated learning framework named CDFL based on a VGG16 deep neural network model is proposed in this research.The study collects data from the Ocular Disease Intelligent Recognition(ODIR)database containing 5,000 patient records.The significant features are extracted and normalized using the min-max normalization technique.In the federated learning-based technique,the VGG16 model is trained on the dataset individually after receiving model updates from two clients.Before transferring the attributes to the global model,the suggested method trains the local model.The global model subsequently improves the technique after integrating the new parameters.Every client analyses the results in three rounds to decrease the over-fitting problem.The experimental result shows the effectiveness of the federated learning-based technique on a Deep Neural Network(DNN),reaching a 95.28%accuracy while also providing privacy to the patient’s data.The experiment demonstrated that the suggested federated learning model outperforms other traditional methods,achieving client 1 accuracy of 95.0%and client 2 accuracy of 96.0%.
基金supported by the Major science and technology project of Hainan Province(Grant No.ZDKJ2020012)National Natural Science Foundation of China(Grant No.62162024 and 62162022)Key Projects in Hainan Province(Grant ZDYF2021GXJS003 and Grant ZDYF2020040).
文摘With the increasing number of smart devices and the development of machine learning technology,the value of users’personal data is becoming more and more important.Based on the premise of protecting users’personal privacy data,federated learning(FL)uses data stored on edge devices to realize training tasks by contributing training model parameters without revealing the original data.However,since FL can still leak the user’s original data by exchanging gradient information.The existing privacy protection strategy will increase the uplink time due to encryption measures.It is a huge challenge in terms of communication.When there are a large number of devices,the privacy protection cost of the system is higher.Based on these issues,we propose a privacy-preserving scheme of user-based group collaborative federated learning(GrCol-PPFL).Our scheme primarily divides participants into several groups and each group communicates in a chained transmission mechanism.All groups work in parallel at the same time.The server distributes a random parameter with the same dimension as the model parameter for each participant as a mask for the model parameter.We use the public datasets of modified national institute of standards and technology database(MNIST)to test the model accuracy.The experimental results show that GrCol-PPFL not only ensures the accuracy of themodel,but also ensures the security of the user’s original data when users collude with each other.Finally,through numerical experiments,we show that by changing the number of groups,we can find the optimal number of groups that reduces the uplink consumption time.
文摘Federated learning(FL) has developed rapidly in recent years as a privacy-preserving machine learning method,and it has been gradually applied to key areas involving privacy and security such as finance,medical care,and government affairs.However,the current solutions to FL rarely consider the problem of migration from centralized learning to federated learning,resulting in a high practical threshold for federated learning and low usability.Therefore,we introduce a reliable,efficient,and easy-to-use federated learning framework named Neursafe-FL.Based on the unified application program interface(API),the framework is not only compatible with mainstream machine learning frameworks,such as Tensorflow and Pytorch,but also supports further extensions,which can preserve the programming style of the original framework to lower the threshold of FL.At the same time,the design of componentization,modularization,and standardized interface makes the framework highly extensible,which meets the needs of customized requirements and FL evolution in the future.Neursafe-FL is already on Github as an open-source project^(1).
基金sponsored by the National Key R&D Program of China(No.2018YFB2100400)the National Natural Science Foundation of China(No.62002077,61872100)+4 种基金the Major Research Plan of the National Natural Science Foundation of China(92167203)the Guangdong Basic and Applied Basic Research Foundation(No.2020A1515110385)the China Postdoctoral Science Foundation(No.2022M710860)the Zhejiang Lab(No.2020NF0AB01)Guangzhou Science and Technology Plan Project(202102010440).
文摘Benefiting from the development of Federated Learning(FL)and distributed communication systems,large-scale intelligent applications become possible.Distributed devices not only provide adequate training data,but also cause privacy leakage and energy consumption.How to optimize the energy consumption in distributed communication systems,while ensuring the privacy of users and model accuracy,has become an urgent challenge.In this paper,we define the FL as a 3-layer architecture including users,agents and server.In order to find a balance among model training accuracy,privacy-preserving effect,and energy consumption,we design the training process of FL as game models.We use an extensive game tree to analyze the key elements that influence the players’decisions in the single game,and then find the incentive mechanism that meet the social norms through the repeated game.The experimental results show that the Nash equilibrium we obtained satisfies the laws of reality,and the proposed incentive mechanism can also promote users to submit high-quality data in FL.Following the multiple rounds of play,the incentive mechanism can help all players find the optimal strategies for energy,privacy,and accuracy of FL in distributed communication systems.
基金supported by the National Natural Science Foundation of China(NSFC)under Grant 62071179.
文摘Although Federated Deep Learning(FDL)enables distributed machine learning in the Internet of Vehicles(IoV),it requires multiple clients to upload model parameters,thus still existing unavoidable communication overhead and data privacy risks.The recently proposed Swarm Learning(SL)provides a decentralized machine learning approach for unit edge computing and blockchain-based coordination.A Swarm-Federated Deep Learning framework in the IoV system(IoV-SFDL)that integrates SL into the FDL framework is proposed in this paper.The IoV-SFDL organizes vehicles to generate local SL models with adjacent vehicles based on the blockchain empowered SL,then aggregates the global FDL model among different SL groups with a credibility weights prediction algorithm.Extensive experimental results show that compared with the baseline frameworks,the proposed IoV-SFDL framework reduces the overhead of client-to-server communication by 16.72%,while the model performance improves by about 5.02%for the same training iterations.
Funding: Supported in part by the National Natural Science Foundation of China (62371116 and 62231020); in part by the Science and Technology Project of Hebei Province Education Department (ZD2022164); in part by the Fundamental Research Funds for the Central Universities (N2223031); in part by the Open Research Project of Xidian University (ISN24-08); and in part by the Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education (Guilin University of Electronic Technology, China, CRKL210203).
Abstract: High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to keep local data-learning models efficient while preventing privacy leakage in a high-mobility environment. To protect data privacy and improve learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as the local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. To improve resource-scheduling efficiency in FBL, a double Davidon-Fletcher-Powell (DDFP) algorithm is presented to solve the time-slot allocation and RIS configuration problem. Based on the resource-scheduling results, we design a reward-allocation algorithm based on federated incentive learning (FIL) to compensate clients for their costs. Simulation results show that the proposed FBL framework outperforms the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
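The abstract does not spell out how the "double" DFP solver couples the two subproblems, so the snippet below only recalls the standard Davidon-Fletcher-Powell inverse-Hessian update that such a solver builds on, applied to a generic smooth objective; the step size and interfaces are placeholders.

# One standard DFP quasi-Newton step (the building block a DDFP-style solver
# would apply to each convex subproblem); objective_grad and lr are placeholders.
import numpy as np

def dfp_step(x, grad, H, objective_grad, lr=0.1):
    """Move along -H @ grad, then update the inverse-Hessian estimate H (DFP formula)."""
    d = -H @ grad                      # quasi-Newton search direction
    x_new = x + lr * d
    grad_new = objective_grad(x_new)
    s = (x_new - x).reshape(-1, 1)     # step
    y = (grad_new - grad).reshape(-1, 1)  # gradient change
    sy = (s.T @ y).item()
    if abs(sy) > 1e-12:                # skip the update if curvature information is degenerate
        Hy = H @ y
        H = H + (s @ s.T) / sy - (Hy @ Hy.T) / (y.T @ Hy).item()
    return x_new, grad_new, H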
Funding: Supported by the National Natural Science Foundation of China under Grant 61972148.
Abstract: The application of artificial intelligence technology in the Internet of Vehicles (IoV) has attracted great research interest, with the goal of enabling smart transportation and traffic management. Meanwhile, concerns have been raised over the security and privacy of the vast amounts of traffic and vehicle data. In this regard, Federated Learning (FL), with its privacy-protection features, is considered a highly promising solution. However, in the FL process, the server side may exploit its dominant role in model aggregation to steal sensitive user information, while the client side may upload malicious data to compromise the training of the global model. Most existing privacy-preserving FL schemes in the IoV fail to deal with threats from both sides at the same time. In this paper, we propose a blockchain-based privacy-preserving federated learning scheme named BPFL, which uses a blockchain as the underlying distributed framework of FL. We improve the Multi-Krum technique and combine it with homomorphic encryption to achieve ciphertext-level model aggregation and model filtering, enabling verifiability of the local models while preserving privacy. Additionally, we develop a reputation-based incentive mechanism to encourage users in the IoV to participate in federated learning actively and honestly. The security analysis and performance evaluations show that the proposed scheme meets the security requirements and improves the performance of the FL model.
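BPFL performs its filtering over ciphertexts with homomorphic encryption and an improved Multi-Krum; as a point of reference only, the plaintext Multi-Krum selection it builds on can be sketched as follows (plain NumPy, no encryption, parameter names assumed).

# Plaintext Multi-Krum selection (reference sketch; BPFL operates on ciphertexts).
import numpy as np

def multi_krum(updates: np.ndarray, f: int, m: int) -> np.ndarray:
    """updates: (n, d) flattened local models; f: assumed Byzantine count; m: models kept."""
    n = updates.shape[0]
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1) ** 2
    scores = []
    for i in range(n):
        others = np.delete(dists[i], i)          # squared distances to all other updates
        closest = np.sort(others)[: n - f - 2]   # sum over the n - f - 2 nearest neighbors
        scores.append(closest.sum())
    keep = np.argsort(scores)[:m]                # keep the m lowest-scoring (most central) updates
    return updates[keep].mean(axis=0)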
Funding: Supported by the Fundamental Research Funds for the Central Universities (328202204).
Abstract: Federated Learning (FL), a burgeoning technology, has received increasing attention due to its privacy-protection capability. However, its base algorithm, FedAvg, is vulnerable to so-called backdoor attacks. Earlier researchers proposed several robust aggregation methods, but because of the hidden nature of backdoor attacks, many of these methods cannot defend against them. Moreover, attackers have recently proposed hiding methods that further improve the stealthiness of backdoor attacks, defeating all existing robust aggregation methods. To tackle this threat, we propose a new aggregation method, X-raying Models with A Matrix (XMAM), to reveal malicious local model updates submitted by backdoor attackers. We observe that the output of the Softmax layer exhibits distinguishable patterns between malicious and benign updates; unlike existing aggregation algorithms, we therefore focus on the Softmax layer's output, where backdoor attackers find it difficult to hide their malicious behavior. Specifically, like a medical X-ray examination, we probe the collected local model updates with a matrix as input to obtain their Softmax-layer outputs, and then exclude updates whose outputs are abnormal by clustering. Without any training dataset on the server, extensive evaluations show that XMAM can effectively distinguish malicious local model updates from benign ones. For instance, when other methods fail to defend against backdoor attacks with no more than 20% malicious clients, our method tolerates 45% malicious clients in the black-box mode and about 30% in the Projected Gradient Descent (PGD) mode. Moreover, under adaptive attacks, the results demonstrate that XMAM can still complete the global model training task even with 40% malicious clients. Finally, we analyze our method's screening complexity and compare its real screening time with other methods; the results show that XMAM is about 10 to 10000 times faster than existing methods.
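Based only on the description above, the server-side screening can be sketched roughly as: probe every submitted model with one fixed input matrix, collect the Softmax outputs, cluster them, and keep the majority cluster. The probe distribution, clustering method, and input shape below are assumptions, not the paper's exact settings.

# Rough sketch of Softmax-output screening in the spirit of XMAM (details assumed).
import torch
from sklearn.cluster import KMeans

def screen_updates(models: list, input_shape=(1, 1, 28, 28)):
    probe = torch.rand(input_shape)   # one fixed probe matrix shared by all models
    with torch.no_grad():
        outs = [torch.softmax(m(probe), dim=-1).flatten().numpy() for m in models]
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(outs)
    majority = max(set(labels.tolist()), key=labels.tolist().count)
    # Keep only the models whose Softmax outputs fall in the majority (benign-looking) cluster.
    return [m for m, lab in zip(models, labels) if lab == majority]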
Funding: Supported in part by the National Natural Science Foundation of China (No. 61701197); in part by the National Key Research and Development Program of China (No. 2021YFA1000500(4)); and in part by the 111 Project (No. B23008).
Abstract: In vehicle edge computing (VEC), asynchronous federated learning (AFL) is used, where the edge receives a local model and updates the global model, effectively reducing global aggregation latency. Because vehicles differ in the amount of local data, computing capability, and location, renewing the global model with the same weight for every vehicle is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also suffer Byzantine attacks that corrupt its data. Based on deep reinforcement learning (DRL), however, we can consider these factors comprehensively to exclude poorly performing vehicles, and vehicles already compromised by Byzantine attacks, before AFL aggregation. At the same time, during AFL aggregation we can focus on vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a DRL-based vehicle selection scheme for VEC. The scheme takes into account vehicle mobility, time-varying channel conditions, time-varying computational resources, differing data amounts, the transmission channel status of vehicles, and Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
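The DRL agent itself is not specified in the abstract. As a much-simplified stand-in, the bandit-style selector below keeps a running value estimate per vehicle, updated from the observed contribution to global accuracy, and excludes low-value vehicles before asynchronous aggregation; all names, thresholds, and the exploration strategy are illustrative assumptions rather than the paper's method.

# Simplified bandit-style stand-in for a DRL vehicle selector (illustrative only).
import random

class VehicleSelector:
    def __init__(self, num_vehicles: int, epsilon: float = 0.1, lr: float = 0.2):
        self.values = [0.0] * num_vehicles   # running estimate of each vehicle's contribution
        self.epsilon = epsilon
        self.lr = lr

    def select(self, k: int) -> list[int]:
        if random.random() < self.epsilon:   # occasionally explore untried vehicles
            return random.sample(range(len(self.values)), k)
        ranked = sorted(range(len(self.values)), key=lambda i: self.values[i], reverse=True)
        return ranked[:k]                    # otherwise exploit the best-performing vehicles

    def update(self, vehicle: int, accuracy_gain: float):
        # Vehicles whose updates hurt accuracy (e.g. Byzantine-corrupted ones)
        # drift toward low value and stop being selected.
        self.values[vehicle] += self.lr * (accuracy_gain - self.values[vehicle])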