Journal Articles
Found 36 articles
1. Data complexity-based batch sanitization method against poison in distributed learning
Authors: Silv Wang, Kai Fan, Kuan Zhang, Hui Li, Yintang Yang. Digital Communications and Networks, SCIE CSCD, 2024, Issue 2, pp. 416-428 (13 pages)
The security of Federated Learning (FL)/Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are therefore called causative availability indiscriminate attacks. Facing the problem that existing data sanitization methods are hard to apply to real-time applications due to their tedious process and heavy computations, we propose a new supervised batch detection method for poison, which can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML or other online or offline scenarios.
Keywords: distributed machine learning security; federated learning; data poisoning attacks; data sanitization; batch detection; data complexity
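As a companion to this entry, here is a minimal, hypothetical sketch of batch-level poison screening driven by data-complexity features. The two features (k-NN label disagreement and class balance) and the label-flipping attack model are illustrative stand-ins, not the paper's actual feature set.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

def complexity_features(X, y, k=5):
    """Summarize one batch by how 'confusable' its labels are."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    disagreement = 1.0 - knn.score(X, y)          # k-NN training error as a complexity cue
    class_balance = np.bincount(y).max() / len(y)
    return np.array([disagreement, class_balance])

# Train a batch-level detector on clean and poisoned example batches ...
rng = np.random.default_rng(0)
feats, labels = [], []
for poisoned in (0, 1):
    for _ in range(50):
        X = rng.normal(size=(64, 10))
        y = (X[:, 0] > 0).astype(int)
        if poisoned:                               # label flipping: an availability attack
            flip = rng.random(64) < 0.4
            y[flip] = 1 - y[flip]
        feats.append(complexity_features(X, y))
        labels.append(poisoned)
detector = LogisticRegression().fit(np.array(feats), labels)
# ... then sanitize: drop any batch the detector flags before local training.
```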
2. Serverless distributed learning for smart grid analytics
Authors: Gang Huang, Chao Wu, Yifan Hu, Chuangxin Guo. Chinese Physics B, SCIE EI CAS CSCD, 2021, Issue 8, pp. 558-565 (8 pages)
The digitization, informatization, and intelligentization of physical systems require strong support from big data analysis. However, due to restrictions on data security and privacy and concerns about the cost of big data collection, transmission, and storage, it is difficult to perform data aggregation in real-world power systems, which directly retards the effective implementation of smart grid analytics. Federated learning, an advanced distributed learning method proposed by Google, seems a promising solution to the above issues. Nevertheless, it relies on a server node to complete model aggregation, and the framework is limited to scenarios where data are independent and identically distributed. Thus, we propose a serverless distributed learning platform based on blockchain to solve these two issues. In the proposed platform, the task of machine learning is performed according to smart contracts, and encrypted models are aggregated via a mechanism of knowledge distillation. Through this method, a server node is no longer required, and the learning ability is no longer limited to independent and identically distributed scenarios. Experiments on a public electrical grid dataset verify the effectiveness of the proposed approach.
Keywords: smart grid; physical system; distributed learning; artificial intelligence
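The distillation-based aggregation this abstract mentions can be illustrated with a small sketch. Assumptions: each client exposes only softmax predictions on a shared public probe set, and linear softmax models stand in for the real learners; the encryption and blockchain layers of the paper are omitted.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X_pub = rng.normal(size=(128, 8))                       # public, unlabeled probe data
clients = [rng.normal(size=(8, 3)) for _ in range(4)]   # per-client softmax weights

# Serverless consensus: average the clients' soft predictions on the probe set.
consensus = np.mean([softmax(X_pub @ W) for W in clients], axis=0)

# Each client distills the consensus: cross-entropy gradient toward the
# averaged soft labels, so no raw data or raw weights ever change hands.
lr = 0.1
for W in clients:
    p = softmax(X_pub @ W)
    grad = X_pub.T @ (p - consensus) / len(X_pub)
    W -= lr * grad                                      # in-place local update
```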
3. ADC-DL: Communication-Efficient Distributed Learning with Hierarchical Clustering and Adaptive Dataset Condensation
Authors: Zhipeng Gao, Yan Yang, Chen Zhao, Zijia Mo. China Communications, SCIE CSCD, 2022, Issue 12, pp. 73-85 (13 pages)
The rapid growth of modern mobile devices leads to a large amount of distributed data, which is extremely valuable for learning models. Unfortunately, model training by collecting all these original data to a centralized cloud server is not applicable due to data privacy and communication cost concerns, hindering artificial intelligence from empowering mobile devices. Moreover, these data are not independent and identically distributed (Non-IID) owing to their different contexts, which degrades model performance. To address these issues, we propose a novel distributed learning algorithm based on hierarchical clustering and adaptive dataset condensation, named ADC-DL, which learns a shared model by collecting the synthetic samples generated on each device. To tackle the heterogeneity of data distribution, we propose an entropy-TOPSIS comprehensive tiering model for hierarchical clustering, which distinguishes clients in terms of their data characteristics. Subsequently, synthetic dummy samples are generated based on the hierarchical structure using adaptive dataset condensation, whose procedure can be adjusted adaptively according to the tier of the client. Extensive experiments demonstrate that ADC-DL outperforms existing algorithms in prediction accuracy and communication cost.
Keywords: distributed learning; Non-IID data partition; hierarchical clustering; adaptive dataset condensation
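The entropy-TOPSIS tiering step can be sketched from the standard entropy-weight and TOPSIS formulas. The client indicators below (sample count, label diversity, feature variance) are illustrative guesses, and all are treated as benefit-type criteria.

```python
import numpy as np

def entropy_topsis(M):
    """M: clients x indicators, larger = better. Returns closeness scores in (0, 1)."""
    P = M / M.sum(axis=0)                                   # column-normalize
    n = len(M)
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)    # entropy per indicator
    w = (1 - e) / (1 - e).sum()                             # entropy weights
    V = w * (M / np.linalg.norm(M, axis=0))                 # weighted normalized matrix
    d_best = np.linalg.norm(V - V.max(axis=0), axis=1)      # distance to ideal best
    d_worst = np.linalg.norm(V - V.min(axis=0), axis=1)     # distance to ideal worst
    return d_worst / (d_best + d_worst)

# Example: four clients scored, then split into three tiers by quantile.
M = np.array([[500, 10, 1.2], [200, 3, 0.4], [800, 9, 0.9], [100, 2, 0.2]])
scores = entropy_topsis(M)
tiers = np.digitize(scores, np.quantile(scores, [0.33, 0.66]))
```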
4. Communication-Censored Distributed Learning for Stochastic Configuration Networks
Authors: Yujun Zhou, Xiaowen Ge, Wu Ai. International Journal of Intelligence Science, 2022, Issue 2, pp. 21-37 (17 pages)
This paper aims to reduce the communication cost of the distributed learning algorithm for stochastic configuration networks (SCNs), in which information exchange between the learning agents is conducted only at trigger times. For this purpose, we propose a communication-censored distributed learning algorithm for SCNs, namely ADMM-SCN-ET, by introducing an event-triggered communication mechanism into the alternating direction method of multipliers (ADMM). To avoid unnecessary transmissions, each learning agent is equipped with a trigger function: only if the event-trigger error exceeds a specified threshold and meets the trigger condition does the agent transmit its variable information to its neighbors and update its state. Simulation results show that the proposed algorithm can effectively reduce the communication cost of training decentralized SCNs and save communication resources.
Keywords: event-triggered communication; distributed learning; stochastic configuration networks (SCN); alternating direction method of multipliers (ADMM)
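The censoring rule itself is simple enough to sketch. This is a generic event-triggered transmit test, assuming a fixed threshold and a vector-valued local variable; the ADMM update that surrounds it in the paper is omitted.

```python
import numpy as np

class TriggeredAgent:
    def __init__(self, x0, threshold=0.05):
        self.x = x0.copy()
        self.last_sent = x0.copy()
        self.threshold = threshold

    def maybe_transmit(self):
        err = np.linalg.norm(self.x - self.last_sent)   # event-trigger error
        if err > self.threshold:                        # trigger condition met
            self.last_sent = self.x.copy()
            return self.x                               # broadcast to neighbors
        return None                                     # censored: stay silent
```

Neighbors that receive `None` simply reuse the sender's last transmitted value, which is where the communication savings come from.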
5. The adaptive distributed learning based on homomorphic encryption and blockchain (Cited: 1)
Authors: YANG Ruizhe, ZHAO Xuehui, ZHANG Yanhua, SI Pengbo, TENG Yinglei. High Technology Letters, EI CAS, 2022, Issue 4, pp. 337-344 (8 pages)
The privacy and security of data are recent research hotspots and challenges. To address this issue, an adaptive scheme of distributed learning based on homomorphic encryption and blockchain is proposed. Specifically, the computing party iteratively aggregates the learning models from distributed participants in homomorphically encrypted form, so that the privacy of both the data and the model is ensured. Moreover, the aggregations are recorded and verified by blockchain, which prevents attacks from malicious nodes and guarantees the reliability of learning. For these privacy and security technologies, the computation cost and energy consumption of both the encrypted learning and the consensus reaching are analyzed, based on which a joint optimization of computation resource allocation and adaptive aggregation that minimizes the loss function is established, followed by a realistic solution. Finally, simulations and analysis evaluate the performance of the proposed scheme.
Keywords: blockchain; distributed machine learning (DML); privacy; security
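Aggregation over homomorphically encrypted models can be illustrated with the python-paillier library (`pip install phe`). This is a stand-in sketch for the paper's scheme: Paillier supports addition and scalar multiplication on ciphertexts, which is exactly what averaging needs; the blockchain recording step is omitted.

```python
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each participant encrypts its model parameters under the shared public key.
client_weights = [[0.12, -0.50, 0.33], [0.10, -0.45, 0.40], [0.15, -0.55, 0.35]]
encrypted = [[public_key.encrypt(w) for w in ws] for ws in client_weights]

# The computing party sums ciphertexts and scales by 1/n without ever
# seeing a plaintext model.
n = len(encrypted)
enc_avg = [sum(col) * (1.0 / n) for col in zip(*encrypted)]

# Only the key holder can decrypt the aggregated model.
avg = [private_key.decrypt(c) for c in enc_avg]
print(avg)  # approximately [0.1233, -0.5, 0.36]
```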
6. Autonomous Vehicle Platoons in Urban Road Networks: A Joint Distributed Reinforcement Learning and Model Predictive Control Approach
Authors: Luigi D'Alfonso, Francesco Giannini, Giuseppe Franzè, Giuseppe Fedele, Francesco Pupo, Giancarlo Fortino. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2024, Issue 1, pp. 141-156 (16 pages)
In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. These tasks are combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit to improve its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand, it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the platoon's dynamical evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure involving the SUMO and MATLAB platforms is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally, considering an entire real city block and a platoon of eleven vehicles described by double-integrator models as the operating scenario, several simulations have been performed to highlight the main features of the proposed approach. Moreover, in different operating scenarios the proposed reinforcement learning scheme significantly reduces traffic congestion compared with well-reputed competitors.
Keywords: distributed model predictive control; distributed reinforcement learning; routing decisions; urban road networks
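The RL-to-set-point coupling described above can be caricatured in a toy loop. Everything here is illustrative: a tabular Q-learner picks the next junction, and a PD tracker on a double integrator stands in for the paper's model predictive controller; junction positions and rewards are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
waypoints = {0: 0.0, 1: 5.0, 2: 9.0}      # junction id -> position (illustrative)
Q = np.zeros((3, 3))                      # Q[current_junction, next_junction]
pos, vel, dt = 0.0, 0.0, 0.1

def track(setpoint, pos, vel, kp=1.0, kd=1.8):
    """PD stand-in for MPC: drives the double integrator toward the set-point."""
    acc = kp * (setpoint - pos) - kd * vel
    return pos + dt * vel, vel + dt * acc

junction = 0
for episode in range(200):
    # RL layer: epsilon-greedy routing decision, translated into a set-point.
    a = int(rng.integers(3)) if rng.random() < 0.1 else int(Q[junction].argmax())
    setpoint = waypoints[a]
    for _ in range(100):                  # control layer runs at a faster rate
        pos, vel = track(setpoint, pos, vel)
    reward = -abs(pos - waypoints[2])     # e.g., closeness to the destination
    Q[junction, a] += 0.5 * (reward + 0.9 * Q[a].max() - Q[junction, a])
    junction = a                          # measured position feeds back to RL
```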
7. Adaptive Load Balancing for Parameter Servers in Distributed Machine Learning over Heterogeneous Networks (Cited: 1)
Authors: CAI Weibo, YANG Shulin, SUN Gang, ZHANG Qiming, YU Hongfang. ZTE Communications, 2023, Issue 1, pp. 72-80 (9 pages)
In distributed machine learning (DML) based on the parameter server (PS) architecture, an unbalanced communication load distribution across PSs significantly slows down model synchronization in heterogeneous networks due to low bandwidth utilization. To address this problem, a network-aware adaptive PS load distribution scheme is proposed, which accelerates model synchronization by proactively adjusting the communication load on PSs according to network states. We evaluate the proposed scheme on MXNet, a real-world distributed training platform, and results show that our scheme achieves up to 2.68 times speed-up of model training in dynamic and heterogeneous network environments.
Keywords: distributed machine learning; network awareness; parameter server; load distribution; heterogeneous network
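The core intuition, shifting parameter shards toward better-connected servers, fits in a few lines. This is a hypothetical proportional-allocation sketch, not the paper's scheme; server names and bandwidth figures are invented, and it would be re-run whenever network probes report new bandwidths.

```python
def assign_shards(total_params: int, bandwidths: dict[str, float]) -> dict[str, int]:
    """Split a parameter count across PSs in proportion to measured bandwidth."""
    total_bw = sum(bandwidths.values())
    shares = {ps: int(total_params * bw / total_bw) for ps, bw in bandwidths.items()}
    # Hand any rounding remainder to the fastest server.
    fastest = max(bandwidths, key=bandwidths.get)
    shares[fastest] += total_params - sum(shares.values())
    return shares

print(assign_shards(1_000_000, {"ps0": 10.0, "ps1": 2.5, "ps2": 2.5}))
# -> ps0 carries about two thirds of the synchronization traffic
```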
8. A Tutorial on Federated Learning from Theory to Practice: Foundations, Software Frameworks, Exemplary Use Cases, and Selected Trends
Authors: M. Victoria Luzón, Nuria Rodríguez-Barroso, Alberto Argente-Garrido, Daniel Jiménez-López, Jose M. Moyano, Javier Del Ser, Weiping Ding, Francisco Herrera. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2024, Issue 4, pp. 824-850 (27 pages)
When data privacy is imposed as a necessity, federated learning (FL) emerges as a relevant artificial intelligence field for developing machine learning (ML) models in a distributed and decentralized environment. FL allows ML models to be trained on local devices without any need for centralized data transfer, thereby reducing both the exposure of sensitive data and the possibility of data interception by malicious third parties. This paradigm has gained momentum in the last few years, spurred by the plethora of real-world applications that have leveraged its ability to improve the efficiency of distributed learning and to accommodate numerous participants with their data sources. By virtue of FL, models can be learned from all such distributed data sources while preserving data privacy. The aim of this paper is to provide a practical tutorial on FL, including a short methodology and a systematic analysis of existing software frameworks. The tutorial provides exemplary case studies from three complementary perspectives: i) foundations of FL, describing its main components, from key elements to FL categories; ii) implementation guidelines and exemplary cases, systematically examining the functionalities provided by existing software frameworks for FL deployment, devising a methodology to design an FL scenario, and providing case studies with source code for different ML approaches; and iii) trends, briefly reviewing a non-exhaustive list of research directions under active investigation in the current FL landscape. The ultimate purpose of this work is to serve as a reference for researchers, developers, and data scientists willing to explore the capabilities of FL in practical applications.
Keywords: data privacy; distributed machine learning; federated learning; software frameworks
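For readers new to the foundations this tutorial covers, the canonical FL round (FedAvg) is worth seeing in miniature: local SGD on each client followed by a data-size-weighted average. A least-squares toy objective and synthetic clients keep the sketch self-contained; this is the textbook algorithm, not the paper's code.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    for _ in range(steps):                         # least-squares toy objective
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    locals_ = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    return np.average(locals_, axis=0, weights=sizes)   # weighted model average

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(30, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=30)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # approaches [2, -1] without pooling any client's raw data
```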
9. SIGNGD with Error Feedback Meets Lazily Aggregated Technique: Communication-Efficient Algorithms for Distributed Learning
Authors: Xiaoge Deng, Tao Sun, Feng Liu, Dongsheng Li. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2022, Issue 1, pp. 174-185 (12 pages)
The proliferation of massive datasets has led to significant interest in distributed algorithms for solving large-scale machine learning problems. However, communication overhead is a major bottleneck that hampers the scalability of distributed machine learning systems. In this paper, we design two communication-efficient algorithms for distributed learning tasks. The first, EF-SIGNGD, uses the 1-bit (sign-based) gradient quantization method to save communication bits. Moreover, the error feedback technique, i.e., incorporating the error made by the compression operator into the next step, is employed to guarantee convergence. The second, LE-SIGNGD, introduces a well-designed lazy gradient aggregation rule into EF-SIGNGD that can detect gradients with small changes and reuse outdated information. LE-SIGNGD saves communication costs in both transmitted bits and communication rounds. Furthermore, we show that LE-SIGNGD is convergent under mild assumptions. The effectiveness of the two proposed algorithms is demonstrated through experiments on both real and synthetic data.
Keywords: distributed learning; communication-efficient algorithm; convergence analysis
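The error-feedback sign-compression step at the heart of EF-SIGNGD is short enough to write out. A minimal sketch follows, assuming one worker and a mean-magnitude scaling for the 1-bit code (one common choice); it is not the paper's exact implementation.

```python
import numpy as np

def ef_sign_step(grad, residual, lr):
    corrected = grad + residual                                 # fold in past compression error
    compressed = np.sign(corrected) * np.abs(corrected).mean()  # 1 bit per entry plus one scale
    new_residual = corrected - compressed                       # error-feedback memory
    return -lr * compressed, new_residual                       # model delta to transmit

# A lazy (LE-style) variant would additionally skip the transmission entirely
# when the compressed message changes too little from the previously sent one.
```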
10. On the development of cat swarm metaheuristic using distributed learning strategies and the applications
Authors: Usha Manasi Mohapatra, Babita Majhi, Alok Kumar Jagadev. International Journal of Intelligent Computing and Cybernetics, EI, 2019, Issue 2, pp. 224-244 (21 pages)
Purpose – This paper proposes three distributed learning-based metaheuristic algorithms for the identification of nonlinear systems. The proposed algorithms are tested on problems in which input data are available at different geographic locations, and the models are evaluated on nonlinear systems under different noise conditions. In a nutshell, the suggested approach aims to handle voluminous data with low communication overhead compared to traditional centralized processing. Design/methodology/approach – Population-based evolutionary algorithms, namely the genetic algorithm (GA), particle swarm optimization (PSO), and cat swarm optimization (CSO), are implemented in distributed form to address the system identification problem with distributed input data. Among the distributed approaches reported in the literature, this study considers the incremental and diffusion strategies. Findings – The performances of the proposed distributed learning-based algorithms are compared under different noise conditions. The experimental results indicate that CSO performs better than GA and PSO at all noise strengths with respect to accuracy and error convergence rate, with incremental CSO slightly superior to diffusion CSO. Originality/value – This paper applies evolutionary algorithms with distributed learning strategies to the identification of unknown systems. Very few existing studies have used these distributed learning strategies for the parameter estimation task.
Keywords: system identification; wireless sensor network; diffusion learning strategy; distributed learning-based cat swarm optimization; incremental learning strategy
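The incremental strategy mentioned here is the easier of the two to picture: a single estimate circulates around a ring of nodes, each refining it with its own local data only. In this sketch, simple gradient refinement stands in for the CSO update at each node; the topology and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7, 0.3])
nodes = []
for _ in range(5):                                  # measurements scattered over 5 nodes
    X = rng.normal(size=(40, 3))
    nodes.append((X, X @ theta_true + 0.05 * rng.normal(size=40)))

theta = np.zeros(3)
for cycle in range(30):
    for X, y in nodes:                              # pass the estimate node-to-node
        for _ in range(3):                          # local refinement steps
            theta -= 0.05 * X.T @ (X @ theta - y) / len(y)
print(theta)  # close to theta_true, with no raw-data exchange between nodes
```

A diffusion strategy would instead let every node update in parallel and average with its neighbors each cycle.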
11. Adaptive Kernel Firefly Algorithm Based Feature Selection and Q-Learner Machine Learning Models in Cloud
Authors: I. Mettildha Mary, K. Karuppasamy. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 9, pp. 2667-2685 (19 pages)
Cloud computing (CC) networks are distributed and dynamic, as signals appear, disappear, or lose significance. Machine learning techniques (MLTs) train on datasets which are sometimes inadequate in terms of samples for inferring information. DevMLOps (Development Machine Learning Operations), a dynamic strategy used for automatic selection and tuning of MLTs, results in significant performance differences, but the scheme has many disadvantages, including the need for continual training, more samples and longer training times in feature selection, and increased classification execution times. Recursive feature eliminations (RFEs) are computationally very expensive, as they traverse each feature without considering the correlations between them. This problem can be overcome by the use of wrappers, which select better features by accounting for test and train datasets. The aim of this paper is to use DevQLMLOps for automated tuning and selection based on orchestration and messaging between containers. The proposed Adaptive Kernel Firefly Algorithm (AKFA) selects features for cloud network monitoring (CNM) operations. The AKFA methodology is demonstrated on a cloud network security dataset (CNSD) with satisfactory results in the performance metrics of precision, recall, F-measure, and accuracy.
Keywords: cloud analytics; machine learning; ensemble learning; distributed learning; clustering; classification; auto selection; auto tuning; decision feedback; cloud DevOps; feature selection; wrapper feature selection; Adaptive Kernel Firefly Algorithm (AKFA); Q-learning
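For orientation, the classical firefly movement update on which AKFA builds is sketched below; this is the standard algorithm, not the paper's adaptive-kernel variant, and the parameter values are the usual textbook defaults.

```python
import numpy as np

def firefly_step(pop, fitness, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One generation: dimmer fireflies move toward brighter ones."""
    rng = rng if rng is not None else np.random.default_rng()
    new_pop = pop.copy()
    bright = fitness(pop)                               # larger = brighter
    for i in range(len(pop)):
        for j in range(len(pop)):
            if bright[j] > bright[i]:
                r2 = np.sum((pop[j] - pop[i]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)      # attraction decays with distance
                new_pop[i] += beta * (pop[j] - pop[i]) \
                              + alpha * rng.uniform(-0.5, 0.5, pop.shape[1])
    return new_pop

# For feature selection, positions are typically squashed through a sigmoid
# and thresholded into feature masks, with fitness = validation accuracy.
```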
12. Distributed Asynchronous Learning for Multipath Data Transmission Based on P-DDQN (Cited: 1)
Authors: Kang Liu, Wei Quan, Deyun Gao, Chengxiao Yu, Mingyuan Liu, Yuming Zhang. China Communications, SCIE CSCD, 2021, Issue 8, pp. 62-74 (13 pages)
Adaptive packet scheduling can efficiently enhance the performance of multipath data transmission. However, realizing precise packet scheduling is challenging due to the highly dynamic and unpredictable nature of network link states. To this end, this paper proposes a distributed asynchronous deep reinforcement learning framework to intensify the dynamics and prediction of adaptive packet scheduling. Our framework contains two parts: local asynchronous packet scheduling and a distributed cooperative control center. In local asynchronous packet scheduling, an asynchronous prioritized-replay double deep Q-learning packet scheduling algorithm is proposed for dynamic adaptive packet scheduling learning, using a prioritized-replay double deep Q-learning network (P-DDQN) for the fitting analysis. In the distributed cooperative control center, a distributed scheduling learning and neural fitting acceleration algorithm adaptively updates the neural network parameters of P-DDQN for more precise packet scheduling. Experimental results show that our solution outperforms the random-weight and round-robin algorithms in throughput and loss ratio, and achieves 1.32 times and 1.54 times better stability of multipath data transmission than the random-weight and round-robin algorithms, respectively.
Keywords: distributed asynchronous learning; multipath data transmission; deep reinforcement learning
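The double deep Q-learning core of P-DDQN can be shown in one function: the online network selects the next action, the target network evaluates it. This generic sketch omits the prioritized replay buffer and the scheduling-specific state and reward design.

```python
import numpy as np

def double_dqn_targets(rewards, q_next_online, q_next_target, dones, gamma=0.99):
    """Batched double-DQN targets: select with the online net, evaluate with the target net."""
    best_actions = q_next_online.argmax(axis=1)
    evaluated = q_next_target[np.arange(len(rewards)), best_actions]
    return rewards + gamma * (1.0 - dones) * evaluated

# Prioritized replay would sample transitions with probability proportional to
# |TD error|^alpha and correct the induced bias with importance-sampling weights.
```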
13. Pseudo-label based semi-supervised learning in the distributed machine learning framework
Authors: WANG Xiaoxi, WU Wenjun, YANG Feng, SI Pengbo, ZHANG Xuanyi, ZHANG Yanhua. High Technology Letters, EI CAS, 2022, Issue 2, pp. 172-180 (9 pages)
With the emergence of various intelligent applications, machine learning technologies face many challenges, including large-scale models, application-oriented real-time datasets, and the limited capabilities of nodes in practice. Therefore, distributed machine learning (DML) and semi-supervised learning methods that help solve these problems have been addressed in both academia and industry. In this paper, the semi-supervised learning method and the data-parallel DML framework are combined. The pseudo-label based local loss function for each distributed node is studied, and the stochastic gradient descent (SGD) based distributed parameter update principle is derived. A demo that implements pseudo-label based semi-supervised learning in the DML framework is conducted, and the CIFAR-10 dataset for target classification is used to evaluate the performance. Experimental results confirm the convergence and accuracy of the model. When the proportion of the pseudo-label dataset is 20%, the accuracy of the model is over 90% as long as the number of local parameter update steps between two global aggregations is less than 5. Moreover, fixing the global aggregation interval to 3, the model converges with acceptable performance degradation as the proportion of the pseudo-label dataset varies from 20% to 80%.
Keywords: distributed machine learning (DML); semi-supervised learning; deep neural network (DNN)
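A pseudo-label local loss of the kind studied here typically combines a supervised term with a confidence-gated term on unlabeled data. The sketch below assumes hard pseudo-labels, a confidence threshold `tau`, and a weight `lam`, all illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def pseudo_label_loss(probs_labeled, y, probs_unlabeled, tau=0.9, lam=0.5):
    n = len(y)
    sup = -np.log(probs_labeled[np.arange(n), y] + 1e-12).mean()  # standard CE

    conf = probs_unlabeled.max(axis=1)
    mask = conf > tau                                  # keep only confident samples
    if mask.any():
        pseudo = probs_unlabeled[mask].argmax(axis=1)  # hard pseudo-labels
        unsup = -np.log(probs_unlabeled[mask, pseudo] + 1e-12).mean()
    else:
        unsup = 0.0
    return sup + lam * unsup
```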
14. FedTC: A Personalized Federated Learning Method with Two Classifiers
Authors: Yang Liu, Jiabo Wang, Qinbo Liu, Mehdi Gheisari, Wanyin Xu, Zoe L. Jiang, Jiajia Zhang. Computers, Materials & Continua, SCIE EI, 2023, Issue 9, pp. 3013-3027 (15 pages)
Centralized training of deep learning models poses privacy risks that hinder their deployment. Federated learning (FL) has emerged as a solution to these risks, allowing multiple clients to train deep learning models collaboratively without sharing raw data. However, FL is vulnerable to heterogeneously distributed data, which weakens convergence stability and yields suboptimal performance of the trained model on local data. This is because the old local model is discarded at each round of training, losing the personalized information critical for maintaining model accuracy and ensuring robustness. In this paper, we propose FedTC, a personalized federated learning method with two classifiers that retains personalized information in the local model and improves the model's performance on local data. FedTC divides the model into two parts, the extractor and the classifier, where the classifier is the last layer of the model and the extractor consists of the other layers. The classifier in the local model is always retained to ensure that personalized information is not lost. After receiving the global model, the local extractor is overwritten by the global model's extractor, and the classifier of the global model serves as an additional classifier of the local model to guide local training. FedTC introduces a two-classifier training strategy to coordinate the two classifiers for local model updates. Experimental results on the Cifar10 and Cifar100 datasets demonstrate that FedTC performs better on heterogeneous data than existing approaches such as FedAvg, FedPer, and local training, achieving a maximum improvement of 27.95% in classification test accuracy compared to FedAvg.
Keywords: distributed machine learning; federated learning; data heterogeneity; non-independent and identically distributed
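The extractor/classifier split described in the abstract maps naturally onto a two-module network. A PyTorch-flavoured sketch follows; the layer sizes and function names are illustrative, and only the update rule (overwrite the extractor, keep the local classifier, attach the global classifier as an auxiliary head) is taken from the abstract.

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.classifier = nn.Linear(64, 10)        # last layer = personalized part

    def forward(self, x):
        return self.classifier(self.extractor(x))

def receive_global(local: Net, global_: Net) -> nn.Linear:
    # Copy only the shared representation; the personal classifier is untouched.
    local.extractor.load_state_dict(global_.extractor.state_dict())
    aux_head = global_.classifier                   # second classifier guides training
    return aux_head

# During local training, losses from local.classifier and aux_head on the same
# extracted features are combined, per the two-classifier strategy.
```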
15. Distributed Deep Reinforcement Learning: A Survey and a Multi-player Multi-agent Learning Toolbox
Authors: Qiyue Yin, Tongtong Yu, Shengqi Shen, Jun Yang, Meijing Zhao, Wancheng Ni, Kaiqi Huang, Bin Liang, Liang Wang. Machine Intelligence Research, EI CSCD, 2024, Issue 3, pp. 411-430 (20 pages)
With the breakthrough of AlphaGo, deep reinforcement learning has become a recognized technique for solving sequential decision-making problems. Despite its reputation, the data inefficiency caused by its trial-and-error learning mechanism makes deep reinforcement learning difficult to apply in a wide range of areas. Many methods have been developed for sample-efficient deep reinforcement learning, such as environment modelling, experience transfer, and distributed modifications, among which distributed deep reinforcement learning has shown its potential in various applications, such as human-computer gaming and intelligent transportation. In this paper, we review the state of this exciting field by comparing the classical distributed deep reinforcement learning methods and studying the components important for efficient distributed learning, covering the spectrum from single-player single-agent to the most complex multi-player multi-agent settings. Furthermore, we review recently released toolboxes that help realize distributed deep reinforcement learning without many modifications of their non-distributed versions. By analysing their strengths and weaknesses, we develop and release a multi-player multi-agent distributed deep reinforcement learning toolbox, which is further validated on Wargame, a complex environment, showing its usability for multi-player, multi-agent distributed deep reinforcement learning under complex games. Finally, we point out challenges and future trends, hoping that this brief review can provide a guide or a spark for researchers interested in distributed deep reinforcement learning.
Keywords: deep reinforcement learning; distributed machine learning; self-play; population-play; toolbox
16. Finite-Time Distributed Identification for Nonlinear Interconnected Systems (Cited: 1)
Authors: Farzaneh Tatari, Hamidreza Modares, Christos Panayiotou, Marios Polycarpou. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2022, Issue 7, pp. 1188-1199 (12 pages)
In this paper, a novel finite-time distributed identification method is introduced for nonlinear interconnected systems. A distributed concurrent learning-based discontinuous gradient descent update law is presented to learn the dynamics of uncertain interconnected subsystems. The concurrent learning approach continually minimizes the identification error on a batch of previously recorded data collected from each subsystem as well as its neighboring subsystems. The state information of neighboring interconnected subsystems is acquired through direct communication. The overall update laws for all subsystems form coupled continuous-time gradient flow dynamics, for which finite-time Lyapunov stability analysis is performed. As a byproduct of this Lyapunov analysis, easy-to-check rank conditions on the data stored in the distributed memories of the subsystems are obtained, under which finite-time stability of the distributed identifier is guaranteed. These rank conditions replace the restrictive persistence of excitation (PE) conditions, which are hard and even impossible to achieve and verify for interconnected subsystems. Finally, simulation results verify the effectiveness of the presented distributed method in comparison with other methods.
Keywords: distributed concurrent learning; finite-time identification; nonlinear interconnected systems; unknown dynamics
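The role of the recorded-data memory can be seen in a toy, single-subsystem version of concurrent learning for a linear-in-parameters model y = theta^T phi(x). The regressor, the stored samples, and the discrete-time update are all illustrative; the point is that a rank-sufficient memory replaces persistent excitation of the live input.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([0.8, -1.2])
phi = lambda x: np.array([x, x**2])              # illustrative regressor

# Recorded data batch whose regressors span R^2 (the "rank condition").
memory = [(phi(x), theta_true @ phi(x)) for x in (0.5, -1.0, 2.0)]

theta, lr = np.zeros(2), 0.05
for t in range(2000):
    x = 0.3                                       # deliberately non-exciting live input
    p, y = phi(x), theta_true @ phi(x)
    grad = p * (p @ theta - y)                    # current-sample term
    grad += sum(pk * (pk @ theta - yk) for pk, yk in memory)  # memory term
    theta -= lr * grad
print(theta)  # converges to theta_true because the stored batch has rank 2
```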
17. A new accelerating algorithm for multi-agent reinforcement learning (Cited: 1)
Authors: Zhang Rubo, Zhong Yu, Gu Guochang. Journal of Harbin Institute of Technology (New Series), EI CAS, 2005, Issue 1, pp. 48-51 (4 pages)
In multi-agent systems, joint actions must be employed to achieve cooperation, because the evaluation of an agent's behavior often depends on the other agents' behaviors. However, joint-action reinforcement learning algorithms suffer from slow convergence because of the enormous learning space produced by joint actions. In this article, a prediction-based reinforcement learning algorithm is presented for multi-agent cooperation tasks, which requires all agents to learn to predict the probabilities of the actions that other agents may execute. A multi-robot cooperation experiment is run to test the efficacy of the new algorithm, and the results show that it learns the cooperation policy much faster than the primitive reinforcement learning algorithm.
Keywords: distributed reinforcement learning; accelerating algorithm; machine learning; multi-agent system
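The prediction idea can be sketched for a two-agent case: keep empirical action counts for the other agent and best-respond to the predicted distribution, which shrinks the effective joint-action space. All names and the Laplace-smoothed opponent model below are illustrative, not the paper's exact formulation.

```python
import numpy as np

n_actions = 3
counts = np.ones(n_actions)                 # Laplace-smoothed observations of the other agent
Q = np.zeros((n_actions, n_actions))        # Q[my_action, other_action]

def predict_other():
    return counts / counts.sum()            # estimated action probabilities

def choose_action():
    expected = Q @ predict_other()          # expected value of each of my actions
    return int(expected.argmax())

def observe(other_action, my_action, reward, alpha=0.1):
    counts[other_action] += 1               # update the opponent model
    Q[my_action, other_action] += alpha * (reward - Q[my_action, other_action])
```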
18. A survey on federated learning: a perspective from multi-party computation (Cited: 2)
Authors: Fengxia LIU, Zhiming ZHENG, Yexuan SHI, Yongxin TONG, Yi ZHANG. Frontiers of Computer Science, SCIE EI CSCD, 2024, Issue 1, pp. 93-103 (11 pages)
Federated learning is a promising learning paradigm that allows collaborative training of models across multiple data owners without sharing their raw datasets. To enhance privacy in federated learning, multi-party computation can be leveraged for secure communication and computation during model training. This survey provides a comprehensive review of how to integrate mainstream multi-party computation techniques into diverse federated learning setups for guaranteed privacy, as well as the corresponding optimization techniques that improve model accuracy and training efficiency. We also pinpoint future directions for deploying federated learning in a wider range of applications.
Keywords: federated learning; multi-party computation; privacy-preserving data mining; distributed learning
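One of the most common MPC building blocks in this setting is secure aggregation with pairwise additive masks: each pair of clients shares a mask that cancels in the sum, so the server learns only the aggregate. The sketch below is a minimal real-valued version; production protocols work over integers modulo a prime and handle client dropout.

```python
import numpy as np

rng = np.random.default_rng(42)
updates = [rng.normal(size=4) for _ in range(3)]        # private client updates

n = len(updates)
masks = {(i, j): rng.normal(size=4) for i in range(n) for j in range(i + 1, n)}

def masked(i):
    out = updates[i].copy()
    for (a, b), m in masks.items():      # add if lower id in the pair, subtract if higher
        if a == i:
            out += m
        elif b == i:
            out -= m
    return out                           # looks like noise to the server

server_sum = sum(masked(i) for i in range(n))           # masks cancel pairwise
assert np.allclose(server_sum, sum(updates))
```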
19. Attenuate Class Imbalance Problem for Pneumonia Diagnosis Using Ensemble Parallel Stacked Pre-Trained Models
Authors: Aswathy Ravikumar, Harini Sriraman. Computers, Materials & Continua, SCIE EI, 2023, Issue 4, pp. 891-909 (19 pages)
Pneumonia is an acute lung infection that has caused many fatalities globally. Radiologists often employ chest X-rays to identify pneumonia, since they are presently the most effective imaging method for this purpose. Computer-aided diagnosis of pneumonia using deep learning techniques is widely used due to its effectiveness and performance. In the proposed method, the Synthetic Minority Oversampling Technique (SMOTE) is used to eliminate the class imbalance in the X-ray dataset. To compensate for the paucity of accessible data, pre-trained transfer learning is used, and an ensemble convolutional neural network (CNN) model is developed. The ensemble model consists of all possible combinations of the MobileNetV2, Visual Geometry Group (VGG16), and DenseNet169 models. MobileNetV2 and DenseNet169 performed well as single classifiers, with an accuracy of 94%, while the ensemble model (MobileNetV2+DenseNet169) achieved an accuracy of 96.9%. Using the data-synchronous parallel model in distributed TensorFlow, the training process accelerated performance by 98.6% and outperformed other conventional approaches.
Keywords: pneumonia prediction; distributed deep learning; data parallel model; ensemble deep learning; class imbalance; skewed data
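The SMOTE rebalancing step is a standard call in the imbalanced-learn library (`pip install imbalanced-learn`). The sketch below uses synthetic flattened feature vectors as a stand-in for the X-ray features; the class sizes are invented for illustration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (900, 64)),     # majority class: pneumonia
               rng.normal(2, 1, (100, 64))])    # minority class: normal
y = np.array([1] * 900 + [0] * 100)

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_res))  # -> [900 900]: minority class filled with synthetic samples
```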
20. Overhead-free Noise-tolerant Federated Learning: A New Baseline
Authors: Shiyi Lin, Deming Zhai, Feilong Zhang, Junjun Jiang, Xianming Liu, Xiangyang Ji. Machine Intelligence Research, EI CSCD, 2024, Issue 3, pp. 526-537 (12 pages)
Federated learning (FL) is a promising decentralized machine learning approach that enables multiple distributed clients to train a model jointly while keeping their data private. However, in real-world scenarios, the supervised training data stored on local clients inevitably suffer from imperfect annotations, resulting in subjective, inconsistent and biased labels. These noisy labels can harm the collaborative aggregation process of FL by inducing inconsistent decision boundaries. Unfortunately, few attempts have been made towards noise-tolerant federated learning, and most of them rely on transmitting overhead messages to assist noisy-label detection and correction, which increases the communication burden as well as privacy risks. In this paper, we propose a simple yet effective method for noise-tolerant FL based on the well-established co-training framework. Our method leverages the inherent discrepancy in the learning ability of the local and global models in FL, which can be regarded as two complementary views. By iteratively exchanging samples with high-confidence predictions, the two models "teach each other" to suppress the influence of noisy labels. The proposed scheme is overhead-free and can serve as a robust and efficient baseline for noise-tolerant federated learning. Experimental results demonstrate that our method outperforms existing approaches, highlighting its superiority.
Keywords: federated learning; noisy-label learning; privacy-preserving machine learning; edge intelligence; distributed machine learning
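The co-training exchange can be sketched generically: each view labels the samples it is confident about, and its peer trains on those relabeled samples. The confidence threshold and the `fit_*` callbacks are illustrative assumptions standing in for whatever one-step training routines the local and global views expose.

```python
import numpy as np

def confident_set(probs, tau=0.95):
    conf = probs.max(axis=1)
    idx = np.where(conf > tau)[0]
    return idx, probs[idx].argmax(axis=1)       # sample indices + predicted labels

def co_training_round(local_probs, global_probs, X, fit_local, fit_global):
    idx_l, lab_l = confident_set(local_probs)   # local view teaches the global model
    idx_g, lab_g = confident_set(global_probs)  # global view teaches the local model
    fit_global(X[idx_l], lab_l)
    fit_local(X[idx_g], lab_g)

# Because both models already live on the client, no message beyond the
# normal FL traffic is needed, which is the "overhead-free" property.
```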