Journal Articles
8 articles found
1. FedTC: A Personalized Federated Learning Method with Two Classifiers
Authors: Yang Liu, Jiabo Wang, Qinbo Liu, Mehdi Gheisari, Wanyin Xu, Zoe L. Jiang, Jiajia Zhang. Computers, Materials & Continua (SCIE, EI), 2023, No. 9, pp. 3013-3027 (15 pages).
Centralized training of deep learning models poses privacy risks that hinder their deployment. Federated learning (FL) has emerged as a solution to these risks, allowing multiple clients to train deep learning models collaboratively without sharing raw data. However, FL is vulnerable to heterogeneous distributed data, which weakens convergence stability and leads to suboptimal performance of the trained model on local data. This is because the old local model is discarded at each round of training, losing the personalized information that is critical for maintaining model accuracy and ensuring robustness. In this paper, we propose FedTC, a personalized federated learning method with two classifiers that retains personalized information in the local model and improves the model's performance on local data. FedTC divides the model into two parts, the extractor and the classifier, where the classifier is the last layer of the model and the extractor consists of the other layers. The classifier in the local model is always retained so that personalized information is not lost. After receiving the global model, the local extractor is overwritten by the global model's extractor, and the classifier of the global model serves as an additional classifier of the local model to guide local training. FedTC introduces a two-classifier training strategy to coordinate the two classifiers for local model updates. Experimental results on the CIFAR-10 and CIFAR-100 datasets demonstrate that FedTC performs better on heterogeneous data than existing approaches such as FedAvg, FedPer, and local training, achieving a maximum improvement of 27.95% in classification test accuracy over FedAvg.
Keywords: distributed machine learning; federated learning; data heterogeneity; non-independent identically distributed
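For intuition, the two-classifier update described in the abstract can be pictured with a short sketch. The PyTorch snippet below is a minimal illustration under assumed names: the toy linear extractor, the head names, and the loss-mixing weight `alpha` are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a two-classifier local update in the spirit of FedTC.
# Architecture, head names, and `alpha` are assumptions, not the paper's code.
import torch
import torch.nn as nn

class TwoClassifierModel(nn.Module):
    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.local_head = nn.Linear(hidden, num_classes)   # always kept locally
        self.global_head = nn.Linear(hidden, num_classes)  # refreshed each round

def local_update(model, global_extractor, global_head, loader, alpha=0.5, lr=0.01):
    # Overwrite the extractor with the global one, keep the local classifier,
    # and let the global classifier act as an extra head that guides training.
    model.extractor.load_state_dict(global_extractor.state_dict())
    model.global_head.load_state_dict(global_head.state_dict())
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for x, y in loader:
        feats = model.extractor(x)
        loss = alpha * ce(model.local_head(feats), y) + \
               (1 - alpha) * ce(model.global_head(feats), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```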
2. ASCFL: Accurate and Speedy Semi-Supervised Clustering Federated Learning (cited: 2)
Authors: Jingyi He, Biyao Gong, Jiadi Yang, Hai Wang, Pengfei Xu, Tianzhang Xing. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2023, No. 5, pp. 823-837 (15 pages).
The influence of non-independent and identically distributed (non-IID) data on federated learning (FL) has been a serious concern. Clustered federated learning (CFL) is an emerging approach for reducing the impact of non-IID data, which clusters clients by similarity computed from relevant metrics. Unfortunately, existing CFL methods pursue accuracy improvement alone and ignore the convergence rate. Additionally, the client selection strategy affects the clustering results. Finally, traditional semi-supervised learning changes the distribution of data on clients, resulting in higher local costs and undesirable performance. In this paper, we propose a novel CFL method named ASCFL, which selects the clients that participate in training and can dynamically adjust the balance between accuracy and convergence speed on datasets consisting of labeled and unlabeled data. To deal with unlabeled data, a label-prediction strategy infers labels using encoders. The client selection strategy improves accuracy and reduces overhead by selecting clients with higher losses to participate in the current round. Moreover, the similarity-based clustering strategy uses a new indicator to measure the similarity between clients. Experimental results show that ASCFL has clear advantages in model accuracy and convergence speed over three state-of-the-art methods on two popular datasets.
Keywords: federated learning; clustered federated learning; non-independent and identically distributed (non-IID) data; similarity indicator; client selection; semi-supervised learning
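The loss-based client selection described in the abstract admits a very small sketch. The fraction `top_k` and the data layout below are assumptions for illustration, not the paper's procedure.

```python
# Illustrative sketch: clients with higher local losses join the current round.
# `top_k` and the random toy losses are assumptions, not ASCFL's exact rule.
import random

def select_clients(client_losses, top_k=0.3):
    """client_losses: dict mapping client_id -> latest local loss."""
    n_selected = max(1, int(len(client_losses) * top_k))
    # Rank clients by loss, highest first, and take the top fraction.
    ranked = sorted(client_losses, key=client_losses.get, reverse=True)
    return ranked[:n_selected]

losses = {f"client_{i}": random.uniform(0.1, 2.0) for i in range(10)}
print(select_clients(losses))
```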
3. The laws of large numbers for Pareto-type random variables under sub-linear expectation
Authors: Binxia Chen, Qunying Wu. Frontiers of Mathematics in China (SCIE, CSCD), 2022, No. 5, pp. 783-796 (14 pages).
In this paper, some laws of large numbers are established for random variables that satisfy the Pareto distribution, extending the relevant conclusions in traditional probability space to the sub-linear expectation space. Based on the Pareto distribution, we obtain the weak law of large numbers and the strong law of large numbers for weighted sums of certain independent random variable sequences.
Keywords: sub-linear expectation; Pareto-type distribution; laws of large numbers; independent and identical distribution
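For background, laws of large numbers under a sub-linear expectation typically take the form of Peng's classical result, sketched below; this is context only, not the paper's statement for weighted sums of Pareto-type variables.

```latex
% Background only: Peng's law of large numbers under a sub-linear expectation
% \hat{\mathbb{E}}, the kind of result the paper extends to Pareto-type
% weighted sums. Not the paper's exact theorem.
\[
  \lim_{n\to\infty} \hat{\mathbb{E}}\!\left[\varphi\!\left(\frac{S_n}{n}\right)\right]
  = \max_{\underline{\mu}\,\le\,\mu\,\le\,\overline{\mu}} \varphi(\mu),
  \qquad S_n = \sum_{i=1}^{n} X_i,
\]
where $\overline{\mu} = \hat{\mathbb{E}}[X_1]$, $\underline{\mu} = -\hat{\mathbb{E}}[-X_1]$,
the $X_i$ are i.i.d. under $\hat{\mathbb{E}}$, and $\varphi$ is any bounded
continuous function.
```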
4. Adjoining Batch Markov Arrival Processes of a Markov Chain (cited: 1)
Authors: Xiao-yun Mo, Xu-yan Xiang, Xiang-qun Yang. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2018, No. 1, pp. 1-10 (10 pages).
A batch Markov arrival process (BMAP) X* = (N, J) is a 2-dimensional Markov process with two components: the counting process N and the phase process J. It is known that the phase process is a time-homogeneous Markov chain with a finite state space (a Markov chain, for short). In this paper, a new, inverse problem is posed for the first time: given a Markov chain J, can we construct a process N such that the 2-dimensional process X* = (N, J) is a BMAP? Such a process X* = (N, J) is said to be an adjoining BMAP for the Markov chain J. For a given Markov chain, adjoining processes exist and are not unique. Two kinds of adjoining BMAPs are constructed: BMAPs with fixed constant batches, and BMAPs with independent and identically distributed (i.i.d.) random batches. The method used in this paper is not the usual matrix-analytic method of studying BMAPs but a path-analytic method: sample paths of the adjoining BMAPs are constructed directly. Expressions for the characteristics (D_k, k = 0, 1, 2, ...) and the transition probabilities of the adjoining BMAP are obtained from the density matrix Q of the given Markov chain J, and two further theorems are established. These expressions are presented here for the first time.
Keywords: Markov chain; batch Markov arrival process (BMAP); adjoining BMAP; fixed constant batch; independent and identically distributed (i.i.d.) random batch
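The path-analytic idea can be illustrated by direct simulation: run the chain J from its density matrix Q and attach a batch to each jump, either a fixed constant or an i.i.d. draw. Attaching batches at the jumps and the geometric batch law below are illustrative assumptions, not necessarily the paper's exact construction.

```python
# Hedged sketch: simulate a sample path of (N, J) by attaching a batch to
# each jump of the Markov chain J with generator (density matrix) Q.
import numpy as np

rng = np.random.default_rng(0)

def simulate_adjoining_bmap(Q, t_end, batch=lambda: 1, j0=0):
    """Return the path [(time, count, phase), ...] on [0, t_end]."""
    t, j, n, path = 0.0, j0, 0, [(0.0, 0, j0)]
    while True:
        rate = -Q[j, j]
        t += rng.exponential(1.0 / rate)       # holding time in state j
        if t > t_end:
            return path
        probs = np.maximum(Q[j], 0.0) / rate   # jump-chain probabilities
        j = rng.choice(len(probs), p=probs)
        n += batch()                           # batch arrival at the jump
        path.append((t, n, j))

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
print(simulate_adjoining_bmap(Q, 5.0))                                    # fixed batch = 1
print(simulate_adjoining_bmap(Q, 5.0, batch=lambda: rng.geometric(0.5)))  # i.i.d. batches
```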
5. Exponential Inequality for a Class of NOD Random Variables and Its Application (cited: 1)
Authors: Guodong Xing, Shanchao Yang. Wuhan University Journal of Natural Sciences (CAS), 2011, No. 1, pp. 7-10 (4 pages).
In this paper, an exponential inequality for weighted sums of identically distributed NOD (negatively orthant dependent) random variables is established, from which we obtain an almost sure convergence rate that matches the rate available for independent random variables via Bernstein-type inequalities. As an application, we derive the corresponding exponential inequality for the Priestley-Chao estimator in nonparametric regression under NOD samples, from which a strong consistency rate is also obtained.
Keywords: identically distributed NOD (negatively orthant dependent) random variables; weighted sums; exponential inequality; almost sure convergence rate; Priestley-Chao estimator
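For reference, the classical Bernstein inequality for independent, bounded, mean-zero variables, which is the benchmark the abstract alludes to, reads as follows (the paper's NOD version is not reproduced here):

```latex
% Reference only: the classical Bernstein inequality for independent,
% bounded, mean-zero random variables, the independent-case benchmark.
\[
  \mathbb{P}\!\left(\left|\sum_{i=1}^{n} X_i\right| \ge t\right)
  \le 2\exp\!\left(-\frac{t^2}{2\sum_{i=1}^{n}\mathbb{E}X_i^2 + \tfrac{2}{3}Mt}\right),
  \qquad |X_i| \le M \ \text{a.s.}
\]
```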
6. Novel parametric optimum processing method for airborne radar (cited: 1)
Authors: Jia Xu, Yingning Peng, Qun Wan, Liping Zhang, Yan Lin, Xianggen Xia. Science in China (Series F), 2004, No. 6, pp. 706-716 (11 pages).
In radar target detection, an optimum processor needs to adapt its weights automatically as the environment changes. Conventionally, the optimum weights are obtained from a substantial number of independent and identically distributed (i.i.d.) interference samples, which is not always realistic in the inhomogeneous clutter background of airborne radar. The lack of i.i.d. samples inevitably degrades the performance of optimum processing. In this paper, a novel parametric adaptive processing method is proposed for airborne radar target detection, based on a modified Doppler distributed clutter (DDC) model that accounts for the clutter's internal motion. It differs from conventional methods in that the adaptive weights are determined by two parameters of the DDC model, namely the angular center and the angular spread. A low-complexity nonlinear-operator approach is also proposed to estimate these parameters. Simulations and performance analysis show that the proposed method remarkably reduces the dependence on i.i.d. samples and is computationally efficient for practical use.
Keywords: airborne radar; adaptive implementation of optimum processing (AIOP); Doppler distributed clutter (DDC) model; independent and identically distributed (i.i.d.) sampling; nonlinear energy operator (NLOP)
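Once the two model parameters are estimated, a parametric clutter covariance can be formed and the optimum weights computed in the standard way, w = R^{-1}s / (s^H R^{-1}s). The NumPy sketch below assumes a Gaussian-shaped clutter spectrum centered at `mu` with spread `sigma` purely for illustration; it is not the paper's DDC model or its estimator.

```python
# Hedged sketch: parametric optimum weights from an assumed Gaussian-spread
# clutter covariance. Model shape, CNR, and parameters are illustrative.
import numpy as np

def ddc_covariance(n, mu, sigma, cnr=1e3):
    """Toy parametric clutter-plus-noise covariance for an n-pulse train."""
    k = np.arange(n)
    diff = k[:, None] - k[None, :]
    r = np.exp(1j * mu * diff) * np.exp(-0.5 * (sigma * diff) ** 2)
    return cnr * r + np.eye(n)   # unit-power noise floor

def optimum_weights(R, target_doppler, n):
    s = np.exp(1j * target_doppler * np.arange(n))  # target steering vector
    w = np.linalg.solve(R, s)                       # w proportional to R^{-1} s
    return w / (s.conj() @ w)                       # normalize: s^H w = 1

n = 16
R = ddc_covariance(n, mu=0.3, sigma=0.05)
w = optimum_weights(R, target_doppler=1.2, n=n)
print(abs(w.conj() @ np.exp(1j * 1.2 * np.arange(n))))  # unit target gain
```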
7. Random Stabilization of Sampled-data Control Systems with Nonuniform Sampling
Authors: Bin Tang, Qi-Jie Zeng, De-Feng He, Yun Zhang (School of Automation, Guangdong University of Technology, Guangzhou 510006, China). International Journal of Automation and Computing (EI), 2012, No. 5, pp. 492-500 (9 pages).
For a sampled-data control system with nonuniform sampling, the sampling interval sequence, continuously distributed in a given interval, is described as an independent and identically distributed (i.i.d.) process. With this process, the closed-loop system is transformed into an asynchronous dynamical impulsive model with input delays. Sufficient conditions for closed-loop mean-square exponential stability are presented in terms of linear matrix inequalities (LMIs), in which the relation between the nonuniform sampling and the mean-square exponential stability of the closed-loop system is explicitly established. Based on the stability conditions, a controller design method is given and further formulated as a convex optimization problem with LMI constraints. Numerical examples and experimental results demonstrate the effectiveness and advantages of the theoretical results.
Keywords: sampled-data control system; nonuniform sampling; independent and identically distributed (i.i.d.) process; mean-square exponential stability; controller design
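The mean-square stability notion used above can be tested numerically when the sampling interval takes finitely many values: the closed loop is mean-square exponentially stable iff the Lyapunov-type equation E[A_cl(h)^T P A_cl(h)] - P = -I has a solution P > 0. The sketch below, with an assumed double-integrator plant, gain, and interval distribution, illustrates this test; it is not the paper's LMI conditions, which also account for input delays.

```python
# Hedged sketch of a mean-square stability test under i.i.d. sampling
# intervals. Plant, gain K, and the interval distribution are assumptions.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.0, -3.0]])                  # given state-feedback gain
intervals, probs = [0.1, 0.3, 0.5], [0.5, 0.3, 0.2]

def closed_loop(h):
    """Exact discretization of dx/dt = Ax + Bu with u = Kx held over [0, h]."""
    n = A.shape[0]
    M = expm(np.block([[A, B], [np.zeros((1, n + 1))]]) * h)
    return M[:n, :n] + M[:n, n:] @ K

n = A.shape[0]
# Solve  sum_i p_i A_i^T P A_i - P = -I  via its Kronecker (vectorized) form.
T = sum(p * np.kron(Ah.T, Ah.T)
        for p, Ah in zip(probs, (closed_loop(h) for h in intervals)))
P = np.linalg.solve(T - np.eye(n * n), -np.eye(n).ravel()).reshape(n, n)
stable = np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P > 0 certifies stability
print("mean-square exponentially stable:", stable)
```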
8. Nonlinear regression without i.i.d. assumption
Authors: Qing Xu, Xiaohua (Michael) Xuan. Probability, Uncertainty and Quantitative Risk, 2019, No. 1, pp. 118-132 (15 pages).
In this paper, we consider a class of nonlinear regression problems without the assumption that the data are independent and identically distributed. We propose a corresponding minimax problem for nonlinear regression and give a numerical algorithm for it. The algorithm can be applied to regression and machine learning problems, and yields better results than traditional least-squares and machine-learning methods.
Keywords: nonlinear regression; minimax; independent identically distributed; least squares; machine learning; quadratic programming
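The minimax formulation replaces the average squared loss of least squares with the worst loss over data blocks that need not share a distribution. The sketch below illustrates the idea with a toy model and scipy's generic optimizer; the block structure and model are assumptions, and the paper's own algorithm is based on quadratic programming rather than this derivative-free solver.

```python
# Hedged sketch: minimize the worst block-wise squared loss instead of the
# average. Toy model, blocks, and optimizer are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def model(theta, x):
    return theta[0] * np.exp(theta[1] * x)      # toy nonlinear model

# Three data blocks drawn under different noise levels (non-i.i.d.).
blocks = []
for s in (0.05, 0.2, 0.8):
    x = rng.uniform(0, 1, 30)
    blocks.append((x, model([2.0, 1.5], x) + rng.normal(0, s, 30)))

def worst_block_loss(theta):                     # minimax objective
    return max(np.mean((model(theta, x) - y) ** 2) for x, y in blocks)

def average_loss(theta):                         # least-squares baseline
    return np.mean([np.mean((model(theta, x) - y) ** 2) for x, y in blocks])

theta_mm = minimize(worst_block_loss, x0=[1.0, 1.0], method="Nelder-Mead").x
theta_ls = minimize(average_loss, x0=[1.0, 1.0], method="Nelder-Mead").x
print("minimax fit:", theta_mm, " least-squares fit:", theta_ls)
```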