Funding: This work was supported by the National Natural Science Foundation of China (No. U19B2024) and the National Key Research and Development Program (No. 2018YFE0207600).
Abstract: Decentralized Online Learning (DOL) extends online learning to the domain of distributed networks. However, the limited local data available in decentralized settings reduces the accuracy of decisions or models relative to centralized methods. Given the growing need to obtain high-precision models or decisions from distributed data resources in a network, ensemble methods are applied to achieve a superior model or decision while transferring only gradients or models. A new boosting method, Boosting for Distributed Online Convex Optimization (BD-OCO), is designed to bring boosting to distributed scenarios. BD-OCO achieves the regret upper bound O(M+N/MNT), where M measures the size of the distributed network and N is the number of Weak Learners (WLs) at each node. The core idea of BD-OCO is to use local models to train a strong global one. BD-OCO is evaluated on eight different real-world datasets. Numerical results show that BD-OCO achieves excellent accuracy and convergence, and is robust to the size of the distributed network.
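The setting described above — M networked nodes, each holding N weak learners whose outputs are combined into a stronger predictor, with only model parameters exchanged between nodes — can be illustrated with a minimal sketch. This is not the BD-OCO algorithm itself: the class names (`WeakLearner`, `Node`, `gossip_average`), the stagewise residual-fitting combination rule, the complete-graph gossip step, and the learning rate are all assumptions chosen for a self-contained toy example.

```python
import numpy as np

class WeakLearner:
    """A simple weak learner: online gradient descent on a linear model."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, target):
        # Squared-loss gradient step toward this learner's target.
        grad = (self.predict(x) - target) * x
        self.w -= self.lr * grad

class Node:
    """One network node holding N weak learners, combined additively."""
    def __init__(self, dim, n_weak=5, lr=0.1):
        self.learners = [WeakLearner(dim, lr) for _ in range(n_weak)]

    def predict(self, x):
        # Boosted prediction: sum of the weak learners' outputs.
        return sum(wl.predict(x) for wl in self.learners)

    def update(self, x, y):
        # Stagewise boosting: each learner fits the running residual.
        residual = y
        for wl in self.learners:
            pred = wl.predict(x)
            wl.update(x, residual)
            residual -= pred

def gossip_average(nodes):
    """Share models only: average same-index learners across all nodes."""
    for i in range(len(nodes[0].learners)):
        avg = np.mean([nd.learners[i].w for nd in nodes], axis=0)
        for nd in nodes:
            nd.learners[i].w = avg.copy()

# Usage: M nodes learn y = <w_star, x> from local sample streams,
# exchanging only model parameters (no raw data) after every round.
rng = np.random.default_rng(0)
dim, M, T = 3, 4, 400
w_star = np.array([1.0, -2.0, 0.5])
nodes = [Node(dim) for _ in range(M)]
for t in range(T):
    for nd in nodes:
        x = rng.normal(size=dim)
        nd.update(x, w_star @ x)
    gossip_average(nodes)

x_test = np.array([1.0, 1.0, 1.0])
err = abs(nodes[0].predict(x_test) - w_star @ x_test)
print(err)
```

The sketch mirrors the abstract's key constraint: nodes never exchange raw data, only learner parameters, and the boosted global model emerges from averaging locally trained weak learners.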