Boosting for Distributed Online Convex Optimization
Authors: Yuhan Hu, Yawei Zhao, Lailong Luo, Deke Guo. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2023, Issue 4, pp. 811-821 (11 pages)
Decentralized Online Learning (DOL) extends online learning to distributed networks. However, the limited local data available at each node reduces the accuracy of decisions or models compared with centralized methods. Given the growing need for high-precision models and decisions built from distributed data resources, ensemble methods are applied to obtain a superior model or decision while transferring only gradients or models. A new boosting method, Boosting for Distributed Online Convex Optimization (BD-OCO), is designed to realize boosting in distributed scenarios. BD-OCO achieves a regret upper bound of O((M+N)/(MN) · T), where M measures the size of the distributed network and N is the number of Weak Learners (WLs) at each node. The core idea of BD-OCO is to use local models to train a strong global one. BD-OCO is evaluated on eight real-world datasets. Numerical results show that BD-OCO achieves excellent accuracy and convergence, and is robust to the size of the distributed network.
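To make the boosting idea in the abstract concrete, the sketch below illustrates generic online gradient boosting on a single node: N weak online linear learners are trained in stages, each fitting the residual left by the previous stages via one online gradient step per round. This is an illustrative sketch only, not the BD-OCO algorithm (it omits the distributed network of M nodes and the gradient/model exchange); all class names, the shrinkage factor `eta`, and the learning rate are hypothetical choices.

```python
import numpy as np


class WeakLearner:
    """A weak online learner: a linear model updated by online gradient descent."""

    def __init__(self, dim, lr=0.05):
        self.w = np.zeros(dim)
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x)


class OnlineGradientBooster:
    """Staged online gradient boosting for squared loss (single-node sketch,
    not BD-OCO itself). Each of the N weak learners fits the residual left
    by the stages before it."""

    def __init__(self, dim, n_learners=5, eta=0.5):
        self.learners = [WeakLearner(dim) for _ in range(n_learners)]
        self.eta = eta  # shrinkage applied to each stage's contribution

    def predict(self, x):
        return sum(self.eta * wl.predict(x) for wl in self.learners)

    def update(self, x, y):
        partial = 0.0
        for wl in self.learners:
            residual = y - partial  # what remains for this stage to explain
            # one OGD step on this learner's squared loss against the residual
            err = wl.predict(x) - residual
            wl.w -= wl.lr * err * x
            partial += self.eta * wl.predict(x)


# Usage: track the target y = 2*x0 - x1 in an online stream.
rng = np.random.default_rng(0)
booster = OnlineGradientBooster(dim=2, n_learners=5, eta=0.5)
losses = []
for t in range(2000):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - 1.0 * x[1]
    pred = booster.predict(x)  # predict before seeing the label (online protocol)
    losses.append(0.5 * (pred - y) ** 2)
    booster.update(x, y)
print(f"final avg loss: {np.mean(losses[-100:]):.4f}")
```

With shrinkage eta = 0.5 and five stages, the combined model can recover at most a (1 - 0.5^5) fraction of the target, so the loss plateaus near zero rather than at it; more stages (larger N) tighten this gap, mirroring the role N plays in the regret bound.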
Keywords: distributed; online Convex Optimization (OCO); online boosting; online Gradient Boosting (OGB)