Journal Articles — 7 results found
1. Experimental study by online measurement of the precipitation of nickel hydroxide: Effects of operating conditions (Cited by: 2)
Authors: Weiwei E, Jingcai Cheng, Chao Yang, Zaisha Mao. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2015, Issue 5, pp. 860-867.
The objective of this work is to use an online measurement method to study the precipitation of nickel hydroxide in a single-feed semi-batch stirred reactor with an internal diameter of D = 240 mm. The effects of impeller speed, impeller type, impeller diameter, and feed location on the mean particle size d43 and the particle size distribution (PSD) were investigated. d43 and the PSD were measured online every 20 s using a Malvern Insitec Liquid Process Sizer. It was found that d43 varied between 13 μm and 26 μm under different operating conditions, and it decreased with increasing impeller diameter. When feeding at an off-bottom distance of D/2 under lower impeller speeds, d43 was significantly smaller than when feeding at D/3. PSDs were only slightly influenced by the operating conditions.
Keywords: Nickel hydroxide; Precipitation; Particle size distribution; Online measurement; Stirred tank; Mixing
2. Privacy Preserving Distributed Bandit Residual Feedback Online Optimization Over Time-Varying Unbalanced Graphs
Authors: Zhongyuan Zhao, Zhiqiang Yang, Luyao Jiang, Ju Yang, Quanbo Ge. IEEE/CAA Journal of Automatica Sinica (SCIE, EI), 2024, Issue 11, pp. 2284-2297.
This paper considers the distributed online optimization (DOO) problem over time-varying unbalanced networks, where gradient information is explicitly unknown. To address this issue, a privacy-preserving distributed online one-point residual feedback (OPRF) optimization algorithm is proposed. This algorithm updates decision variables by leveraging one-point residual feedback to estimate the true gradient information. It achieves the same performance as the two-point feedback scheme while requiring only a single function value query per iteration. Additionally, it effectively eliminates the effect of time-varying unbalanced graphs by dynamically constructing row-stochastic matrices. Furthermore, compared to other distributed optimization algorithms that only consider explicitly unknown cost functions, this paper also addresses the issue of privacy information leakage of nodes. Theoretical analysis demonstrates that the method attains sublinear regret while protecting the privacy information of agents. Finally, numerical experiments on a distributed collaborative localization problem and on federated learning confirm the effectiveness of the algorithm.
Keywords: Differential privacy; Distributed online optimization (DOO); Federated learning; One-point residual feedback (OPRF); Time-varying unbalanced graphs
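The one-point residual feedback idea summarized above can be sketched as follows. This is a minimal illustrative implementation of a generic OPRF gradient estimator, not the paper's actual algorithm; the function name, interface, and smoothing radius are assumptions.

```python
import numpy as np

def oprf_gradient(f_t, f_prev, x, delta, rng):
    """One-point residual feedback (OPRF) gradient estimate (illustrative).

    Queries the current cost f_t once at a randomly perturbed point and
    reuses the function value stored from the previous round, so only a
    single function evaluation is needed per iteration.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)              # uniform random unit direction
    f_val = f_t(x + delta * u)          # the single new query this round
    grad_est = (d / delta) * (f_val - f_prev) * u
    return grad_est, f_val              # carry f_val into the next round


# One bandit-style update step: move along the estimated gradient.
rng = np.random.default_rng(0)
x = np.ones(3)
g, f_prev = oprf_gradient(lambda z: float(z @ z), 0.0, x, delta=0.1, rng=rng)
x = x - 0.01 * g
```

In a full online loop, `f_prev` is simply threaded from one round to the next, which is what removes the second query that a two-point scheme would need.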
3. Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions
Authors: Xiaoxi Yan, Cheng Li, Kaihong Lu, Hang Xu. Control Theory and Technology (EI, CSCD), 2024, Issue 1, pp. 14-24.
This paper focuses on the online distributed optimization problem over multi-agent systems. In this problem, each agent can only access its own cost function and a convex set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents do not have access to information about the current cost functions until decisions are made. Different from most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and real gradients of the cost functions are not available. To handle this problem, a random gradient-free online distributed algorithm involving a multi-point gradient estimator is proposed. Of particular interest is that under the proposed algorithm, each agent only uses gradient estimates instead of the real gradient information to make decisions. The dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
Keywords: Multi-agent system; Online distributed optimization; Pseudoconvex optimization; Random gradient-free method
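For intuition about gradient-free estimation of the kind mentioned above, the standard symmetric two-point construction (one instance of a multi-point estimator) can be sketched as below. This is a generic textbook form given for illustration; it is an assumption, not the specific estimator used in the paper.

```python
import numpy as np

def two_point_gradient(f, x, delta, rng):
    """Symmetric two-point gradient estimate (illustrative).

    Samples a random unit direction u and forms a finite difference from
    two function queries along +u and -u. Its expectation approximates
    the true gradient as delta -> 0, so no real gradient is needed.
    """
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # random unit direction
    diff = f(x + delta * u) - f(x - delta * u)  # two function queries
    return (d / (2.0 * delta)) * diff * u


rng = np.random.default_rng(1)
x = np.array([1.0, -2.0])
g = two_point_gradient(lambda z: float(z @ z), x, delta=1e-3, rng=rng)
```

Averaging several such estimates reduces variance; a multi-point estimator generalizes this by querying the function in more than two perturbed directions per round.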
4. Zeroth-Order Methods for Online Distributed Optimization with Strongly Pseudoconvex Cost Functions
Authors: Xiaoxi Yan, Muyuan Ma, Kaihong Lu. Journal of Systems Science and Information (CSCD), 2024, Issue 1, pp. 145-160.
This paper studies an online distributed optimization problem over multi-agent systems. In this problem, the goal of the agents is to cooperatively minimize the sum of locally dynamic cost functions. Different from most existing works on distributed optimization, here we consider the case where the cost function is strongly pseudoconvex and real gradients of the objective functions are not available. To handle this problem, an online zeroth-order stochastic optimization algorithm involving a single-point gradient estimator is proposed. Under the algorithm, each agent only has access to the information associated with its own cost function and the estimate of the gradient, and exchanges local state information with its immediate neighbors via a time-varying digraph. The performance of the algorithm is measured by the expectation of the dynamic regret. Under mild assumptions on the graphs, we prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret grows sublinearly. Finally, a simulation example is given to illustrate the validity of our results.
Keywords: Multi-agent systems; Strongly pseudoconvex function; Single-point gradient estimator; Online distributed optimization
5. An Efficient Multi-area Networks-Merging Model for Power System Online Dynamic Modeling (Cited by: 3)
Authors: Chuan Qin, Ping Ju, Feng Wu, Yongkang Liu, Xiaohui Ye, Guoyang Wu. CSEE Journal of Power and Energy Systems (SCIE), 2015, Issue 4, pp. 22-28.
To improve accuracy and efficiency in power system dynamic modeling, the distributed online modeling approach is a good option. In this approach, the power system is divided into sub-grids, and the dynamic models of the sub-grids are built independently within the distributed modeling system. The sub-grid models are subsequently merged, after which the dynamic model of the whole power system is finally constructed online. The merging of the networks plays an important role in the distributed online dynamic modeling of power systems. An efficient multi-area networks-merging model that can rapidly match the boundary power flow is proposed in this paper. The iterations of boundary matching during network merging are eliminated by the introduction of the merging model, and the dynamic models of the sub-grids can be directly "plugged in" to each other. The results of calculations performed on a real power system demonstrate the accuracy of the integrated model under both steady and transient states.
Keywords: Boundary matching; Distributed online modeling; Dynamic modeling; Multi-area networks-merging model; Power system
6. Distributed regularized online optimization using forward-backward splitting
Authors: Deming Yuan, Baoyong Zhang, Shengyuan Xu, Huanyu Zhao. Control Theory and Technology (EI, CSCD), 2023, Issue 2, pp. 212-221.
This paper considers the problem of distributed online regularized optimization over a network that consists of multiple interacting nodes. Each node is endowed with a sequence of loss functions that are time-varying and a regularization function that is fixed over time. A distributed forward-backward splitting algorithm is proposed for solving this problem, and both fixed and adaptive learning rates are adopted. For both cases, we show that the regret upper bounds scale as O(√T), where T is the time horizon. In particular, these rates match their centralized counterparts. Finally, we show the effectiveness of the proposed algorithms on an online distributed regularized linear regression problem.
Keywords: Distributed online optimization; Regularized online learning; Regret; Forward-backward splitting
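The forward-backward splitting update the abstract refers to can be illustrated for an ℓ1 regularizer, whose proximal operator is soft-thresholding. This is a generic single-node sketch under assumed names and step sizes, not the paper's distributed algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fbs_step(x, grad, eta, tau):
    """One forward-backward splitting step for loss(x) + tau * ||x||_1.

    Forward step: gradient descent on the smooth, time-varying loss.
    Backward step: proximal map of the fixed regularizer.
    """
    return soft_threshold(x - eta * grad, eta * tau)


x = np.array([1.0, -0.2, 0.05])
x_next = fbs_step(x, grad=np.zeros(3), eta=0.1, tau=1.0)
# With a zero gradient, the step reduces to soft-thresholding by eta*tau,
# shrinking every coordinate toward zero: [0.9, -0.1, 0.0]
```

In the distributed setting, each node would additionally average its iterate with its neighbors' iterates before (or after) applying this local step.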
7. Boosting for Distributed Online Convex Optimization
Authors: Yuhan Hu, Yawei Zhao, Lailong Luo, Deke Guo. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2023, Issue 4, pp. 811-821.
Decentralized Online Learning (DOL) extends online learning to the domain of distributed networks. However, limitations of local data in decentralized settings lead to a decrease in the accuracy of decisions or models compared with centralized methods. Considering the increasing requirement to achieve a high-precision model or decision with distributed data resources in a network, ensemble methods are applied to achieve a superior model or decision while transferring only gradients or models. A new boosting method, namely Boosting for Distributed Online Convex Optimization (BD-OCO), is designed to realize the application of boosting in distributed scenarios. BD-OCO achieves the regret upper bound O(M+N/MNT), where M measures the size of the distributed network and N is the number of Weak Learners (WLs) in each node. The core idea of BD-OCO is to apply the local models to train a strong global one. BD-OCO is evaluated on eight different real-world datasets. Numerical results show that BD-OCO achieves excellent performance in accuracy and convergence, and is robust to the size of the distributed network.
Keywords: Distributed Online Convex Optimization (OCO); Online boosting; Online Gradient Boosting (OGB)