Funding: the State Key Development Program for Basic Research of China (2013CB632601), the National High Technology Research and Development Program of China (2011AA060704), the National Natural Science Foundation of China (21476236, 91434126), and the National Science Fund for Distinguished Young Scholars (21025627).
Abstract: The objective of this work is to use an online measurement method to study the precipitation of nickel hydroxide in a single-feed semi-batch stirred reactor with an internal diameter of D = 240 mm. The effects of impeller speed, impeller type, impeller diameter and feed location on the mean particle size d43 and the particle size distribution (PSD) were investigated. d43 and the PSD were measured online every 20 s using a Malvern Insitec Liquid Process Sizer. It was found that d43 varied between 13 μm and 26 μm under different operating conditions, and that it decreased with increasing impeller diameter. When feeding at an off-bottom distance of D/2 under lower impeller speeds, d43 was significantly smaller than when feeding at D/3. The PSDs were only slightly influenced by the operating conditions.
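For reference, the volume-weighted mean diameter d43 (the De Brouckere mean) reported in the abstract can be computed directly from a binned PSD. A minimal sketch; the bin midpoints and counts below are hypothetical illustrations, not data from the study:

```python
def d43(diameters, counts):
    """De Brouckere (volume-weighted) mean diameter:
    d43 = sum(n_i * d_i^4) / sum(n_i * d_i^3)."""
    num = sum(n * d ** 4 for d, n in zip(diameters, counts))
    den = sum(n * d ** 3 for d, n in zip(diameters, counts))
    return num / den

# Hypothetical narrow distribution centred near 20 um
sizes = [10.0, 15.0, 20.0, 25.0, 30.0]   # bin midpoints, um
counts = [5, 20, 50, 20, 5]              # particle counts per bin
print(round(d43(sizes, counts), 2))      # 22.74
```

Because the fourth-power weighting emphasizes large particles, d43 is sensitive to the coarse tail of the distribution, which makes it a natural statistic for online monitoring of precipitation.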
Funding: supported by the National Natural Science Foundation of China (62033010, U23B2061) and the Qing Lan Project of Jiangsu Province (R2023Q07).
Abstract: This paper considers the distributed online optimization (DOO) problem over time-varying unbalanced networks, where gradient information is explicitly unknown. To address this issue, a privacy-preserving distributed online one-point residual feedback (OPRF) optimization algorithm is proposed. The algorithm updates the decision variables by using one-point residual feedback to estimate the true gradient information. It achieves the same performance as the two-point feedback scheme while requiring only a single function-value query per iteration. Additionally, it effectively eliminates the effect of time-varying unbalanced graphs by dynamically constructing row-stochastic matrices. Furthermore, in contrast to other distributed optimization algorithms that only consider explicitly unknown cost functions, this paper also addresses the leakage of the nodes' private information. Theoretical analysis demonstrates that the method attains sublinear regret while protecting the privacy information of the agents. Finally, numerical experiments on a distributed collaborative localization problem and on federated learning confirm the effectiveness of the algorithm.
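The one-point residual feedback idea can be sketched in isolation (a hypothetical, single-node, non-private simplification, not the paper's full algorithm; the test function and all parameters are illustrative): the gradient is estimated from the difference between the current function-value query and the stored value from the previous round, so only one query is needed per iteration.

```python
import numpy as np

rng = np.random.default_rng(1)

def oprf_estimate(f, x, prev_value, delta):
    """One-point residual-feedback gradient estimate (sketch):
    g = (d/delta) * (f(x + delta*u) - prev_value) * u,
    with u uniform on the unit sphere and prev_value the function
    value queried at the previous round. Only ONE new query per call."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    value = f(x + delta * u)            # the single query this round
    return (d / delta) * (value - prev_value) * u, value

# Sanity check on f(z) = ||z||^2: averaged over many rounds the
# estimate approximates the true gradient 2x (here at x = e1).
f = lambda z: float(z @ z)
x = np.array([1.0, 0.0, 0.0])
prev = f(x)                             # residual carried from the "previous" query
grads = []
for _ in range(20000):
    g, prev = oprf_estimate(f, x, prev, delta=0.05)
    grads.append(g)
print(np.round(np.mean(grads, axis=0), 2))   # close to [2, 0, 0]
```

Subtracting the previous query (the "residual") keeps the estimate's variance bounded even as delta shrinks, which is what lets one query per round match the two-point scheme.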
Funding: supported by the National Natural Science Foundation of China (Nos. 62103169, 51875380) and the China Postdoctoral Science Foundation (No. 2021M691313).
Abstract: This paper focuses on the online distributed optimization problem over multi-agent systems. In this problem, each agent can only access its own cost function and a convex set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents do not have access to the current cost functions until decisions are made. Different from most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and the real gradients of the cost functions are not available. To handle this problem, a random gradient-free online distributed algorithm involving a multi-point gradient estimator is proposed. Of particular interest is that under the proposed algorithm, each agent uses only the gradient estimates instead of the real gradient information to make decisions. The dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
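A multi-point gradient estimator can be illustrated with coordinate-wise central differences (a simplified single-agent sketch under hypothetical data; the cost, box constraint, and step size below are illustrative, and the paper's actual estimator and projection may differ):

```python
import numpy as np

def multi_point_grad(f, x, delta=1e-4):
    """Multi-point gradient estimator: queries f at 2*d points
    around x (coordinate-wise central differences) instead of
    using the true gradient."""
    d = x.size
    g = np.zeros(d)
    for j in range(d):
        e = np.zeros(d); e[j] = 1.0
        g[j] = (f(x + delta * e) - f(x - delta * e)) / (2.0 * delta)
    return g

# Projected gradient-free descent on a (trivially pseudoconvex)
# quadratic f(z) = ||z - c||^2 restricted to the box [0, 1]^2.
c = np.array([0.3, 0.8])
f = lambda z: float((z - c) @ (z - c))
x = np.array([1.0, 0.0])
for _ in range(200):
    x = np.clip(x - 0.1 * multi_point_grad(f, x), 0.0, 1.0)  # project onto the box
print(np.round(x, 3))  # converges to c = [0.3, 0.8]
```

Each update costs 2d function queries; the trade-off against the single-point scheme below is more queries per round in exchange for far lower estimation variance.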
Funding: supported by the National Natural Science Foundation of China (62103169, 51875380) and the China Postdoctoral Science Foundation (2021M691313).
Abstract: This paper studies an online distributed optimization problem over multi-agent systems. In this problem, the goal of the agents is to cooperatively minimize the sum of locally dynamic cost functions. Different from most existing works on distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and the real gradients of the objective functions are not available. To handle this problem, an online zeroth-order stochastic optimization algorithm involving a single-point gradient estimator is proposed. Under the algorithm, each agent only has access to the information associated with its own cost function and the estimate of the gradient, and exchanges local state information with its immediate neighbors via a time-varying digraph. The performance of the algorithm is measured by the expectation of the dynamic regret. Under mild assumptions on the graphs, we prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret grows sublinearly. Finally, a simulation example is given to illustrate the validity of our results.
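The single-point gradient estimator queries the cost only once per round. A hypothetical sketch of the estimator alone (no network or consensus step; the test function and parameters are illustrative), checking that it is correct on average despite its high per-sample noise:

```python
import numpy as np

rng = np.random.default_rng(2)

def single_point_grad(f, x, delta):
    """Single-point gradient estimate: g = (d/delta) * f(x + delta*u) * u.
    Uses ONE function query and no gradient; correct in expectation for
    the smoothed function, but much noisier than multi-point schemes."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / delta) * f(x + delta * u) * u

# Noisy but right on average: for f(z) = (z0 - 1)^2 the true
# gradient at x = 0 is [-2].
f = lambda z: float((z[0] - 1.0) ** 2)
x = np.zeros(1)
samples = [single_point_grad(f, x, delta=0.1) for _ in range(40000)]
print(np.round(np.mean(samples, axis=0), 1))  # near [-2]
```

The large variance of this estimator is why such algorithms pair it with decaying step sizes and smoothing radii, and why the analysis is carried out in expectation.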
Funding: supported by the National Key Basic Research Program of China (973 Program) (2013CB228204) and the National Natural Science Foundation of China (51137002, 51190102, 51407060).
Abstract: To improve the accuracy and efficiency of power system dynamic modeling, the distributed online modeling approach is a good option. In this approach, the power system is divided into sub-grids, and the dynamic models of the sub-grids are built independently within the distributed modeling system. The sub-grid models are subsequently merged, after which the dynamic model of the whole power system is finally constructed online. The merging of the networks plays an important role in the distributed online dynamic modeling of power systems. An efficient multi-area network-merging model that can rapidly match the boundary power flow is proposed in this paper. The introduction of the merging model eliminates the iterations of boundary matching during network merging, and the dynamic models of the sub-grids can be directly "plugged in" to each other. The results of calculations performed on a real power system demonstrate the accuracy of the integrated model under both steady and transient states.
Funding: supported in part by the National Natural Science Foundation of China (Nos. 62022042, 62273181 and 62073166), in part by the Fundamental Research Funds for the Central Universities (No. 30919011105), and in part by the Open Project of the Key Laboratory of Advanced Perception and Intelligent Control of High-end Equipment (No. GDSC202017).
Abstract: This paper considers the problem of distributed online regularized optimization over a network that consists of multiple interacting nodes. Each node is endowed with a sequence of time-varying loss functions and a regularization function that is fixed over time. A distributed forward-backward splitting algorithm is proposed for solving this problem, and both fixed and adaptive learning rates are adopted. For both cases, we show that the regret upper bounds scale as O(√T), where T is the time horizon. In particular, these rates match their centralized counterparts. Finally, we show the effectiveness of the proposed algorithms on an online distributed regularized linear regression problem.
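A forward-backward splitting step alternates a gradient ("forward") step on the smooth loss with a proximal ("backward") step on the regularizer. A hypothetical centralized sketch with an l1 regularizer (the distributed variant would add a consensus-averaging step; the problem data below are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1 (the 'backward' step)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward_step(x, grad, eta, lam):
    """One forward-backward splitting iteration:
    forward  - gradient step on the smooth loss;
    backward - prox of the (fixed-over-time) regularizer."""
    return soft_threshold(x - eta * grad, eta * lam)

# Toy l1-regularized least squares: min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, 0.0, -2.0, 0.0])
b = A @ x_true
x, eta, lam = np.zeros(5), 0.005, 0.1
for _ in range(3000):
    x = forward_backward_step(x, A.T @ (A @ x - b), eta, lam)
print(np.round(x, 2))  # sparse estimate close to x_true
```

Splitting matters because the l1 term is non-smooth: its prox has the closed form above, so each iteration stays cheap while still enforcing sparsity exactly.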
Funding: supported by the National Natural Science Foundation of China (No. U19B2024) and the National Key Research and Development Program (No. 2018YFE0207600).
Abstract: Decentralized Online Learning (DOL) extends online learning to the domain of distributed networks. However, the limitations of local data in decentralized settings lead to a decrease in the accuracy of decisions or models compared to centralized methods. Considering the increasing requirement for high-precision models or decisions built from distributed data resources in a network, ensemble methods are applied to obtain a superior model or decision while transferring only gradients or models. A new boosting method, namely Boosting for Distributed Online Convex Optimization (BD-OCO), is designed to realize the application of boosting in distributed scenarios. BD-OCO achieves the regret upper bound O(((M+N)/(MN))T), where M measures the size of the distributed network and N is the number of Weak Learners (WLs) in each node. The core idea of BD-OCO is to apply the local models to train a strong global one. BD-OCO is evaluated on eight different real-world datasets. Numerical results show that BD-OCO achieves excellent performance in accuracy and convergence, and is robust to the size of the distributed network.
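The overall shape of the scheme can be caricatured as follows. This is a heavily simplified, hypothetical sketch and NOT the BD-OCO algorithm itself: weak learners are reduced to plain gradient steps with different step sizes, losses are fixed quadratics, and nodes gossip-average models over a complete graph, so only models (never data) are exchanged.

```python
import numpy as np

rng = np.random.default_rng(4)

M, N, DIM, T = 4, 3, 2, 300
W = np.full((M, M), 1.0 / M)             # doubly stochastic mixing (complete graph)
etas = [0.5, 0.1, 0.02]                  # one step size per weak learner
weak = np.zeros((M, N, DIM))             # weak-learner states per node

# Node-local losses f_i(x) = ||x - t_i||^2; the network-wide optimum
# of the sum is the mean of the targets.
targets = rng.standard_normal((M, DIM))
for t in range(T):
    for i in range(M):
        for k in range(N):
            g = 2.0 * (weak[i, k] - targets[i])       # local gradient only
            weak[i, k] -= etas[k] / np.sqrt(t + 1) * g
    node_models = weak.mean(axis=1)                   # combine weak learners
    mixed = W @ node_models                           # exchange models, not data
    weak = np.repeat(mixed[:, None, :], N, axis=1)    # restart WLs from consensus

print(np.round(node_models[0], 2))  # near the average of all node targets
```

Even this crude combination steers every node toward the network-wide optimum; the actual BD-OCO analysis makes the combination weights and WL updates precise enough to yield the stated regret bound.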