To address the lack of consideration of correlations among uncertain parameters in existing robust dispatch methods for distribution networks, a robust dispatch method for AC/DC hybrid distribution networks based on a data-driven polyhedral set is proposed. First, a conventional polyhedral set for distributed photovoltaic (PV) output is constructed; a correlation envelope is formed from historical data, and by bending the boundaries of the polyhedral set, a correlation polyhedral set model is established. Then, on this basis, a data-driven polyhedral set model is built to overcome the poor robustness and high conservatism of the correlation polyhedral set. Furthermore, a robust dispatch model for AC/DC hybrid distribution networks based on the data-driven polyhedral set is established and solved with the column-and-constraint generation (CCG) algorithm. Finally, simulation results on a modified IEEE 33-bus system show that the proposed method reduces the conservatism of the optimization results and improves their robustness, demonstrating its effectiveness.
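The core idea of a budget-constrained polyhedral uncertainty set built from historical PV data can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function names, the toy data, and the choice of per-period min/max envelopes with a normalized-deviation budget are all assumptions for illustration.

```python
import numpy as np

def build_polyhedral_set(samples, budget):
    """Illustrative data-driven polyhedral set: per-period envelope bounds
    from historical samples, plus an uncertainty budget on total deviation.

    samples: (n_scenarios, n_periods) historical PV output.
    budget:  maximum total normalized deviation from the envelope midpoint.
    """
    lo = samples.min(axis=0)           # lower envelope per period
    hi = samples.max(axis=0)           # upper envelope per period
    mid = 0.5 * (lo + hi)
    half = 0.5 * (hi - lo) + 1e-12     # half-width, guarded against zero

    def contains(profile):
        # Normalized deviation in [0, 1] per period; membership requires
        # staying inside the envelope AND within the deviation budget.
        z = np.abs(profile - mid) / half
        return bool(np.all(profile >= lo - 1e-9)
                    and np.all(profile <= hi + 1e-9)
                    and z.sum() <= budget)

    return lo, hi, contains

rng = np.random.default_rng(0)
hist = rng.uniform(0.2, 1.0, size=(200, 4))   # toy historical PV profiles
lo, hi, contains = build_polyhedral_set(hist, budget=2.0)
print(contains(hist.mean(axis=0)))            # → True (mean profile is inside)
```

In the paper's setting, the set would additionally be reshaped by the data-driven correlation envelope, and the robust dispatch problem over it would be solved with CCG; this sketch only shows the membership logic of a budgeted polyhedral set.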
As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of edge devices. We first derive a closed form of the communication compression in terms of the sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem with the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
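The two compression operations named above, gradient sparsification and quantization, can be sketched as follows. This is a generic illustration of top-k sparsification followed by unbiased stochastic uniform quantization; the function names, bit width, and toy gradient are assumptions, not the paper's specific scheme or its closed-form compression factors.

```python
import numpy as np

def sparsify_topk(grad, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

def quantize(grad, bits, rng):
    """Unbiased stochastic uniform quantization to 2^bits - 1 levels.

    Each magnitude is mapped to [0, levels] and rounded up with
    probability equal to its fractional part, so E[quantized] = grad.
    """
    scale = np.max(np.abs(grad)) + 1e-12
    levels = 2 ** bits - 1
    x = np.abs(grad) / scale * levels
    lower = np.floor(x)
    round_up = rng.random(grad.shape) < (x - lower)  # stochastic rounding
    return np.sign(grad) * (lower + round_up) / levels * scale

g = np.array([0.9, -0.05, 0.3, -0.6, 0.02])
gs = sparsify_topk(g, k=3)                       # 3 coordinates survive
gq = quantize(gs, bits=4, rng=np.random.default_rng(0))
print(np.count_nonzero(gs))                      # → 3
```

A device would upload only the indices and quantized values of the surviving coordinates, so the payload per round scales with k and the bit width rather than the full model dimension; allocating the bit widths across devices under a channel-capacity constraint is the resource allocation problem the abstract refers to.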
Funding: supported in part by the National Key Research and Development Program of China under Grant 2020YFB1807700, and in part by the National Science Foundation of China under Grant U200120122.