Journal Articles
5 articles found
1. Analysis on the Education Mechanism of the "Learning Power" Platform from the Perspective of Media Convergence
Author: Zhi Li. Journal of Contemporary Educational Research, 2021, No. 10, pp. 34-39 (6 pages).
As a media learning platform, the "Learning Power" platform integrates the advantages of the internet, big data, and new media. Through the supply of massive explicit and implicit learning resources, as well as the construction of the interactive space of "Learning Power," it fully embodies the education mechanism of moral education. Specifically, this is reflected in the distinctive political position and the education goal mechanism of "moral education," the education operation mechanism of "explicit and implicit unity," the learning mechanism of "autonomy and cooperation integration," and the feedback incentive mechanism of "gamification." The organic combination and interactive operation of these four mechanisms form a collaborative education mechanism system of goal orientation, education operation, learning process, and feedback incentive.
Keywords: media integration; "Learning Power" platform; education mechanism
2. Enhancing Iterative Learning Control With Fractional Power Update Law (cited by 1)
Authors: Zihan Li, Dong Shen, Xinghuo Yu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 5, pp. 1137-1149 (13 pages).
The P-type update law has been the mainstream technique used in iterative learning control (ILC) systems, which resembles linear feedback control with asymptotical convergence. In recent years, finite-time control strategies such as terminal sliding mode control have been shown to be effective in ramping up convergence speed by introducing fractional power with feedback. In this paper, we show that such a mechanism can equally ramp up the learning speed in ILC systems. We first propose a fractional power update rule for ILC of single-input single-output linear systems. A nonlinear error dynamics is constructed along the iteration axis to illustrate the evolutionary converging process. Using the nonlinear mapping approach, fast convergence towards the limit cycles of tracking errors inherently existing in ILC systems is proven. The limit cycles are shown to be tunable to determine the steady states. Numerical simulations are provided to verify the theoretical results.
Keywords: asymptotic convergence; convergence rate; finite-iteration tracking; fractional power learning rule; limit cycles
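To make the contrast with the P-type law concrete, here is a minimal numerical sketch of a fractional power update on a toy SISO plant. The plant parameters, the gains alpha and beta, and the sinusoidal reference are illustrative assumptions, not the paper's settings; at alpha = 1 the update reduces to the classical P-type law.

```python
import numpy as np

# Toy fractional-power ILC on a discrete SISO plant
#   x[t+1] = a*x[t] + b*u[t],  y[t] = c*x[t]
# with the iteration-axis update
#   u_{k+1}(t) = u_k(t) + beta * |e_k(t+1)|^alpha * sign(e_k(t+1)).
# All constants below are assumed for illustration only.

a, b, c = 0.9, 1.0, 1.0      # plant parameters (assumed)
alpha, beta = 0.6, 0.5       # fractional power and learning gain (assumed)
T, K = 50, 40                # trial length and number of iterations

time = np.arange(T + 1)
y_ref = np.sin(2 * np.pi * time / T)   # reference trajectory (assumed)
u = np.zeros(T)                        # input for iteration 0

for k in range(K):
    # run one trial of the plant
    x = 0.0
    y = np.zeros(T + 1)
    for i in range(T):
        y[i] = c * x
        x = a * x + b * u[i]
    y[T] = c * x
    e = y_ref - y                      # tracking error of this iteration
    # fractional power update along the iteration axis (relative degree one)
    u = u + beta * np.abs(e[1:]) ** alpha * np.sign(e[1:])
    print(f"iter {k:2d}  max |e| = {np.max(np.abs(e[1:])):.4f}")
```

Because the fractional power amplifies the effective gain for small errors, the error typically drops quickly and then settles into a small residual band rather than decaying asymptotically to zero, consistent with the tunable limit cycles the abstract describes.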
3. The reservoir learning power across quantum many-body localization transition
Authors: Wei Xia, Jie Zou, Xingze Qiu, Xiaopeng Li. Frontiers of Physics (SCIE, CSCD), 2022, No. 3, pp. 91-99 (9 pages).
Harnessing the quantum computation power of present noisy intermediate-scale quantum (NISQ) devices has received tremendous interest in the last few years. Here we study the learning power of a one-dimensional long-range randomly-coupled quantum spin chain within the framework of reservoir computing. In time sequence learning tasks, we find the system in the quantum many-body localized (MBL) phase holds long-term memory, which can be attributed to the emergent local integrals of motion. On the other hand, the MBL phase does not provide sufficient nonlinearity in learning highly nonlinear time sequences, which we show in a parity check task. This is reversed in the quantum ergodic phase, which provides sufficient nonlinearity but compromises memory capacity. In a complex learning task of Mackey–Glass prediction, which requires both sufficient memory capacity and nonlinearity, we find optimal learning performance near the MBL-to-ergodic transition. This leads to a guiding principle of quantum reservoir engineering at the edge of quantum ergodicity, reaching optimal learning power for generic complex reservoir learning tasks. Our theoretical finding can be tested with near-term NISQ devices.
Keywords: quantum reservoir computing; many-body localization; quantum ergodic; edge of quantum ergodicity; optimal learning power
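The reservoir-computing pipeline the abstract relies on (a fixed dynamical system plus a trained linear readout) can be sketched classically. In the sketch below, a random echo-state network stands in for the quantum spin chain purely to show where memory (spectral radius below one) and nonlinearity (the tanh) enter, and that only the readout is trained; the sizes and the parity-style task are assumptions, not the paper's setup.

```python
import numpy as np

# Classical stand-in for the reservoir-computing pipeline: drive a fixed
# random recurrent "reservoir" with an input sequence, then train only a
# linear readout by ridge regression. In the paper the reservoir is a
# long-range random quantum spin chain; this echo-state network is only
# an illustration of the same train-the-readout structure.

rng = np.random.default_rng(0)
N, T = 100, 2000                                  # reservoir size, sequence length (assumed)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 -> fading memory
w_in = rng.normal(0.0, 1.0, N)

u = rng.integers(0, 2, T).astype(float)           # random binary input stream
# parity-check-style target: XOR of current bit with the bit two steps back
target = np.array([u[t] != u[t - 2] for t in range(2, T)], float)

# collect reservoir states
x = np.zeros(N)
states = []
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])              # nonlinearity lives in the reservoir
    states.append(x.copy())
X = np.array(states)[2:]                          # align states with the target

# ridge-regression readout: the only trained component
lam = 1e-4
w_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ target)
pred = (X @ w_out) > 0.5
print("parity-check accuracy:", (pred == target.astype(bool)).mean())
```

The paper's point maps onto the two knobs visible here: too much memory with too little nonlinearity (the MBL-like regime) fails the parity task, while strong scrambling (the ergodic-like regime) erases the memory the readout needs, so the best trade-off sits between the two.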
4. Managing power grids through topology actions: A comparative study between advanced rule-based and reinforcement learning agents
Authors: Malte Lehna, Jan Viebahn, Antoine Marot, Sven Tomforde, Christoph Scholz. Energy and AI, 2023, No. 4, pp. 283-293 (11 pages).
The operation of electricity grids has become increasingly complex due to the current upheaval and the increase in renewable energy production. As a consequence, active grid management is reaching its limits with conventional approaches. In the context of the Learning to Run a Power Network (L2RPN) challenge, it has been shown that reinforcement learning (RL) is an efficient and reliable approach with considerable potential for automatic grid operation. In this article, we analyse the submitted agent from Binbinchen and provide novel strategies to improve the agent, both for the RL and the rule-based approach. The main improvement is an N-1 strategy, where we consider topology actions that keep the grid stable even if one line is disconnected. Moreover, we propose a topology reversion to the original grid, which proved to be beneficial. The improvements are tested against reference approaches on the challenge test sets and are able to increase the performance of the rule-based agent by 27%. In a direct comparison between the rule-based and RL agents, we find similar performance; however, the RL agent has a clear computational advantage. We also analyse the behaviour in an exemplary case in more detail to provide additional insights. Here, we observe that through the N-1 strategy, the actions of both the rule-based and the RL agent become more diversified.
Keywords: deep reinforcement learning; electricity grids; Learning to Run a Power Network; topology control; proximal policy optimisation
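The N-1 strategy reads naturally as a screening rule: a candidate topology action is accepted only if the grid would stay within line limits even after any single line outage. Below is a hedged sketch of that rule; the simulate(action, outage) interface is a hypothetical stand-in for a power-flow simulator (in the L2RPN setting, something like Grid2Op's observation.simulate), not the authors' code.

```python
from typing import Callable, Iterable, Optional, Sequence

# N-1 screening: an action passes only if all per-line loading ratios rho
# stay <= max_rho in the base case and under every single-line contingency.
# `simulate(action, outage)` is an assumed interface returning the loading
# ratio of each line after applying `action` with line `outage` removed
# (outage=None means the intact grid).

def n1_safe(action: object,
            lines: Sequence[int],
            simulate: Callable[[object, Optional[int]], Sequence[float]],
            max_rho: float = 1.0) -> bool:
    """True if `action` keeps all loadings within limits for the base case
    and every single-line outage."""
    for outage in [None, *lines]:        # base case, then each N-1 case
        rho = simulate(action, outage)   # per-line loading after the outage
        if max(rho) > max_rho:
            return False
    return True

def choose_action(candidates: Iterable[object],
                  lines: Sequence[int],
                  simulate: Callable[[object, Optional[int]], Sequence[float]]):
    """Return the first candidate passing the N-1 screen; None = do nothing."""
    for action in candidates:
        if n1_safe(action, lines, simulate):
            return action
    return None
```

A rule-based agent can run this screen over a fixed candidate list, while an RL agent can use it as a safety filter on its proposed actions; the article's observation that the screen diversifies the actions of both agent types fits this shared-filter view.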
5. Hierarchical Task Planning for Power Line Flow Regulation
Authors: Chenxi Wang, Youtian Du, Yanhao Huang, Yuanlin Chang, Zihao Guo. CSEE Journal of Power and Energy Systems (SCIE, EI, CSCD), 2024, No. 1, pp. 29-40 (12 pages).
The complexity and uncertainty in power systems cause great challenges to controlling power grids. As a popular data-driven technique, deep reinforcement learning (DRL) attracts attention in the control of power grids. However, DRL has some inherent drawbacks in terms of data efficiency and explainability. This paper presents a novel hierarchical task planning (HTP) approach, bridging planning and DRL, to the task of power line flow regulation. First, we introduce a three-level task hierarchy to model the task and model the sequence of task units on each level as task-planning Markov decision processes (TP-MDPs). Second, we model the task as a sequential decision-making problem and introduce a higher planner and a lower planner in HTP to handle different levels of task units. In addition, we introduce a two-layer knowledge graph that can update dynamically during the planning procedure to assist HTP. Experimental results conducted on the IEEE 118-bus and IEEE 300-bus systems demonstrate that our HTP approach outperforms proximal policy optimization (PPO), a state-of-the-art DRL approach, improving efficiency by 26.16% and 6.86% on the two systems, respectively.
Keywords: knowledge graph; power line flow regulation; reinforcement learning; task planning
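The higher/lower planner split can be illustrated with a toy two-level loop: the higher planner sequences coarse task units (which line's flow to regulate next) and the lower planner picks a concrete action within each unit. All names, severity scores, and the stand-in value estimate below are assumptions for illustration; the paper's TP-MDP formulation and knowledge-graph guidance are not reproduced.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

# Toy two-level planning loop: the higher planner orders task units by an
# assumed priority (e.g., line overload severity); the lower planner picks
# the best concrete action inside each unit via an assumed value estimate.

@dataclass
class TaskUnit:
    name: str
    candidate_actions: List[str]

def higher_planner(units: List[TaskUnit],
                   priority: Callable[[TaskUnit], float]) -> List[TaskUnit]:
    """Order task units by an (assumed) priority score, highest first."""
    return sorted(units, key=priority, reverse=True)

def lower_planner(unit: TaskUnit, value: Callable[[str], float]) -> str:
    """Pick the candidate action with the best (assumed) value estimate."""
    return max(unit.candidate_actions, key=value)

# Hypothetical usage: regulate two overloaded lines in severity order.
units = [TaskUnit("line_23", ["redispatch_g1", "switch_bus_4"]),
         TaskUnit("line_7", ["redispatch_g2", "switch_bus_1"])]
severity = {"line_23": 1.3, "line_7": 1.1}    # assumed loading ratios
random.seed(0)
for unit in higher_planner(units, lambda u: severity[u.name]):
    act = lower_planner(unit, lambda a: random.random())  # stand-in value function
    print(f"{unit.name}: execute {act}")
```

Separating "what to regulate next" from "how to regulate it" is what gives the approach its data-efficiency and explainability edge over a flat DRL policy such as PPO: each level faces a much smaller decision space.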