
Reinforcement learning and A* search for the unit commitment problem
Patrick de Mars*, Aidan O'Sullivan

Abstract Previous research has combined model-free reinforcement learning with model-based tree search methods to solve the unit commitment problem with stochastic demand and renewables generation. This approach was limited to shallow search depths and suffered from significant variability in run time across problem instances with varying complexity. To mitigate these issues, we extend this methodology to more advanced search algorithms based on A* search. First, we develop a problem-specific heuristic based on priority list unit commitment methods and apply this in Guided A* search, reducing run time by up to 94% with negligible impact on operating costs. In addition, we address the run time variability issue by employing a novel anytime algorithm, Guided IDA*, replacing the fixed search depth parameter with a time budget constraint. We show that Guided IDA* mitigates the run time variability of previous guided tree search algorithms and enables further operating cost reductions of up to 1%.
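The abstract describes two algorithmic ideas: a heuristic derived from priority list unit commitment methods used to guide A* search, and an anytime Guided IDA* variant that trades a fixed search depth for a wall-clock time budget. The sketch below is a minimal, generic illustration of such a time-budgeted, depth-limited search loop, assuming a plain graph-search interface; the function names (`depth_limited_astar`, `anytime_guided_search`), the `expand`/`heuristic` callbacks, and the keep-the-deepest-completed-plan rule are illustrative assumptions, not the authors' implementation, which operates on unit commitment states with a policy-guided expansion and a priority-list cost-to-go estimate.

```python
import heapq
import time
from itertools import count
from typing import Any, Callable, Iterable, List, Tuple


def depth_limited_astar(root: Any,
                        expand: Callable[[Any], Iterable[Tuple[Any, float]]],
                        heuristic: Callable[[Any], float],
                        max_depth: int,
                        deadline: float) -> Tuple[List[Any], float]:
    """A* search truncated at `max_depth` plies.

    `expand(node)` yields (child, step_cost) pairs; `heuristic(node)` estimates
    cost-to-go. Returns (path, f) for the first leaf popped, which is the
    cheapest by f = g + h when the heuristic is consistent, or ([], inf) if
    the deadline passes first.
    """
    tie = count()  # tie-breaker so heapq never compares node objects directly
    frontier = [(heuristic(root), next(tie), 0.0, [root])]
    while frontier:
        if time.monotonic() > deadline:
            break
        f, _, g, path = heapq.heappop(frontier)
        depth = len(path) - 1
        children = list(expand(path[-1])) if depth < max_depth else []
        if not children:                     # depth limit reached or terminal state
            return path, f
        for child, step_cost in children:
            g_child = g + step_cost
            heapq.heappush(frontier,
                           (g_child + heuristic(child), next(tie), g_child, path + [child]))
    return [], float("inf")


def anytime_guided_search(root: Any,
                          expand: Callable[[Any], Iterable[Tuple[Any, float]]],
                          heuristic: Callable[[Any], float],
                          time_budget_s: float) -> Tuple[List[Any], float]:
    """Anytime loop: rerun the depth-limited search with increasing depth
    until the wall-clock budget is spent, keeping the deepest completed plan."""
    deadline = time.monotonic() + time_budget_s
    best_path, best_f = [], float("inf")
    depth = 1
    while time.monotonic() < deadline:
        path, f = depth_limited_astar(root, expand, heuristic, depth, deadline)
        if path:                             # search at this depth finished in time
            best_path, best_f = path, f      # prefer the deepest completed lookahead
        depth += 1
    return best_path, best_f
```

In this framing, a priority-list cost estimate would plug in as the `heuristic` callback and a learned policy would shape `expand`; the budget passed to `anytime_guided_search` plays the role of the time constraint that replaces the fixed search depth. Keeping the deepest completed plan is one simple anytime rule chosen here for clarity, not a claim about the paper's exact selection criterion.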
Affiliation UCL Energy Institute
Source Energy and AI, 2022, No. 3, pp. 172-181 (10 pages)
Funding Supported by an Engineering and Physical Sciences Research Council research studentship (grant number: EP/R512400/1).
