Journal articles
3 articles found
1. Combined peak reduction and self-consumption using proximal policy optimisation
Authors: Thijs Peirelinck, Chris Hermans, Fred Spiessens, Geert Deconinck. Energy and AI (EI-indexed), 2024, No. 2, pp. 24-31 (8 pages).
Abstract: Residential demand response programs aim to activate demand flexibility at the household level. In recent years, reinforcement learning (RL) has gained significant attention for this type of application. A major challenge of RL algorithms is data efficiency. New RL algorithms, such as proximal policy optimisation (PPO), have tried to increase data efficiency. Additionally, combining RL with transfer learning has been proposed in an effort to mitigate this challenge. In this work, we further improve upon state-of-the-art transfer learning performance by incorporating demand response domain knowledge into the learning pipeline. We evaluate our approach on a demand response use case where peak shaving and self-consumption are incentivised by means of a capacity tariff. We show that our adapted version of PPO, combined with transfer learning, reduces cost by 14.51% compared to a regular hysteresis controller and by 6.68% compared to traditional PPO.
Keywords: Demand response; Reinforcement learning; Electric water heater; Peak shaving; Transfer learning
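The abstract describes warm-starting PPO with transfer learning for a water-heater control task under a capacity tariff. As a rough illustration of that workflow (not the authors' code), the sketch below uses stable-baselines3 and assumes a hypothetical, pre-registered Gymnasium environment `WaterHeaterEnv` whose reward penalises the metered peak and rewards self-consumption:

```python
# Minimal sketch, assuming hypothetical "WaterHeaterEnv-*" environments are
# registered; reward design and training budgets are illustrative only.
import gymnasium as gym
from stable_baselines3 import PPO

source_env = gym.make("WaterHeaterEnv-source-v0")  # data-rich source household
target_env = gym.make("WaterHeaterEnv-target-v0")  # data-poor target household

# 1) Train a source policy (e.g. on historical or simulated data).
source_model = PPO("MlpPolicy", source_env, verbose=0)
source_model.learn(total_timesteps=100_000)
source_model.save("ppo_source")

# 2) Transfer: reload the source weights and fine-tune on the target household,
#    which needs far fewer target-domain samples than learning from scratch.
target_model = PPO.load("ppo_source", env=target_env)
target_model.learn(total_timesteps=20_000)
```

In stable-baselines3, `PPO.load` restores both the policy and value networks, so fine-tuning starts from the full source agent rather than from a random initialisation.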
2. Direct Load Control of Thermostatically Controlled Loads Based on Sparse Observations Using Deep Reinforcement Learning (Cited: 2)
Authors: Frederik Ruelens, Bert J. Claessens, Peter Vrancx, Fred Spiessens, Geert Deconinck. CSEE Journal of Power and Energy Systems (SCIE, CSCD), 2019, No. 4, pp. 423-432 (10 pages).
Abstract: This paper considers a demand response agent that must find a near-optimal sequence of decisions based on sparse observations of its environment. Extracting a relevant set of features from these observations is a challenging task and may require substantial domain knowledge. One way to tackle this problem is to store sequences of past observations and actions in the state vector, making it high dimensional, and apply techniques from deep learning. This paper investigates the capabilities of different deep learning techniques, such as convolutional neural networks and recurrent neural networks, to extract relevant features for finding near-optimal policies for a residential heating system and an electric water heater that are hindered by sparse observations. Our simulation results indicate that in this specific scenario, feeding sequences of time-series to a Long Short-Term Memory (LSTM) network, which is a specific type of recurrent neural network, achieved higher performance than stacking these time-series in the input of a convolutional neural network or deep neural network.
Keywords: Convolutional networks; Deep reinforcement learning; Long short-term memory; Residential demand response
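To make the LSTM-versus-CNN comparison in the abstract concrete, here is a minimal, illustrative PyTorch module (layer sizes and names are assumptions, not taken from the paper) that summarises a sequence of past observations and actions with an LSTM and maps the final hidden state to Q-values:

```python
# Illustrative sketch of an LSTM feature extractor for a history of sparse
# (observation, action) pairs; dimensions below are assumptions.
import torch
import torch.nn as nn

class LSTMQNetwork(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, n_actions: int,
                 hidden_size: int = 64):
        super().__init__()
        # Each timestep carries one sparse observation plus the action taken.
        self.lstm = nn.LSTM(input_size=obs_dim + action_dim,
                            hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_actions)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, obs_dim + action_dim)
        _, (h_n, _) = self.lstm(seq)   # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])      # one Q-value per discrete action

# Example: batch of 8 histories, 96 quarter-hour steps, 3 obs dims + 1 action.
q_net = LSTMQNetwork(obs_dim=3, action_dim=1, n_actions=2)
q_values = q_net(torch.randn(8, 96, 4))  # -> shape (8, 2)
```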
3. Transfer learning in demand response: A review of algorithms for data-efficient modelling and control (Cited: 1)
Authors: Thijs Peirelinck, Hussain Kazmi, Brida V. Mbuwir, Chris Hermans, Fred Spiessens, Johan Suykens, Geert Deconinck. Energy and AI, 2022, No. 1, pp. 183-196 (14 pages).
Abstract: A number of decarbonization scenarios for the energy sector are built on simultaneous electrification of energy demand and decarbonization of electricity generation through renewable energy sources. However, increased electricity demand due to heat and transport electrification, together with the variability associated with renewables, has the potential to disrupt stable electric grid operation. To address these issues using demand response, researchers and practitioners have increasingly turned towards automated decision support tools which utilize machine learning and optimization algorithms. However, when applied naively, these algorithms suffer from high sample complexity, which means that it is often impractical to fit sufficiently complex models because of a lack of observed data. Recent advances have shown that techniques such as transfer learning can address this problem and improve their performance considerably, both in supervised and reinforcement learning contexts. Such formulations allow models to leverage existing domain knowledge and human expertise in addition to sparse observational data. More formally, transfer learning embodies all techniques where one aims to increase (learning) performance in a target domain or task by using knowledge gained in a source domain or task. This paper provides a detailed overview of state-of-the-art techniques for applying transfer learning in demand response, showing improvements that can exceed 30% in a variety of tasks. We observe that most research to date has focused on transfer learning in the context of electricity demand prediction, although reinforcement learning based controllers have also seen increasing attention. However, a number of limitations remain in these studies, including a lack of benchmarks, systematic performance improvement tracking, and consensus on techniques that can help avoid negative transfer.
Keywords: Demand response; Transfer learning; Reinforcement learning; Review; Smart grid
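As a toy illustration of the source-to-target knowledge transfer defined in the abstract, the following PyTorch sketch fine-tunes only the output layer of a pretrained demand forecaster on a data-poor target building; the architecture, horizons, and file names are assumptions made for illustration, not the review's prescription:

```python
# Hedged sketch of the fine-tuning flavour of transfer learning: reuse a
# forecaster trained on a data-rich source building, adapt only its last layer.
import torch
import torch.nn as nn

source_model = nn.Sequential(        # pretrained demand forecaster (assumed)
    nn.Linear(48, 128), nn.ReLU(),   # e.g. last 48 half-hourly loads in
    nn.Linear(128, 48),              # e.g. next 48 half-hourly loads out
)
# Assume pretrained weights are available, e.g.:
# source_model.load_state_dict(torch.load("source_building.pt"))

# Freeze the shared representation; re-fit only the output layer on the
# target building's few weeks of data.
for p in source_model[0].parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(source_model[2].parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def fine_tune_step(x_target: torch.Tensor, y_target: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = loss_fn(source_model(x_target), y_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```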