Energy optimization of wind turbines via a neural control policy based on reinforcement learning Markov chain Monte Carlo algorithm

Tavakol Aghaei, Vahid and Ağababaoğlu, Arda and Bawo, Biram and Naseradinmousavi, Peiman and Yıldırım, Sinan and Yeşilyurt, Serhat and Onat, Ahmet (2023) Energy optimization of wind turbines via a neural control policy based on reinforcement learning Markov chain Monte Carlo algorithm. Applied Energy, 341 . ISSN 0306-2619

Full text not available from this repository.


This study focuses on the numerical analysis and optimal control of vertical-axis wind turbines (VAWTs) using Bayesian reinforcement learning (RL). We specifically address small-scale wind turbines, which are well suited to local and compact production of electrical energy, such as in urban and rural infrastructure installations. The existing literature concentrates on large-scale wind turbines, which run in unobstructed, mostly constant wind profiles; urban installations, however, generally must cope with rapidly changing wind patterns. To bridge this gap, we formulate and implement an RL strategy based on the Markov chain Monte Carlo (MCMC) algorithm to optimize the long-term energy output of a wind turbine. Our MCMC-based RL algorithm is model-free and gradient-free: the designer does not need to know the precise dynamics of the plant or its uncertainties. Our method addresses these uncertainties through a multiplicative reward structure, in contrast with the additive reward used in conventional RL approaches. We show numerically that the method overcomes shortcomings typically associated with conventional solutions, including, but not limited to, component aging, modeling errors, and inaccuracies in the estimation of wind-speed patterns. Our results show that the proposed method is especially successful in capturing power from wind transients: by modulating the generator load, and hence the rotor torque load, the rotor tip speed quickly reaches the optimum value for the anticipated wind speed. The ratio of rotor tip speed to wind speed is known to be critical in wind power applications. The wind-to-load energy efficiency of the proposed method is shown to be superior to that of two other methods: the classical maximum power point tracking method and a generator controlled by the deep deterministic policy gradient (DDPG) method.
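The abstract contrasts a multiplicative reward structure with the additive return of conventional RL. The paper's exact reward definition is not given here, so the sketch below is purely illustrative, using hypothetical per-step efficiency rewards in (0, 1] to show why the two structures score trajectories differently: under a product, a single near-zero step collapses the whole trajectory's score, whereas under a sum it is averaged out.

```python
def additive_return(rewards):
    """Conventional RL return: per-step rewards are summed, so one
    poor step is diluted by the other steps."""
    total = 0.0
    for r in rewards:
        total += r
    return total


def multiplicative_return(rewards):
    """Multiplicative structure (as described in the abstract): the
    return is the product of per-step rewards, so any near-zero step
    drives the whole trajectory's score toward zero."""
    total = 1.0
    for r in rewards:
        total *= r
    return total


# Hypothetical per-step efficiencies for two trajectories: one steady,
# one with a single bad step.
steady = [0.9, 0.9, 0.9, 0.9]
one_bad = [0.9, 0.9, 0.1, 0.9]

print(additive_return(steady), additive_return(one_bad))              # 3.6 vs 2.8
print(multiplicative_return(steady), multiplicative_return(one_bad))  # 0.6561 vs 0.0729
```

Under the additive return the bad trajectory loses only about 22% of its score, while under the multiplicative return it loses about 89%, which is one way a product-form objective can penalize transient failures more sharply.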
Item Type: Article
Uncontrolled Keywords: Bayesian learning; Deep deterministic policy gradient (DDPG); Markov chain Monte Carlo (MCMC); Neural control; Reinforcement learning (RL); Wind turbine energy maximization
Divisions: Faculty of Engineering and Natural Sciences > Academic programs > Mechatronics
Faculty of Engineering and Natural Sciences
Depositing User: Sinan Yıldırım
Date Deposited: 05 Aug 2023 15:30
Last Modified: 05 Aug 2023 15:30
