A Markov chain Monte Carlo algorithm for Bayesian policy search


Tavakol Aghaei, Vahid and Onat, Ahmet and Yıldırım, Sinan (2018) A Markov chain Monte Carlo algorithm for Bayesian policy search. Systems Science and Control Engineering, 6 (1). pp. 438-455. ISSN 2164-2583

This is the latest version of this item.


Official URL: https://doi.org/10.1080/21642583.2018.1528483


Policy search algorithms have facilitated the application of Reinforcement Learning (RL) to dynamic systems, such as the control of robots. Many policy search algorithms are based on the policy gradient and thus may suffer from slow convergence or convergence to local optima. In this paper, we take a Bayesian approach to policy search under the RL paradigm, for the problem of controlling a discrete-time Markov decision process with continuous state and action spaces and a multiplicative reward structure. For this purpose, we assume a prior over the policy parameters and aim for the 'posterior' distribution, where the 'likelihood' is the expected reward. We propose a Markov chain Monte Carlo algorithm as a method of generating samples of the policy parameters from this posterior. The proposed algorithm is compared with several well-known policy gradient-based RL methods and exhibits better time response and convergence rate when applied to a nonlinear model of the Cart-Pole benchmark.
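The idea sketched in the abstract, treating the expected reward as a 'likelihood' and sampling policy parameters from the resulting posterior, can be illustrated with a minimal Metropolis-Hastings sampler. This is not the paper's algorithm (which uses particle filtering and a risk-sensitive reward); it is a hedged toy sketch in which `rollout_reward`, the Gaussian prior, and the one-dimensional parameterization are all illustrative assumptions.

```python
import random
import math

random.seed(0)

def rollout_reward(theta):
    # Hypothetical toy "environment": a noisy reward peaked at theta = 2.
    # In the paper's setting this would be the return of one MDP rollout.
    return math.exp(-(theta - 2.0) ** 2) + 0.1 * random.gauss(0.0, 1.0)

def expected_reward(theta, n_rollouts=50):
    # Monte Carlo estimate of the expected reward, playing the role of
    # the 'likelihood' in the posterior over policy parameters.
    est = sum(rollout_reward(theta) for _ in range(n_rollouts)) / n_rollouts
    return max(est, 1e-12)  # guard so the log below is defined

def log_prior(theta):
    # Assumed standard normal prior over the scalar policy parameter.
    return -0.5 * theta ** 2

def mcmc_policy_search(n_iters=2000, step=0.5):
    # Random-walk Metropolis-Hastings targeting
    # posterior(theta) proportional to prior(theta) * expected_reward(theta).
    theta = 0.0
    log_target = log_prior(theta) + math.log(expected_reward(theta))
    samples = []
    for _ in range(n_iters):
        prop = theta + random.gauss(0.0, step)
        log_target_prop = log_prior(prop) + math.log(expected_reward(prop))
        # Accept/reject with a symmetric proposal, so the ratio of
        # proposal densities cancels.
        if math.log(random.random()) < log_target_prop - log_target:
            theta, log_target = prop, log_target_prop
        samples.append(theta)
    return samples

samples = mcmc_policy_search()
burn_in = samples[500:]
posterior_mean = sum(burn_in) / len(burn_in)
print(posterior_mean)
```

The chain concentrates between the prior mode (0) and the reward peak (2), as the posterior balances the two; with noisy reward estimates the sampler has a pseudo-marginal flavor, since the 'likelihood' is only estimated.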

Item Type: Article
Uncontrolled Keywords: Reinforcement learning; Markov chain Monte Carlo; particle filtering; risk sensitive reward; policy search; control
Subjects: Q Science > QA Mathematics
ID Code: 37093
Deposited By: Sinan Yıldırım
Deposited On: 27 May 2019 15:38
Last Modified: 27 May 2019 15:38

