Reinforcement learning for POMDP using state classification

Le Tien Dung, Takashi Komeda, Motoki Takagi

Research output: peer-reviewed

17 Citations (Scopus)

Abstract

Reinforcement learning (RL) has been widely used to solve problems with little feedback from the environment. Q-learning can solve Markov decision processes (MDPs) quite well. For partially observable Markov decision processes (POMDPs), a recurrent neural network (RNN) can be used to approximate Q values. However, learning time for these problems is typically very long. We present a new combination of RL and RNN to find a good policy for POMDPs in a shorter learning time. The method consists of two phases: first, the state space is divided into two groups (a fully observable state group and a hidden state group); second, a Q-value table stores the values of fully observable states, while an RNN approximates the values of hidden states. Results of experiments on two grid world problems show that the proposed method enables an agent to acquire a policy with better learning performance than a method using only an RNN.
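To make the two-phase idea concrete, the sketch below shows one possible way to combine a Q-table for observations classified as fully observable with a small Elman-style RNN for observations classified as hidden. This is only an illustration of the structure described in the abstract: the class names, hyperparameters, and the simplified RNN update (a one-step gradient on the output weights, not full RNN training) are assumptions, not the paper's implementation.

```python
import numpy as np

class HybridQAgent:
    """Illustrative sketch: tabular Q-values for fully observable states,
    RNN-approximated Q-values for hidden (perceptually aliased) states."""

    def __init__(self, n_obs, n_actions, hidden_obs, n_hidden=16,
                 alpha=0.1, gamma=0.95, lr=0.01, eps=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.hidden_obs = set(hidden_obs)         # observations judged hidden in phase 1
        self.q_table = np.zeros((n_obs, n_actions))
        # Elman-style RNN: one-hot observation input, recurrent hidden layer
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_obs))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_actions, n_hidden))
        self.h = np.zeros(n_hidden)               # recurrent context (history)
        self.n_obs, self.n_actions = n_obs, n_actions
        self.alpha, self.gamma, self.lr, self.eps = alpha, gamma, lr, eps

    def reset(self):
        self.h = np.zeros_like(self.h)            # clear context at episode start

    def _rnn_step(self, obs, h):
        x = np.zeros(self.n_obs)
        x[obs] = 1.0
        h_new = np.tanh(self.W_in @ x + self.W_rec @ h)
        return self.W_out @ h_new, h_new          # Q-values and next context

    def q_values(self, obs, advance=False):
        if obs in self.hidden_obs:
            q, h_new = self._rnn_step(obs, self.h)
            if advance:
                self.h = h_new                    # commit context only when acting
            return q
        return self.q_table[obs]                  # fully observable: plain table lookup

    def act(self, obs, rng):
        q = self.q_values(obs, advance=True)
        if rng.random() < self.eps:               # epsilon-greedy exploration
            return int(rng.integers(self.n_actions))
        return int(np.argmax(q))

    def update(self, obs, action, reward, next_obs, done):
        # Bootstrap from the next observation's value without advancing the context.
        target = reward if done else reward + self.gamma * np.max(self.q_values(next_obs))
        if obs in self.hidden_obs:
            # Simplified stand-in for RNN training: TD step on output weights only.
            td = target - (self.W_out @ self.h)[action]
            self.W_out[action] += self.lr * td * self.h
        else:
            td = target - self.q_table[obs, action]
            self.q_table[obs, action] += self.alpha * td
```

In this sketch, only the hidden-state group pays the cost of maintaining and training the recurrent network, while the fully observable group keeps the fast tabular update, which is the intuition behind the reported reduction in learning time.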

Original language: English
Pages (from-to): 761-779
Number of pages: 19
Journal: Applied Artificial Intelligence
Volume: 22
Issue number: 7-8
DOI
Publication status: Published - Aug 2008

ASJC Scopus subject areas

  • Artificial Intelligence
