Behavior learning based on a policy gradient method: Separation of environmental dynamics and state-values in policies

Ishihara Seiji, Igarashi Harukazu

Research output: Contribution to journal › Article

1 Citation (Scopus)


Policy gradient methods are useful approaches to reinforcement learning. By applying such a method to behavior learning, each decision problem at a different time step can be treated as a problem of minimizing an objective function. In this paper, we define an objective function consisting of two types of parameters, which represent state-values and environmental dynamics, respectively. In order to separate the learning of state-values from that of environmental dynamics, we also give a separate learning rule for each type of parameter. Furthermore, we show that the same set of state-values can be reused under different environmental dynamics.
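The record does not include the paper's actual formulation, so the following is only a rough sketch of the general idea the abstract describes: a Boltzmann (softmax) policy whose per-action objective combines two separately parameterized terms, one for state-values and one standing in for environmental dynamics, each updated by its own learning rule. The toy chain MDP, the form of the objective, and all names (`V`, `dyn`, the learning rates) are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D chain MDP (illustrative, not from the paper):
# states 0..4, actions move left/right, reaching state 4 pays reward 1.
N_STATES = 5
ACTIONS = (-1, 1)
GAMMA, TEMP = 0.9, 1.0

# Two separate parameter sets, echoing the abstract's separation:
V = np.zeros(N_STATES)                    # state-value parameters
dyn = np.zeros((N_STATES, len(ACTIONS)))  # parameters standing in for environmental dynamics
ALPHA_V, ALPHA_D = 0.1, 0.05              # one learning rate per parameter type

def policy(s):
    """Boltzmann policy over a per-action objective that adds a
    dynamics term and the value of the successor state (assumed form)."""
    succ = [min(max(s + a, 0), N_STATES - 1) for a in ACTIONS]
    energy = dyn[s] + V[succ]
    p = np.exp(energy / TEMP)
    return p / p.sum(), succ

def run_episode(max_steps=50):
    s, traj = 0, []
    for _ in range(max_steps):
        p, succ = policy(s)
        a = rng.choice(len(ACTIONS), p=p)
        s_next = succ[a]
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        traj.append((s, a, r))
        s = s_next
        if r > 0:
            break
    return traj

for _ in range(500):
    traj = run_episode()
    G = 0.0  # discounted return-to-go
    for s, a, r in reversed(traj):
        G = r + GAMMA * G
        p, _ = policy(s)
        # REINFORCE-style gradient of log-softmax w.r.t. the energy terms.
        grad = -p
        grad[a] += 1.0
        dyn[s] += ALPHA_D * G * grad      # learning rule for dynamics parameters
        V[s] += ALPHA_V * (G - V[s])      # separate learning rule for state-values

print("learned state-values:", np.round(V, 2))
```

Because the two parameter sets have independent update rules, one could in principle keep `V` fixed and relearn only `dyn` when the transition behavior of the environment changes, which is the kind of reuse the abstract claims; this sketch does not reproduce the paper's actual derivation.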

Original language: English
Pages (from-to): 1737-1746+15
Journal: IEEJ Transactions on Electronics, Information and Systems
Issue number: 9
Publication status: Published - 2009 Jan 1



Keywords

  • Policy gradient method
  • Pursuit problem
  • Reinforcement learning
  • State transition probabilities

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
