Motion planning of a mobile robot using reinforcement learning

Research output: Article › peer-review

2 Citations (Scopus)

Abstract

In a previous paper, we proposed a solution to the navigation problem of a mobile robot. In our approach, we formulated the following two problems at each time step as discrete optimization problems: 1) estimation of the position and direction of the robot, and 2) action decision. While our simulation results showed the effectiveness of the approach, the values of the weights in the objective functions were set heuristically. This paper presents a theoretical method, based on reinforcement learning, for adjusting the weight parameters in the objective function that encodes heuristic knowledge about the action decision. In our reinforcement-learning formulation, the expected reward given to a robot's trajectory is defined as the value function to be maximized. The robot's trajectories are generated stochastically, because we use a probabilistic policy for determining the robot's actions in order to search for the globally optimal trajectory. However, this decision process is not a Markov decision process, because the objective function includes the action taken at the previous time step. Thus Q-learning, a conventional reinforcement-learning method, cannot be applied to this problem. Instead, we applied Williams's episodic REINFORCE approach to the action decision and derived a learning rule for the weight parameters of the objective function. Moreover, we applied a stochastic hill-climbing method to maximizing the value function in order to reduce computation time. The learning rule was verified experimentally.
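The paper itself derives the exact learning rule; as a rough illustration of the episodic REINFORCE idea described in the abstract, the sketch below updates the weights of a softmax action policy whose score is a weighted sum of heuristic feature terms. All names, the feature representation, and the toy update schedule are assumptions for illustration, not the authors' formulation.

```python
import math

def softmax(scores):
    """Convert objective-function scores of candidate actions to probabilities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def objective(w, features):
    """Weighted sum of heuristic feature terms for one candidate action."""
    return sum(wk * fk for wk, fk in zip(w, features))

def reinforce_episode(w, episode, reward, alpha=0.01, baseline=0.0):
    """One episodic REINFORCE update of the objective-function weights.

    episode: list of (candidates, chosen) pairs, one per time step, where
    candidates[i] is the feature vector of the i-th candidate action and
    chosen is the index of the action actually taken.
    For a softmax policy, d/dw_k log pi(a) = f_k(a) - E_pi[f_k].
    """
    grad = [0.0] * len(w)
    for candidates, chosen in episode:
        probs = softmax([objective(w, f) for f in candidates])
        for k in range(len(w)):
            expected_fk = sum(p * f[k] for p, f in zip(probs, candidates))
            grad[k] += candidates[chosen][k] - expected_fk
    # Scale the accumulated log-likelihood gradient by the episode's reward.
    return [wk + alpha * (reward - baseline) * gk for wk, gk in zip(w, grad)]
```

With zero initial weights and two equally scored candidates, choosing candidate 0 and receiving a positive reward shifts weight toward the features of the chosen action, which is the qualitative behavior the learning rule in the paper is meant to produce.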

Original language: English
Pages (from-to): 501-509
Number of pages: 9
Journal: Transactions of the Japanese Society for Artificial Intelligence
Volume: 16
Issue number: 6
DOI
Publication status: Published - 2001
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
