Knowledge-based recurrent neural networks in reinforcement learning

Tien Dung Le, Takashi Komeda, Motoki Takagi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Recurrent Neural Networks (RNNs) have been shown to have a strong ability to solve some hard problems, but learning time for these problems from scratch is typically very long. For supervised learning, several methods have been proposed to reuse existing knowledge from previous, similar tasks. However, for unsupervised learning such as Reinforcement Learning (RL), especially for Partially Observable Markov Decision Processes (POMDPs), it is difficult to apply these algorithms directly. This paper presents several methods with the potential to transfer knowledge in RL using RNNs: Directed Transfer, Cascade-Correlation, Mixture of Expert Systems, and Two-Level Architecture. Preliminary results of experiments in the E-maze domain show the potential of these methods: knowledge-based learning time for a new problem is much shorter than learning time from scratch, even though the new task looks very different from the previous tasks.
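The Directed Transfer idea named in the abstract (reusing a network trained on a previous task as the starting point for a new one) can be sketched roughly as below. This is an illustrative assumption, not the paper's implementation: `SimpleRNN` and `directed_transfer` are hypothetical names, and the network is a minimal Elman-style RNN rather than whatever architecture the authors used.

```python
import numpy as np

class SimpleRNN:
    """Minimal Elman-style recurrent network (illustrative only)."""
    def __init__(self, n_in, n_hidden, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)  # recurrent hidden state

    def step(self, x):
        # One recurrent step: new hidden state from input and previous state.
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.W_out @ self.h

def directed_transfer(source, target):
    """Initialize the target network with the source network's weights,
    so RL on the new task starts from the old task's knowledge rather
    than from scratch (hypothetical helper)."""
    target.W_in = source.W_in.copy()
    target.W_rec = source.W_rec.copy()
    target.W_out = source.W_out.copy()
    target.h = np.zeros_like(target.h)  # hidden state is not transferred
    return target

# Pretend `src` was trained on a previous task; `tgt` will learn the new one.
src = SimpleRNN(2, 4, 1, rng=np.random.default_rng(1))
tgt = SimpleRNN(2, 4, 1, rng=np.random.default_rng(2))
directed_transfer(src, tgt)
```

After the transfer, the target network initially behaves identically to the source; subsequent RL updates on the new task then fine-tune these weights instead of starting from a random initialization.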

Original language: English
Title of host publication: Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007
Pages: 169-174
Number of pages: 6
Publication status: Published - 2007
Event: 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007 - Palma de Mallorca
Duration: 2007 Aug 29 – 2007 Aug 31

Other

Other: 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007
City: Palma de Mallorca
Period: 07/8/29 – 07/8/31

Keywords

  • Machine learning
  • Neural networks

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Software

Cite this

Le, T. D., Komeda, T., & Takagi, M. (2007). Knowledge-based recurrent neural networks in reinforcement learning. In Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007 (pp. 169-174).

Knowledge-based recurrent neural networks in reinforcement learning. / Le, Tien Dung; Komeda, Takashi; Takagi, Motoki.

Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007. 2007. p. 169-174.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Le, TD, Komeda, T & Takagi, M 2007, Knowledge-based recurrent neural networks in reinforcement learning. in Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007. pp. 169-174, 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007, Palma de Mallorca, 07/8/29.
Le TD, Komeda T, Takagi M. Knowledge-based recurrent neural networks in reinforcement learning. In Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007. 2007. p. 169-174
Le, Tien Dung; Komeda, Takashi; Takagi, Motoki. / Knowledge-based recurrent neural networks in reinforcement learning. Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007. 2007. pp. 169-174
@inproceedings{adbfcc791ab8414c924909809277b753,
title = "Knowledge-based recurrent neural networks in reinforcement learning",
abstract = "Recurrent Neural Networks (RNNs) have been shown to have a strong ability to solve some hard problems, but learning time for these problems from scratch is typically very long. For supervised learning, several methods have been proposed to reuse existing knowledge from previous, similar tasks. However, for unsupervised learning such as Reinforcement Learning (RL), especially for Partially Observable Markov Decision Processes (POMDPs), it is difficult to apply these algorithms directly. This paper presents several methods with the potential to transfer knowledge in RL using RNNs: Directed Transfer, Cascade-Correlation, Mixture of Expert Systems, and Two-Level Architecture. Preliminary results of experiments in the E-maze domain show the potential of these methods: knowledge-based learning time for a new problem is much shorter than learning time from scratch, even though the new task looks very different from the previous tasks.",
keywords = "Machine learning, Neural networks",
author = "Le, {Tien Dung} and Takashi Komeda and Motoki Takagi",
year = "2007",
language = "English",
isbn = "9780889866935",
pages = "169--174",
booktitle = "Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007",

}

TY - GEN

T1 - Knowledge-based recurrent neural networks in reinforcement learning

AU - Le, Tien Dung

AU - Komeda, Takashi

AU - Takagi, Motoki

PY - 2007

Y1 - 2007

N2 - Recurrent Neural Networks (RNNs) have been shown to have a strong ability to solve some hard problems, but learning time for these problems from scratch is typically very long. For supervised learning, several methods have been proposed to reuse existing knowledge from previous, similar tasks. However, for unsupervised learning such as Reinforcement Learning (RL), especially for Partially Observable Markov Decision Processes (POMDPs), it is difficult to apply these algorithms directly. This paper presents several methods with the potential to transfer knowledge in RL using RNNs: Directed Transfer, Cascade-Correlation, Mixture of Expert Systems, and Two-Level Architecture. Preliminary results of experiments in the E-maze domain show the potential of these methods: knowledge-based learning time for a new problem is much shorter than learning time from scratch, even though the new task looks very different from the previous tasks.

AB - Recurrent Neural Networks (RNNs) have been shown to have a strong ability to solve some hard problems, but learning time for these problems from scratch is typically very long. For supervised learning, several methods have been proposed to reuse existing knowledge from previous, similar tasks. However, for unsupervised learning such as Reinforcement Learning (RL), especially for Partially Observable Markov Decision Processes (POMDPs), it is difficult to apply these algorithms directly. This paper presents several methods with the potential to transfer knowledge in RL using RNNs: Directed Transfer, Cascade-Correlation, Mixture of Expert Systems, and Two-Level Architecture. Preliminary results of experiments in the E-maze domain show the potential of these methods: knowledge-based learning time for a new problem is much shorter than learning time from scratch, even though the new task looks very different from the previous tasks.

KW - Machine learning

KW - Neural networks

UR - http://www.scopus.com/inward/record.url?scp=54949115850&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=54949115850&partnerID=8YFLogxK

M3 - Conference contribution

SN - 9780889866935

SP - 169

EP - 174

BT - Proceedings of the 11th IASTED International Conference on Artificial Intelligence and Soft Computing, ASC 2007

ER -