Recurrent Neural Networks (RNNs) have been shown to have a strong ability to solve some hard problems, but learning these problems from scratch typically takes a very long time. For supervised learning, several methods have been proposed to reuse knowledge acquired on previous similar tasks. However, for unsupervised learning such as Reinforcement Learning (RL), especially in Partially Observable Markov Decision Processes (POMDPs), these algorithms are difficult to apply directly. This paper presents several methods with the potential to transfer knowledge in RL using RNNs: Directed Transfer, Cascade-Correlation, Mixture of Expert Systems, and Two-Level Architecture. Preliminary experiments in the E maze domain show the potential of these methods: knowledge-based learning of a new problem takes much less time than learning from scratch, even though the new task looks very different from the previous tasks.