Learning time is always a critical issue in reinforcement learning, especially when recurrent neural networks are used to predict Q-values in non-Markovian environments. Experience reuse has received much attention due to its ability to reduce learning time. In this paper, we propose a new method to reuse experience efficiently. Our method generates new episodes from recorded episodes using an action-pair merger. Both the recorded episodes and the newly generated episodes are replayed after each learning epoch. We compare our method with standard online learning and with learning using experience replay on a vision-based robot problem. The results show the potential of this approach.
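To make the generate-and-replay loop concrete, the following Python sketch illustrates one plausible reading of the approach: two recorded episodes are spliced at a shared (observation, action) pair to produce a new episode, and both recorded and generated episodes are then replayed. The episode representation, the `merge_episodes` helper, and the `train_on_episode` callback are assumptions for illustration, not the paper's actual implementation.

```python
import random

# Assumed representation: an episode is a list of
# (observation, action, reward) steps.

def merge_episodes(ep_a, ep_b):
    """Hypothetical action-pair merger: splice two recorded episodes
    at a step where they share the same (observation, action) pair,
    yielding a new, plausible episode."""
    for i, (obs_a, act_a, _) in enumerate(ep_a):
        for j, (obs_b, act_b, _) in enumerate(ep_b):
            if obs_a == obs_b and act_a == act_b:
                return ep_a[:i] + ep_b[j:]  # splice at the shared pair
    return None  # no shared (observation, action) pair found


def replay_epoch(recorded, train_on_episode, n_generated=10):
    """After a learning epoch, replay all recorded episodes plus newly
    generated ones. `train_on_episode` is an assumed callback that
    updates the recurrent Q-network from a single episode."""
    generated = []
    for _ in range(n_generated):
        ep_a, ep_b = random.sample(recorded, 2)
        merged = merge_episodes(ep_a, ep_b)
        if merged is not None:
            generated.append(merged)
    for episode in recorded + generated:
        train_on_episode(episode)
```

Under this reading, merging lets the learner revisit state-action trajectories it never actually executed, which is what distinguishes the method from plain experience replay of recorded episodes alone.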