Reinforcement learning agents have achieved strong results in robot control and navigation tasks, allowing robots to learn appropriate interactions with an environment in a model-free manner. However, real-world robot systems operate under strict latency, power, and cost constraints, and therefore require dedicated hardware consideration for the demanding computations of neural network inference. Furthermore, reinforcement learning networks must interface efficiently with the various other components of the robot. To address these challenges, we propose a method for accelerating robotics reinforcement learning agents on FPGA hardware at the inference stage and for seamlessly integrating the FPGA hardware module into the robot system by automatically wrapping it in a Robot Operating System 2 (ROS2) node. The proposed system is evaluated in three OpenAI Gym control environments: Cartpole-v1, Acrobot-v1, and Pendulum-v0. In the evaluation, both quantized and non-quantized reinforcement learning neural networks are used, and the proposed FPGA system achieves up to a 3.69x speedup and up to 52.7x better performance per watt compared to an agent running in a ROS2 node on a modern CPU.
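As background for the quantized inference path mentioned above, the following is a minimal sketch of symmetric int8 post-training quantization, a common technique for shrinking network weights before FPGA deployment. The scale computation and the helper names here are illustrative assumptions, not the paper's actual quantization scheme:

```python
# Illustrative sketch: symmetric int8 post-training quantization of a
# weight vector, as commonly used to prepare RL policy networks for
# fixed-point FPGA inference. (Assumption: not the paper's exact scheme.)

def quantize(weights, num_bits=8):
    """Map float weights to signed integers sharing one scale factor."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.031, 0.9]
q, scale = quantize(weights)        # q = [50, -127, 3, 90]
approx = dequantize(q, scale)       # close to the original weights
```

On the FPGA side, the integer weights allow multiply-accumulate operations to run in cheap fixed-point DSP blocks, with a single rescale back to floating point at the output; this trade-off of a small accuracy loss for lower latency and power is what the quantized-versus-non-quantized comparison in the evaluation measures.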