We present an approach to learning and generating movements for robot actions from human demonstrations using the Dynamical Movement Primitives (DMP) framework. Human hand movements are recorded by a motion tracker based on a Kinect sensor and a color-marker glove. We segment an observed movement into simple motion units, called motion primitives, and encode each motion primitive with a DMP model. These DMP models generate a desired movement by learning from a sample movement, with the ability to generalize and adapt to new situations such as a change of the desired goal. We extend the standard DMP formulation to multi-dimensional data: the 3D hand position serves as the control signal for the movement trajectory, the hand orientation representation as the control signal for the robot end-effector orientation, and the distance between two fingers as the control signal for the opening/closing state of a robot gripper.
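For concreteness, the sketch below shows a minimal one-dimensional DMP in the standard Ijspeert-style formulation (a spring-damper transformation system driven by a learned forcing term and a canonical phase variable, with tau = 1). The class name, gains, basis count, and use of NumPy are our illustrative assumptions, not details from this work; in the multi-dimensional extension described above, each control signal (position axis, orientation component, finger distance) would have its own transformation system while sharing one canonical phase.

```python
import numpy as np

class DMP1D:
    """Minimal one-dimensional Dynamical Movement Primitive (tau = 1).

    Hypothetical illustration: the gains and basis count are common
    textbook choices, not the values used in this work.
    """

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Gaussian basis centers, spaced evenly in time and mapped to phase.
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = 1.0 / np.gradient(self.c) ** 2  # widths from center spacing
        self.w = np.zeros(n_basis)
        self.y0, self.g = 0.0, 1.0

    def _forcing(self, x, g):
        # Weighted sum of basis functions, scaled by phase and goal distance.
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from one demonstrated trajectory."""
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * dt * np.arange(len(y_demo)))  # phase
        # Forcing term the demonstration implies for the spring-damper system.
        f_target = ydd - self.alpha_z * (self.beta_z * (self.g - y_demo) - yd)
        s = x * (self.g - self.y0)
        for i in range(len(self.w)):  # locally weighted regression per basis
            psi_i = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = (s * psi_i) @ f_target / ((s * psi_i) @ s + 1e-10)

    def rollout(self, g=None, dt=0.01, n_steps=100):
        """Integrate the DMP toward a (possibly new) goal g."""
        g = self.g if g is None else g
        y, z, x = self.y0, 0.0, 1.0
        traj = np.empty(n_steps)
        for k in range(n_steps):
            zd = self.alpha_z * (self.beta_z * (g - y) - z) + self._forcing(x, g)
            y, z = y + z * dt, z + zd * dt
            x += -self.alpha_x * x * dt  # canonical system decays the phase
            traj[k] = y
        return traj

# Learn a smooth reach from 0 to 1, then adapt the same shape to goal 0.5.
t = np.linspace(0.0, 1.0, 100)
demo = t ** 2 * (3.0 - 2.0 * t)      # smoothstep demonstration trajectory
dmp = DMP1D()
dmp.fit(demo, dt=t[1] - t[0])
adapted = dmp.rollout(g=0.5)         # same movement shape, new goal
```

The rollout with a new goal illustrates the generalization property the section relies on: because the goal enters the transformation system and the forcing-term scaling directly, the learned movement shape is preserved while the endpoint changes.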