An approach to learn hand movements for robot actions from human demonstrations

P. N. Hung, Takashi Yoshimi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present an approach to learn and generate movements for robot actions from human demonstrations using the Dynamical Movement Primitives (DMPs) framework. The human hand movements are recorded by a motion tracker using a Kinect sensor with a color-marker glove. We segment an observed movement into simple motion units called motion primitives. Each motion primitive is then encoded by a DMP model. These DMP models are used to generate a desired movement by learning from a sample movement, with the ability to generalize and adapt to new situations such as a change of the desired goal. We extend the standard DMPs to multi-dimensional data, including the hand 3D position as the control signal for the movement trajectory, the hand orientation representation as the control signal for the robot end-effector orientation, and the distance between two fingers as the control signal for the opening/closing state of a robot gripper.
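For readers unfamiliar with DMPs, the sketch below (Python/NumPy) shows a minimal one-dimensional discrete DMP of the standard form: a decaying canonical phase, a Gaussian-basis forcing term fitted by locally weighted regression from one demonstration, and reproduction toward a possibly changed goal. This is an illustrative assumption, not the authors' implementation; the paper's segmentation step and its multi-dimensional extensions (3D position, hand orientation, finger distance for the gripper) are not reproduced here, and all parameter values are generic defaults.

import numpy as np

class DMP1D:
    """Minimal discrete DMP for one scalar control signal (illustrative sketch)."""

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
        self.n_basis = n_basis
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Basis-function centers spaced along the decaying canonical phase x
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = 1.0 / (np.diff(self.c, append=self.c[-1]) ** 2 + 1e-8)
        self.w = np.zeros(n_basis)
        self.y0, self.g = 0.0, 1.0

    def _psi(self, x):
        # Gaussian basis activations; returns shape (len(x), n_basis)
        x = np.atleast_1d(x)[:, None]
        return np.exp(-self.h[None, :] * (x - self.c[None, :]) ** 2)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from one demonstrated trajectory."""
        y_demo = np.asarray(y_demo, dtype=float)
        T = len(y_demo)
        tau = (T - 1) * dt
        self.y0, self.g = float(y_demo[0]), float(y_demo[-1])
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / tau)  # canonical phase
        # Invert the transformation system to obtain the target forcing term
        f_target = tau ** 2 * ydd - self.alpha_z * (self.beta_z * (self.g - y_demo) - tau * yd)
        scale = x * (self.g - self.y0)
        psi = self._psi(x)
        # Locally weighted regression: one weight per basis function
        for i in range(self.n_basis):
            num = np.sum(scale * psi[:, i] * f_target)
            den = np.sum(scale ** 2 * psi[:, i]) + 1e-10
            self.w[i] = num / den
        return self

    def rollout(self, goal=None, tau=1.0, dt=0.01):
        """Reproduce the movement, optionally adapted to a new goal."""
        g = self.g if goal is None else goal
        y, z, x = self.y0, 0.0, 1.0
        traj = []
        for _ in range(int(round(tau / dt))):
            psi = self._psi(x)[0]
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - self.y0)
            zd = (self.alpha_z * (self.beta_z * (g - y) - z) + f) / tau
            yd = z / tau
            z, y = z + zd * dt, y + yd * dt
            x += (-self.alpha_x * x / tau) * dt
            traj.append(y)
        return np.array(traj)

# Example: learn a smooth reaching profile and replay it toward a new goal
t = np.linspace(0.0, 1.0, 200)
demo = 10 * t ** 3 - 15 * t ** 4 + 6 * t ** 5          # 0 -> 1 minimum-jerk-like profile
dmp = DMP1D().fit(demo, dt=t[1] - t[0])
adapted = dmp.rollout(goal=1.5)                        # same movement shape, new goal

Following the abstract, one such model would be trained per control dimension (the 3D hand position components, the hand orientation representation, and the two-finger distance for the gripper), and generalization to a new situation corresponds to replaying with a changed goal.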

Language: English
Title of host publication: SII 2016 - 2016 IEEE/SICE International Symposium on System Integration
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 711-716
Number of pages: 6
ISBN (Electronic): 9781509033294
DOIs: 10.1109/SII.2016.7844083
State: Published - 6 Feb 2017
Event: 2016 IEEE/SICE International Symposium on System Integration, SII 2016 - Sapporo, Japan
Duration: 13 Dec 2016 - 15 Dec 2016

Other

Other: 2016 IEEE/SICE International Symposium on System Integration, SII 2016
Country: Japan
City: Sapporo
Period: 16/12/13 - 16/12/15


ASJC Scopus subject areas

  • Biomedical Engineering
  • Control and Systems Engineering
  • Mechanical Engineering
  • Artificial Intelligence
  • Hardware and Architecture
  • Control and Optimization

Cite this

Hung, P. N., & Yoshimi, T. (2017). An approach to learn hand movements for robot actions from human demonstrations. In SII 2016 - 2016 IEEE/SICE International Symposium on System Integration (pp. 711-716). [7844083] Institute of Electrical and Electronics Engineers Inc. DOI: 10.1109/SII.2016.7844083

An approach to learn hand movements for robot actions from human demonstrations. / Hung, P. N.; Yoshimi, Takashi.

SII 2016 - 2016 IEEE/SICE International Symposium on System Integration. Institute of Electrical and Electronics Engineers Inc., 2017. p. 711-716 7844083.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Hung, PN & Yoshimi, T 2017, An approach to learn hand movements for robot actions from human demonstrations. in SII 2016 - 2016 IEEE/SICE International Symposium on System Integration., 7844083, Institute of Electrical and Electronics Engineers Inc., pp. 711-716, 2016 IEEE/SICE International Symposium on System Integration, SII 2016, Sapporo, Japan, 16/12/13. DOI: 10.1109/SII.2016.7844083
Hung PN, Yoshimi T. An approach to learn hand movements for robot actions from human demonstrations. In: SII 2016 - 2016 IEEE/SICE International Symposium on System Integration. Institute of Electrical and Electronics Engineers Inc. 2017. p. 711-716. 7844083. Available from, DOI: 10.1109/SII.2016.7844083
Hung, P. N. ; Yoshimi, Takashi. / An approach to learn hand movements for robot actions from human demonstrations. SII 2016 - 2016 IEEE/SICE International Symposium on System Integration. Institute of Electrical and Electronics Engineers Inc., 2017. pp. 711-716
@inproceedings{50103c8f20df49c0af20131766c0b624,
title = "An approach to learn hand movements for robot actions from human demonstrations",
abstract = "We present an approach to learn and generate movements for robot actions from human demonstrations using the Dynamical Movement Primitives (DMPs) framework. The human hand movements are recorded by a motion tracker using a Kinect sensor with a color-marker glove. We segment an observed movement into simple motion units called motion primitives. Each motion primitive is then encoded by a DMP model. These DMP models are used to generate a desired movement by learning from a sample movement, with the ability to generalize and adapt to new situations such as a change of the desired goal. We extend the standard DMPs to multi-dimensional data, including the hand 3D position as the control signal for the movement trajectory, the hand orientation representation as the control signal for the robot end-effector orientation, and the distance between two fingers as the control signal for the opening/closing state of a robot gripper.",
author = "Hung, {P. N.} and Takashi Yoshimi",
year = "2017",
month = "2",
day = "6",
doi = "10.1109/SII.2016.7844083",
language = "English",
pages = "711--716",
booktitle = "SII 2016 - 2016 IEEE/SICE International Symposium on System Integration",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
address = "United States",

}

TY - GEN

T1 - An approach to learn hand movements for robot actions from human demonstrations

AU - Hung,P. N.

AU - Yoshimi,Takashi

PY - 2017/2/6

Y1 - 2017/2/6

N2 - We present an approach to learn and generate movements for robot actions from human demonstrations using the Dynamical Movement Primitives (DMPs) framework. The human hand movements are recorded by a motion tracker using a Kinect sensor with a color-marker glove. We segment an observed movement into simple motion units called motion primitives. Each motion primitive is then encoded by a DMP model. These DMP models are used to generate a desired movement by learning from a sample movement, with the ability to generalize and adapt to new situations such as a change of the desired goal. We extend the standard DMPs to multi-dimensional data, including the hand 3D position as the control signal for the movement trajectory, the hand orientation representation as the control signal for the robot end-effector orientation, and the distance between two fingers as the control signal for the opening/closing state of a robot gripper.

AB - We present an approach to learn and generate movements for robot actions from human demonstrations using the Dynamical Movement Primitives (DMPs) framework. The human hand movements are recorded by a motion tracker using a Kinect sensor with a color-marker glove. We segment an observed movement into simple motion units called motion primitives. Each motion primitive is then encoded by a DMP model. These DMP models are used to generate a desired movement by learning from a sample movement, with the ability to generalize and adapt to new situations such as a change of the desired goal. We extend the standard DMPs to multi-dimensional data, including the hand 3D position as the control signal for the movement trajectory, the hand orientation representation as the control signal for the robot end-effector orientation, and the distance between two fingers as the control signal for the opening/closing state of a robot gripper.

UR - http://www.scopus.com/inward/record.url?scp=85015457312&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85015457312&partnerID=8YFLogxK

U2 - 10.1109/SII.2016.7844083

DO - 10.1109/SII.2016.7844083

M3 - Conference contribution

SP - 711

EP - 716

BT - SII 2016 - 2016 IEEE/SICE International Symposium on System Integration

PB - Institute of Electrical and Electronics Engineers Inc.

ER -