Head pose estimation and motion analysis of public speaking videos

Rinko Komiya, Takeshi Saitoh, Miharu Fuyuno, Yuko Yamashita, Yoshitaka Nakajima

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Public speaking is an essential skill in a wide variety of professions and in everyday life, yet it can be difficult to master. This paper focuses on the automatic assessment of nonverbal facial behavior during public speaking and proposes simple and efficient methods of head pose estimation and motion analysis. The authors collected nine speech videos from a recitation contest and six from an oration contest, both held at a Japanese high school, and applied the proposed methods to evaluate the contestants' performance. The head pose estimation method produced results with an acceptable level of accuracy on the speech videos, and the motion analysis method calculates the frequencies and moving ranges of head motion. The authors found that the proposed parameters are strongly correlated with the eye-contact score and that the frequency and moving-range parameters are therefore suitable for evaluating public speaking. On the basis of these features, a teacher can provide accurate feedback to help a speaker improve.
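The article does not reproduce its implementation here; the sketch below is only a hypothetical Python illustration of the kind of pipeline the abstract describes (head angles estimated from 2D facial feature points, then the moving range and frequency of the resulting angle series). The 3D model points, the focal-length guess, OpenCV's solvePnP, and all function names are assumptions for illustration, not the authors' actual method.

```python
import numpy as np
import cv2

# Generic 3D reference points (nose tip, chin, eye corners, mouth corners).
# Illustrative values only; not the face model used in the paper.
MODEL_POINTS = np.array([
    (0.0,    0.0,    0.0),   # nose tip
    (0.0,  -63.6,  -12.5),   # chin
    (-43.3,  32.7,  -26.0),  # left eye outer corner
    ( 43.3,  32.7,  -26.0),  # right eye outer corner
    (-28.9, -28.9,  -24.1),  # left mouth corner
    ( 28.9, -28.9,  -24.1),  # right mouth corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """Estimate (yaw, pitch, roll) in degrees from six 2D facial feature points."""
    h, w = frame_size
    focal = w  # rough focal-length guess; no camera calibration assumed
    camera_matrix = np.array([[focal, 0.0, w / 2.0],
                              [0.0, focal, h / 2.0],
                              [0.0, 0.0, 1.0]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvec)
    # Euler angles from the rotation matrix.
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arctan2(-R[2, 0], sy))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return yaw, pitch, roll

def motion_parameters(angle_series, fps):
    """Moving range and dominant frequency (Hz) of a head-angle time series."""
    angles = np.asarray(angle_series, dtype=np.float64)
    moving_range = angles.max() - angles.min()
    spectrum = np.abs(np.fft.rfft(angles - angles.mean()))
    freqs = np.fft.rfftfreq(len(angles), d=1.0 / fps)
    dominant_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return moving_range, dominant_freq
```

As a usage example, one would run a landmark detector on each frame, pass the six detected 2D points (as an Nx2 float array) to head_pose, collect the per-frame yaw values, and then call motion_parameters(yaw_values, fps) to obtain range and frequency features comparable in spirit to those the paper correlates with eye-contact scores.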

Original language: English
Pages (from-to): 57-71
Number of pages: 15
Journal: International Journal of Software Innovation
Volume: 5
Issue number: 1
DOIs
Publication status: Published - 2017 Jan 1

Keywords

  • English Oration Contest
  • English Recitation Contest
  • Facial Feature Point
  • Head Pose Estimation
  • Image Processing
  • Motion Analysis
  • Speech Video

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Computer Graphics and Computer-Aided Design
  • Artificial Intelligence
