When should animated agents give additional instructions to users? - Monitoring user's understanding in multimodal dialogues -

Kazuyoshi Murata, Mika Enomoto, Yoshiko Arimoto, Yukiko Nakano

Research output: Conference contribution

2 Citations (Scopus)

Abstract

In multimodal communication, verbal and nonverbal behaviors, such as gestures and the manipulation of objects in a workspace, occur in parallel and are coordinated with proper timing relative to each other. This paper focuses on the interaction between a novice user operating a video recorder application on a PC and a multimodal animated help agent, and presents a probabilistic model of fine-grained timing dependencies among behaviors in different modalities. First, we collect user-agent dialogues in a Wizard-of-Oz experimental setting. The collected verbal and nonverbal behavior data are then used to build a Bayesian network model that predicts the likelihood of a successful mouse click in the near future, given evidence about the status of the speech, the agent's gestures, and the user's mouse actions. Finally, we attempt to determine the proper timing at which the agent should give additional instructions by estimating the likelihood of a mouse click occurring.
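As a rough illustration of the kind of inference the abstract describes (this is not the authors' actual model; the network structure, variable names, and all probabilities below are invented for the sketch), the following self-contained Python snippet enumerates a small three-parent Bayesian network to estimate the probability of an imminent mouse click given whichever multimodal cues have been observed:

```python
# Minimal sketch of Bayesian-network inference over multimodal cues.
# The structure and every probability here are hypothetical.
from itertools import product

# Binary variables: 1 = present/occurred, 0 = absent.
# Hypothetical priors for the parent nodes.
P_SPEECH = {1: 0.5, 0: 0.5}    # agent's instruction utterance finished
P_GESTURE = {1: 0.4, 0: 0.6}   # agent's pointing gesture shown
P_MOUSE = {1: 0.3, 0: 0.7}     # recent user mouse movement

# Hypothetical CPT: P(Click = 1 | Speech, Gesture, Mouse).
P_CLICK = {
    (1, 1, 1): 0.90, (1, 1, 0): 0.70,
    (1, 0, 1): 0.60, (1, 0, 0): 0.35,
    (0, 1, 1): 0.40, (0, 1, 0): 0.20,
    (0, 0, 1): 0.15, (0, 0, 0): 0.05,
}

def p_click(evidence):
    """P(Click = 1 | an observed subset of {speech, gesture, mouse}),
    computed by enumerating the unobserved parent variables."""
    num = den = 0.0
    for s, g, m in product((0, 1), repeat=3):
        # Skip parent assignments inconsistent with the evidence.
        if evidence.get("speech", s) != s:
            continue
        if evidence.get("gesture", g) != g:
            continue
        if evidence.get("mouse", m) != m:
            continue
        w = P_SPEECH[s] * P_GESTURE[g] * P_MOUSE[m]
        num += w * P_CLICK[(s, g, m)]
        den += w
    return num / den

# E.g. instruction finished and gesture shown, mouse state unobserved:
print(p_click({"speech": 1, "gesture": 1}))  # 0.76 with these numbers
```

Under this reading of the paper, a predicted click likelihood that stays low after an instruction would be the agent's cue to offer an additional instruction.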

Original language: English
Title of host publication: ICCAS 2007 - International Conference on Control, Automation and Systems
Pages: 733-736
Number of pages: 4
DOI
Publication status: Published - 1 Dec 2007
Event: International Conference on Control, Automation and Systems, ICCAS 2007 - Seoul, Korea, Republic of
Duration: 17 Oct 2007 – 20 Oct 2007

Publication series

Name: ICCAS 2007 - International Conference on Control, Automation and Systems

Other

Other: International Conference on Control, Automation and Systems, ICCAS 2007
Country: Korea, Republic of
City: Seoul
Period: 07/10/17 – 07/10/20

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering

