When should animated agents give additional instructions to users? - Monitoring user's understanding in multimodal dialogues -

Kazuyoshi Murata, Mika Enomoto, Yoshiko Arimoto, Yukiko Nakano

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

In multimodal communication, verbal and nonverbal behaviors such as gestures and the manipulation of objects in a workspace occur in parallel and must be properly timed with respect to each other. This paper focuses on the interaction between a novice user operating a video recorder application on a PC and a multimodal animated help agent, and presents a probabilistic model of fine-grained timing dependencies among behaviors in different modalities. First, we collect user-agent dialogues in a Wizard-of-Oz experimental setting. The collected verbal and nonverbal behavior data are then used to build a Bayesian network model that predicts the likelihood of a successful mouse click in the near future, given evidence about the status of speech, the agent's gestures, and the user's mouse actions. Finally, we attempt to determine the proper timing for the agent to give additional instructions by estimating the likelihood of a mouse click occurrence.
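The core idea in the abstract, estimating whether a successful click is likely given multimodal evidence, can be sketched as a small hand-coded Bayesian network. The Python sketch below is illustrative only: the variable names, the naive-Bayes structure with a hidden "understanding" node, and all probability values are assumptions, not the authors' model or its learned parameters.

from typing import Dict

# Binary variables (an illustrative naive-Bayes-shaped network):
#   U = user has understood/grounded the current instruction (hidden)
#   S = agent has finished the verbal instruction
#   G = agent's pointing gesture is visible
#   M = user has recently moved the mouse toward the target
#   C = a successful mouse click will occur in the near future
# All probability values below are made-up assumptions, not learned parameters.
P_U: Dict[bool, float] = {True: 0.6, False: 0.4}    # prior P(U)
P_S_GIVEN_U = {True: 0.9, False: 0.5}               # P(S=True | U)
P_G_GIVEN_U = {True: 0.7, False: 0.4}               # P(G=True | U)
P_M_GIVEN_U = {True: 0.8, False: 0.3}               # P(M=True | U)
P_C_GIVEN_U = {True: 0.85, False: 0.15}             # P(C=True | U)

def likelihood(observed: bool, table: Dict[bool, float], u: bool) -> float:
    # P(observation = observed | U = u) for a binary evidence node.
    return table[u] if observed else 1.0 - table[u]

def p_click(speech_done: bool, gesture_shown: bool, mouse_moved: bool) -> float:
    # P(C = True | S, G, M), computed by summing out the hidden state U.
    numerator, normalizer = 0.0, 0.0
    for u in (True, False):
        joint = (P_U[u]
                 * likelihood(speech_done, P_S_GIVEN_U, u)
                 * likelihood(gesture_shown, P_G_GIVEN_U, u)
                 * likelihood(mouse_moved, P_M_GIVEN_U, u))
        numerator += joint * P_C_GIVEN_U[u]
        normalizer += joint
    return numerator / normalizer

# Decision-rule sketch: if the predicted click likelihood stays low even though
# the instruction and gesture have been delivered, give an additional instruction.
if p_click(speech_done=True, gesture_shown=True, mouse_moved=False) < 0.5:
    print("agent: give additional instruction")
else:
    print("agent: wait for the user's click")

In the setting described by the abstract, such an estimate would be recomputed as new speech, gesture, and mouse events arrive, and an additional instruction would be triggered when the estimated click likelihood remains low.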

Original language: English
Title of host publication: ICCAS 2007 - International Conference on Control, Automation and Systems
Pages: 733-736
Number of pages: 4
ISBN (Print): 8995003871, 9788995003879
DOI: https://doi.org/10.1109/ICCAS.2007.4406995
Publication status: Published - 2007
Externally published: Yes
Event: International Conference on Control, Automation and Systems, ICCAS 2007 - Seoul, Korea, Republic of
Duration: 2007 Oct 17 - 2007 Oct 20

Other

Other: International Conference on Control, Automation and Systems, ICCAS 2007
Country: Korea, Republic of
City: Seoul
Period: 07/10/17 - 07/10/20

Keywords

  • Animated agent
  • Bayesian network
  • Instruction dialogues

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Cite this

Murata, K., Enomoto, M., Arimoto, Y., & Nakano, Y. (2007). When should animated agents give additional instructions to users? - Monitoring user's understanding in multimodal dialogues -. In ICCAS 2007 - International Conference on Control, Automation and Systems (pp. 733-736). [4406995] https://doi.org/10.1109/ICCAS.2007.4406995
