Speech Database Design for a Concatenative Text-to-Speech Synthesis System for Individuals with Communication Disorders

Akemi Iida, Nick Campbell

Research output: Article (peer-reviewed)

18 Citations (Scopus)

Abstract

ATR's CHATR is a corpus-based text-to-speech (TTS) synthesis system that selects concatenation units from a natural speech database. The system's approach enables us to create a voice output communication aid (VOCA) using the voices of individuals who are anticipating the loss of phonatory functions. The advantage of CHATR is that individuals can use their own voice for communication even after vocal loss. This paper reports on a case study of the development of a VOCA using recordings of Japanese read speech (i.e., oral reading) from an individual with amyotrophic lateral sclerosis (ALS). In addition to using the individual's speech, we designed a speech database that could reproduce the characteristics of natural utterances in both general and specific situations. We created three speech corpora in Japanese to synthesize ordinary daily speech (i.e., in a normal speaking style): (1) a phonetically balanced sentence set, to assure that the system was able to synthesize all speech sounds; (2) readings of manuscripts, written by the same individual, for synthesizing talks regularly given as a source of natural intonation, articulation and voice quality; and (3) words and short phrases, to provide daily vocabulary entries for reproducing natural utterances in predictable situations. By combining one or more corpora, we were able to create four kinds of source database for CHATR synthesis. Using each source database, we synthesized speech from six test sentences. We selected the source database to use by observing selected units of synthesized speech and by performing perceptual experiments where we presented the speech to 20 Japanese native speakers. Analyzing the results of both observations and evaluations, we selected a source database compiled from all corpora. Incorporating CHATR, the selected source database, and an input acceleration function, we developed a VOCA for the individual to use in his daily life. We also created emotional speech source databases designed for loading separately to the VOCA in addition to the compiled speech database.
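To illustrate the corpus-based selection the abstract refers to, the sketch below shows a generic unit-selection search: for each target phone, one recorded unit is chosen from the database so that the accumulated target cost (mismatch with the desired prosodic target) plus join cost (discontinuity at each concatenation point) is minimal. The features, cost functions, and data structures here are simplified assumptions for illustration only, not CHATR's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    phone: str     # phone label of the recorded unit
    pitch: float   # mean F0 (Hz) -- toy acoustic feature
    energy: float  # mean energy -- toy acoustic feature

def target_cost(unit: Unit, target_pitch: float) -> float:
    """Penalty for a candidate that mismatches the desired prosodic target."""
    return abs(unit.pitch - target_pitch) / 100.0

def join_cost(prev: Unit, cur: Unit) -> float:
    """Penalty for an audible discontinuity at the concatenation point."""
    return abs(prev.pitch - cur.pitch) / 100.0 + abs(prev.energy - cur.energy)

def select_units(candidates, target_pitches):
    """Viterbi search over per-phone candidate lists for the cheapest unit sequence."""
    # best[i][j] = (accumulated cost, backpointer) when phone i uses candidate j
    best = [[(target_cost(u, target_pitches[0]), -1) for u in candidates[0]]]
    for i in range(1, len(candidates)):
        row = []
        for u in candidates[i]:
            tc = target_cost(u, target_pitches[i])
            cost, back = min(
                (best[i - 1][k][0] + join_cost(p, u) + tc, k)
                for k, p in enumerate(candidates[i - 1])
            )
            row.append((cost, back))
        best.append(row)
    # Trace back the cheapest path through the candidate lattice.
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = []
    for i in range(len(candidates) - 1, -1, -1):
        path.append(candidates[i][j])
        j = best[i][j][1]
    return list(reversed(path))

# Toy usage: two candidate lists drawn from a (tiny) recorded database.
db = [
    [Unit("a", 120.0, 0.8), Unit("a", 180.0, 0.7)],
    [Unit("i", 125.0, 0.8), Unit("i", 200.0, 0.6)],
]
print(select_units(db, target_pitches=[120.0, 130.0]))
```

In this toy setting, combining corpora (as the study does) simply means enlarging the candidate lists per phone, which gives the search more units to choose from and lowers the achievable cost.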

Original language: English
Pages (from-to): 379-392
Number of pages: 14
Journal: International Journal of Speech Technology
Volume: 6
Issue number: 4
DOI
Publication status: Published - Oct 2003
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Language and Linguistics
  • Human-Computer Interaction
  • Linguistics and Language
  • Computer Vision and Pattern Recognition
