Testing the performance of spoken dialogue systems by means of an artificially simulated user




Publisher
Springer Journals
Copyright
Copyright © 2007 by Springer Science+Business Media B.V.
Subject
Computer Science; Complexity; Computer Science, general; Artificial Intelligence (incl. Robotics)
ISSN
0269-2821
eISSN
1573-7462
DOI
10.1007/s10462-007-9059-9

Abstract

This paper proposes a new technique to test the performance of spoken dialogue systems by artificially simulating the behaviour of three types of user (very cooperative, cooperative and not very cooperative) interacting with a system by means of spoken dialogues. Experiments using the technique were carried out to test the performance of a previously developed dialogue system designed for the fast-food domain and working with two kinds of language model for automatic speech recognition: one based on 17 prompt-dependent language models, and the other based on one prompt-independent language model. The use of the simulated user enables the identification of problems relating to the speech recognition, spoken language understanding, and dialogue management components of the system. In particular, in these experiments problems were encountered with the recognition and understanding of postal codes and addresses and with the lengthy sequences of repetitive confirmation turns required to correct these errors. By employing a simulated user in a range of different experimental conditions sufficient data can be generated to support a systematic analysis of potential problems and to enable fine-grained tuning of the system.
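The paper does not publish code; the following is a minimal sketch of the simulated-user idea described in the abstract. All names, slot labels, and cooperativeness probabilities are assumptions introduced here for illustration: a user model with one of three cooperativeness levels answers system prompts in a slot-filling fast-food dialogue, and a test harness counts how many turns each user type needs to complete an order.

```python
import random

# Hypothetical sketch (names and probabilities are not from the paper):
# chance that the user answers exactly the slot the system asked for.
COOPERATIVENESS = {
    "very cooperative": 1.0,
    "cooperative": 0.7,
    "not very cooperative": 0.4,
}

class SimulatedUser:
    def __init__(self, level, seed=0):
        self.p_direct = COOPERATIVENESS[level]
        self.rng = random.Random(seed)

    def respond(self, asked_slot, goal):
        """Return the slot/value pairs the user utters for this prompt."""
        if self.rng.random() < self.p_direct or len(goal) == 1:
            return {asked_slot: goal[asked_slot]}
        # Uncooperative behaviour: answer a different slot than the one asked.
        other = self.rng.choice([s for s in goal if s != asked_slot])
        return {other: goal[other]}

def run_dialogue(user, goal, max_turns=30):
    """Drive a slot-filling dialogue until the order is complete or turns run out."""
    filled, turns = {}, 0
    while len(filled) < len(goal) and turns < max_turns:
        missing = next(s for s in goal if s not in filled)  # system's next prompt
        filled.update(user.respond(missing, goal))
        turns += 1
    return turns, len(filled) == len(goal)

goal = {"food": "burger", "drink": "cola", "postal_code": "18071"}
for level in COOPERATIVENESS:
    turns, ok = run_dialogue(SimulatedUser(level), goal)
    print(f"{level}: {turns} turns, completed={ok}")
```

Running many such dialogues per user type, as the paper does across different language-model conditions, yields turn counts and completion rates that expose where the system wastes turns — for instance on hard-to-recognise slots such as postal codes.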

Journal

Artificial Intelligence Review (Springer Journals)

Published: Nov 23, 2007
