LyrebirdTM: Developing Spoken Dialog Systems Using Examples
2002
https://doi.org/10.1007/3-540-45790-9_29
3 pages
Abstract
An early-release software product for the rapid development of spoken dialog systems (SDSs), known as Lyrebird™ [1][2][3], will be demonstrated. Lyrebird™ uses grammatical inference to build natural-language, mixed-initiative speech recognition applications. The demonstration will consist of the presenter developing a spoken dialog system using Lyrebird™, and will include some features that are still in the prototype phase.
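To make the idea of building a dialog grammar from examples concrete, here is a toy sketch in the programming-by-example spirit [4]. It is purely illustrative and is not Lyrebird™'s actual algorithm: it infers a single one-slot template rule by taking the longest common prefix and suffix of a set of example utterances, treating the varying middle as a slot.

```python
# Toy sketch (assumed, not Lyrebird's actual method): infer a one-slot
# template rule from example utterances by aligning shared words.

def infer_template(examples):
    """Return (prefix, suffix) word lists such that each example
    is prefix + <slot> + suffix."""
    tokenized = [e.split() for e in examples]
    # Longest common prefix across all token lists.
    prefix = []
    for words in zip(*tokenized):
        if len(set(words)) == 1:
            prefix.append(words[0])
        else:
            break
    # Longest common suffix, computed on reversed token lists.
    suffix = []
    for words in zip(*(list(reversed(t)) for t in tokenized)):
        if len(set(words)) == 1:
            suffix.append(words[0])
        else:
            break
    suffix.reverse()
    return prefix, suffix

examples = [
    "i want to fly to melbourne please",
    "i want to fly to sydney please",
    "i want to fly to perth please",
]
prefix, suffix = infer_template(examples)
# Prints: rule: S -> i want to fly to <SLOT> please
print("rule: S ->", " ".join(prefix), "<SLOT>", " ".join(suffix))
```

A real grammatical-inference engine would of course generalize far beyond a single slot (attribute grammars, recursion, typed slots), but the example shows the core idea of turning concrete utterances into a reusable rule.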
Related papers
2000
The design and implementation of the AT&T Communicator mixed-initiative spoken dialog system is described. The Communicator project, sponsored by DARPA and launched in 1999, is a multi-year, multi-site project on advanced spoken dialog systems research. The main focus of this paper is on issues related to the design of mixed-initiative systems. In addition to describing our architecture and implementation of the complex travel task, the paper reports on some preliminary evaluation results.
Dialogue systems are increasingly used in everyday life. However, such systems are usually difficult for lay people to create. Simpler methods have been proposed, although with less powerful understanding capability. After introducing and comparing these approaches, the grammar-parsing component, named GrammarTool, of a natural language text-interface Spoken Dialogue System (SDS) toolkit, named SDS Lite, is described in detail. Our system deals with Chinese dialogue and makes it easy for lay people to learn and create their own SDSs.
Development environments for spoken dialogue processing systems are of particular interest because the turnaround time for a dialogue system is high, while at the same time many components can be reused with little or no modification. We describe an Integrated Development Environment (IDE) for spoken dialogue systems. The IDE allows application designers to interactively specify reusable building blocks called dialogue packages. Each dialogue package consists of an assembly of data sources, including an object-oriented domain model, a task model, and grammars. We show how the dialogue packages can be specified through a graphical user interface with the help of a wizard.
Proc. ICSLP, 2002
The rapidly expanding voice recognition industry has so far shown a preference for grammar-based language modelling, despite the better overall performance of statistical language modelling. Given that the advantages of the grammar-based approach make it unlikely to be replaced as the primary solution in the near future, it is natural to wonder whether some combination of the two approaches may prove useful. Here, we describe an implemented system that uses statistical language modelling and a decision-tree classifier to provide the user with some feedback when grammar-based recognition fails. Users of this system had more successful interactions than did users of a control system.
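The fallback pattern described above can be sketched in a few lines. This is a hedged toy illustration, not the paper's system: a regular expression stands in for the strict in-coverage grammar, and a keyword lookup stands in for the statistical model plus decision-tree classifier that generates feedback on out-of-coverage input.

```python
import re

# Toy in-coverage grammar: the single utterance pattern the recognizer accepts.
GRAMMAR = re.compile(r"^book a (flight|train) to \w+$")

# Crude stand-in for the statistical fallback: keyword cues mapped to
# feedback messages, playing the role of the decision-tree classifier.
FEEDBACK_CUES = [
    (("flight", "fly", "plane"), "Try: 'book a flight to <city>'."),
    (("train", "rail"), "Try: 'book a train to <city>'."),
]

def recognize(utterance):
    """Parse in-grammar input; otherwise return coverage feedback."""
    if GRAMMAR.match(utterance):
        return "parsed: " + utterance
    words = set(utterance.split())
    for cues, msg in FEEDBACK_CUES:
        if words & set(cues):
            return "out of coverage. " + msg
    return "out of coverage. Please rephrase."

print(recognize("book a flight to boston"))  # in-grammar, parses
print(recognize("i need to fly home soon"))  # triggers fallback feedback
```

The design point is that the strict grammar stays primary, while the fallback only fires on failure to steer the user back into coverage.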
… European Conference on …
In this paper we describe a multi-purpose Spoken Dialogue System platform associated with two distinct applications: a home intelligent environment and remote access to information databases. These applications differ substantially in content and possible uses but ...
2013
While Graphical User Interfaces (GUI) still represent the most common way of operating modern computing technology, Spoken Dialog Systems (SDS) have the potential to offer a more natural and intuitive mode of interaction. Even though some may say that existing speech recognition is neither reliable nor practical, the success of recent product releases such as Apple's Siri or Nuance's Dragon Drive suggests that language-based interaction is increasingly gaining acceptance. Yet, unlike applications for building GUIs, tools and frameworks that support the design, construction and maintenance of dialog systems are rare. A particular challenge of SDS design is the often complex integration of technologies. Systems usually consist of several components (e.g. speech recognition, language understanding, output generation, etc.), all of which require expertise to deploy them in a given application domain. This paper presents work in progress that aims at supporting this integration process. We propose a framework of components and describe how it may be used to prototype and gradually implement a spoken dialog system without requiring extensive domain expertise.
2005
Preface: The design and study of spoken dialog systems is a relatively young research field compared to other speech technologies such as recognition and synthesis. In recent years however, as these core technologies have improved, the field of spoken dialog systems has been generating increased interest both in the research community and in the industry. While most of the early work originated from the artificial intelligence community and addressed high-level issues such as discourse planning, the development and deployment of actual usable systems has led to the emergence of a wide range of new issues such as error handling in dialog, multimodal integration, or rapid system development. At the same time, researchers from a variety of disciplines including speech and language technologies, robotics, and human-computer interaction have started to bring their unique skills and backgrounds to bear on these issues. Unfortunately, while this richness and variety of interests constitute ...
Spoken language interaction with computers has become a practical possibility as a result of recent technological developments in the speech sciences. This paper reports on the use of CSLU's RAD (Rapid Application Developer) to provide practical experience for undergraduate students in the specification and development of spoken dialogue systems. Two groups of students were involved. The first group included students of linguistics, speech and language therapy, and communication; the second group included students of computational linguistics who had taken several courses in computing. The paper describes the use of the toolkit for students with these different degrees of competence in computing and reports on plans for future work with the toolkit.
2007
Interacting with machines that listen, understand and react to human stimuli has been for many years the holy grail of scientists across disciplines. In the last three decades scientists have made great contributions to the training, design and testing of conversational systems. In this paper we present the fundamentals of Spoken Dialog Systems (SDS), from Automatic Speech Recognition to Spoken Language Understanding and Text-to-Speech Synthesis. We report on the spoken dialog system architecture and experiments within a university help-desk application.
1999
The development of task-oriented spoken language dialog systems requires expertise in multiple domains, including speech recognition, natural spoken language understanding and generation, dialog management, and speech synthesis. The dialog manager is the core of a spoken language dialog system and makes use of multiple knowledge sources. In this contribution we report on our methodology for developing and testing different strategies for dialog management, drawing on our experience with several travel information tasks. In the LIMSI ARISE system for train travel information we have implemented a two-level mixed-initiative dialog strategy, in which the user has maximum freedom when all is going well and the system takes the initiative if problems are detected. The revised dialog strategy and error recovery mechanisms have resulted in a 5-10% increase in dialog success, depending upon the word error rate.

References (5)
- Starkie, Bradford C., 2002. Inferring Attribute Grammars with Structured Data for Natural Language Processing. In: Grammatical Inference and Applications, Sixth International Colloquium, ICGI-2002, pp. 1-4. Berlin: Springer Verlag.
- Starkie, Bradford C., 2001. Programming Spoken Dialogs Using Grammatical Inference. In: AI 2001: Advances in Artificial Intelligence, 14th Australian Joint Conference on Artificial Intelligence, pp. 449-460. Berlin: Springer Verlag.
- Starkie, Bradford C., 1999. A Method of Developing an Interactive System. International Patent WO 00/78022.
- Cypher, Allen, 1993. Watch What I Do: Programming by Demonstration. Cambridge, Massachusetts: MIT Press.
- TM - Trademark applied for by Telstra New Wave Pty Ltd.