The Management Of Ambiguities
2008, Definitions and Formalizations
Abstract
This chapter introduces and discusses a classification of methods for resolving ambiguities that arise during communication with visual languages. Ambiguities arise when users attach their own semantics to the information; sometimes their actions do not reflect their intentions, leading the system to an ambiguous or incorrect interpretation. This chapter deals with ambiguities related to the system's interpretation function and with methods to resolve them, which can be grouped into three main classes: prevention, a-posteriori resolution, and approximation resolution methods. Among prevention methods, the chapter distinguishes the procedural method, reduction, and the improvement of the expressive power of visual languages. The most widely used method for the a-posteriori resolution of ambiguities is mediation, which comprises repetition and choice. Finally, approximation resolution methods are presented to resolve ambiguities caused by the imprecision of the user's interaction.
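The mediation strategy sketched above (silently accept a clear interpretation, otherwise ask the user to choose, or fall back to repetition) can be illustrated as follows. This is a minimal sketch, not the chapter's implementation; the function name, the candidate representation, and the 0.8/0.1 thresholds are illustrative assumptions.

```python
def mediate(candidates, accept_threshold=0.8, margin=0.1):
    """candidates: list of (interpretation, confidence) pairs, unsorted.

    Returns one of the mediation outcomes discussed above:
    ("accept", x), ("choice", [x, ...]) or ("repetition", None).
    """
    if not candidates:
        return ("repetition", None)  # nothing recognized: ask the user to repeat
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best = ranked[0]
    second = ranked[1] if len(ranked) > 1 else (None, 0.0)
    # Accept silently only when the top candidate is both confident and
    # clearly separated from the runner-up.
    if best[1] >= accept_threshold and best[1] - second[1] >= margin:
        return ("accept", best[0])
    # Otherwise mediate by choice: show the user the plausible readings.
    return ("choice", [c[0] for c in ranked[:3]])

print(mediate([("rectangle", 0.92), ("square", 0.40)]))  # ('accept', 'rectangle')
print(mediate([("circle", 0.55), ("ellipse", 0.50)]))    # ('choice', ['circle', 'ellipse'])
print(mediate([]))                                       # ('repetition', None)
```

The margin check matters: a high absolute confidence alone is not enough to accept silently when a second interpretation is nearly as plausible.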
Related papers
Journal of Next Generation Information Technology, 2013
This paper deals with classifying ambiguities for Multimodal Languages. It builds on the classifications and methods proposed in the literature on ambiguities for Natural Language and Visual Languages, empirically defining an original classification of ambiguities for multimodal interaction from a linguistic perspective. This classification distinguishes between Semantic and Syntactic multimodal ambiguities and their subclasses, which are detected using a rule-based method implemented in a software module. Measured against the expected classification defined by human judgment, the experimental results achieved an accuracy of 94.6% for the semantic ambiguity classes and 92.1% for the syntactic ambiguity classes.
International Journal of Virtual Technology and Multimedia, 2010
Natural interaction approaches, such as sketch-based interaction, frequently introduce ambiguities in the interpretation on the computer side. This paper focuses on micro-level ambiguity, analysing the interpretation of the abstract geometrical elements point, polyline and polygon. From a dynamic perspective, ambiguities can be caused by inaccuracy in the user's drawing, approximation of the represented reality, and the user's deletion or retracing of fragments of the sketch. Moreover, this paper discusses methods to solve ambiguities by taking into account the spatial and temporal information that characterises the user's drawing, deleting and over-tracing process, according to experimental observations of users' behaviour.
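The point/polyline/polygon ambiguity mentioned above can be illustrated with a toy resolver that uses only spatial information from the stroke: its overall extent and how nearly its endpoints close. This is a hedged sketch, not the paper's method; the function name and the 5.0/0.1 thresholds are assumptions for illustration.

```python
import math

def interpret_stroke(points, point_extent=5.0, close_ratio=0.1):
    """Classify a drawn stroke (list of (x, y) samples) as point, polyline or polygon."""
    xs, ys = zip(*points)
    extent = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    if extent <= point_extent:
        return "point"  # tiny bounding box: the whole stroke reads as a point
    length = sum(math.dist(points[i], points[i + 1])
                 for i in range(len(points) - 1))
    gap = math.dist(points[0], points[-1])
    # A stroke whose endpoints nearly meet, relative to its length, reads as a polygon.
    return "polygon" if gap <= close_ratio * length else "polyline"

print(interpret_stroke([(0, 0), (1, 1), (2, 0)]))                              # point
print(interpret_stroke([(0, 0), (100, 0), (200, 50)]))                         # polyline
print(interpret_stroke([(0, 0), (100, 0), (100, 100), (0, 100), (0, 5)]))      # polygon
```

The last example shows why the closure test must be tolerant: users rarely end a polygon exactly on its starting point, so a small endpoint gap relative to stroke length is still read as closed.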
2009
Starting from a discussion of the problem of ambiguity and its pervasiveness in communication processes, this thesis dissertation addresses the problems of classifying and solving ambiguities for Multimodal Languages. The thesis gives an overview of the work proposed in the literature on ambiguities in Natural Language and Visual Languages and discusses some existing proposals on multimodal ambiguities. An original classification of multimodal ambiguities has been defined from a linguistic perspective, introducing the notions of Multimodal Grammar, Multimodal Sentence and Multimodal Language. Methods proposed in the literature for avoiding and detecting ambiguities are surveyed and grouped into prevention of ambiguities, a-posteriori resolution, and approximation resolution methods. The analysis of these methods has underlined the suitability of Hidden Markov Models (HMMs) for disambiguation processes. However, given the complexity of ambiguities in multimodal interaction, this thesis uses Hierarchical Hidden Markov Models to manage the Semantic and Syntactic classes of ambiguities for Multimodal Sentences; this choice makes it possible to operate at different levels, from the terminal elements up to the Multimodal Sentence. The proposed methods for classifying and solving multimodal ambiguities have been used to design and implement two software modules, whose experimental results show a good level of accuracy in the classification and solution of multimodal ambiguities.
I would like to thank my supervisor Patrizia, who has contributed to my growth as a researcher and to the quality of this thesis. I am grateful to Fernando for his support, his knowledge, and for teaching me the many skills required to complete my PhD. I would like to thank my parents, my brother and all my family for helping me to recognise and give priority to the things that are important in my life. I would like to thank Arianna and Gabriele, who have always been willing to hear my ideas, no matter how ill-formed, and to help me develop them. Finally, I thank my friends and the people of the Multi Media & Modal Laboratory at CNR for their encouragement.
IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 1997
A novel definition of visual languages allows a uniform approach to satisfying the needs of visual reasoning faced in visual human-computer interaction. The way the machine associates a computational meaning with an image, and conversely, the way it generates an image on the screen from a computation are formally described. A definition of visual sentence and of visual language as a set of visual sentences is discussed. A hierarchy of visual languages is derived in relation with the requirements for intelligible, manageable and trustable interaction between humans and computers.
The main reason for using visual languages is that they are often far more convenient to the user than traditional textual languages. Therefore, visual languages intended for use by both computers and humans ought to be designed and analyzed not only from the perspective of computational resource requirements but, equally importantly, from the perspective of languages that are cognitively usable and useful. Theoretical and practical research on visual languages needs to take into account the full context of a coupled human-computer system in which the visual language facilitates interactions between the computational and the cognitive parts. This implies that theoretical analyses ought to address issues of comprehension, reasoning and interaction in the cognitive realm as well as issues of visual program parsing, execution and feedback in the computational realm. The human aspect is crucial to visual languages, and therefore we advocate a correspondingly broadened scope of inquiry for visual language research. In this chapter we describe aspects of human use of visual languages that ought to be important considerations in visual language research and design, and summarize research from related fields such as software visualization and diagrammatic reasoning that addresses these issues. A framework consistent with the broadened scope of visual language research is proposed and used to categorize and discuss several formalizations and implemented systems. In the course of showing how a sample of current work fits into this framework, open issues and fruitful directions for future research are also identified.
We demonstrate several parallels between interactive verbal communication and graphical communication. Experiment 1 shows that, through interaction, partners' graphical representations converge and are refined, although the degree of refinement is dictated by the level of interaction. Experiment 2 shows that, through interaction, graphical representations lose their iconicity, taking on a more symbolic form. Again, this is dictated by the closeness of the interaction. Results are discussed both in terms of the evolution of writing systems and of applications that support interactive graphical communication.
International Journal of Computational Intelligence Systems
Ambiguities represent uncertainty, but they are also a fundamental topic of discussion for anyone interested in the interpretation of languages, and they actually serve communicative purposes both in human-human communication and in human-machine interaction. This paper addresses the need to deal with ambiguity issues in human-machine interaction. It identifies the meaningful features of multimodal ambiguities and proposes a dynamic classification method that characterizes them by learning, progressively adapting to the evolution of the interaction language by refining the existing classes or by identifying new ones. A new class of ambiguities can be added by identifying and validating the meaningful features that characterize it and distinguish it from the existing ones. The experimental results show an improvement in the classification rate when new ambiguity classes are considered.
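The idea of a classifier that refines existing classes and founds new ones as the interaction language evolves can be sketched with a simple incremental nearest-centroid scheme. This is an illustrative sketch under assumed simplifications, not the paper's method: ambiguities are reduced to numeric feature vectors, the class names and the distance threshold are invented, and the paper's feature-validation step is collapsed into a single distance test.

```python
import math

class DynamicClassifier:
    """Toy dynamic classifier: assign to the nearest class, or create a new one."""

    def __init__(self, new_class_distance=1.0):
        self.centroids = {}  # class label -> (centroid vector, instance count)
        self.threshold = new_class_distance

    def classify(self, features):
        """Return a class label, refining or extending the class set as we learn."""
        if self.centroids:
            label, (c, n) = min(self.centroids.items(),
                                key=lambda kv: math.dist(features, kv[1][0]))
            if math.dist(features, c) <= self.threshold:
                # Refine the existing class: move its centroid toward the instance.
                new_c = tuple((ci * n + fi) / (n + 1) for ci, fi in zip(c, features))
                self.centroids[label] = (new_c, n + 1)
                return label
        # A distant instance founds a new class (validation step simplified away).
        label = f"class_{len(self.centroids)}"
        self.centroids[label] = (tuple(features), 1)
        return label

clf = DynamicClassifier()
print(clf.classify((0.0, 0.0)))  # class_0 (first instance founds the first class)
print(clf.classify((0.1, 0.0)))  # class_0 (nearby: the class centroid is refined)
print(clf.classify((5.0, 5.0)))  # class_1 (far from all centroids: new class)
```

The design choice to update centroids incrementally mirrors the paper's emphasis on progressive adaptation: classes are not fixed up front but drift with the observed interaction language.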
1988
The starting point for our investigations and implementations was the human-factors oriented task of improving Man-Computer Interaction through the possibilities of multimodal communication, i.e. by combining modes such as Natural Language, Direct (graphical) Manipulation, and Formal Language. Systems have been developed and implemented on SUN workstations, in C and in PROLOG on UNIX, providing a structure of modules and a communication layer for combined (multimodal) interfaces. The DIS-QUE (Deictic Interaction System-Query Environment) systems represent two of these applications, allowing natural-language queries combined with graphical selection (deictic actions) on forms and technical drafts. From the human-factors, linguistics and (to a certain extent) graphics points of view, this kind of combined interaction is an interesting improvement.
Proceedings XIV Brazilian Symposium on Computer Graphics and Image Processing
This paper describes error handling and ambiguity in a class of applications organized around drawing and sketching, which we call Calligraphic Interfaces. While errors and imprecision are unavoidable features of human input, they have long been considered nuisances and problems to circumvent in user interface design. However, the transition away from WIMP interface styles and into continuous media featuring recognition requires that we take a fresh approach to errors and imprecision. We present new tools and interaction styles that allow designers to develop error-tolerant and simpler interaction dialogues.
