Working Together with Computers
2012, Multi-Modal Advancements
https://doi.org/10.4018/978-1-4666-0954-9.CH002…
3 pages
Abstract
The objective of this chapter is twofold. On the one hand, it introduces the various components of Human-Computer Interaction (HCI) when HCI is modeled as a process of cognition; on the other hand, it identifies the representations and mechanisms required to develop a general framework for collaborative HCI. The specific problem-solving skills and problem-related knowledge acquired by interactive agents should be separated from the general skills and knowledge retained for future use. This separation leads to a distributed deep interaction layer consisting of many cognitive processes. A three-layer architecture is proposed for designing collaborative HCI with multiple human and computational agents.
Key takeaways
- The proposed framework introduces a three-layer architecture for collaborative Human Computer Interaction (HCI).
- Separation of problem-solving skills from general knowledge enhances future interactions in HCI systems.
- Dynamic environments influence the interaction process, affecting both immediate and long-term goals.
- HCI extends beyond user-computer interaction to include cognitive processes and environmental context.
- The text underscores the need for continuous knowledge acquisition and process adaptation in HCI.
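As a rough illustration of the architecture summarised in the abstract, the following Python sketch shows one way a three-layer collaborative HCI system, with task-specific knowledge kept separate from general reusable knowledge and a distributed interaction layer of cognitive processes, could be organised. All class and method names (KnowledgeBase, CognitiveProcess, InteractionLayer, CollaborativeHCI) are hypothetical assumptions for illustration only and do not come from the chapter.

```python
# Hypothetical sketch of a three-layer collaborative HCI system:
# a knowledge layer separating task-specific from general knowledge,
# a distributed interaction layer of cognitive processes, and a
# top-level coordinator for multiple human and computational agents.
# Names and structure are illustrative assumptions, not the chapter's design.

from dataclasses import dataclass, field


@dataclass
class KnowledgeBase:
    """Knowledge layer: keeps problem-specific facts apart from
    general skills/knowledge retained for future interactions."""
    task_specific: dict = field(default_factory=dict)
    general: dict = field(default_factory=dict)

    def retain_general(self, key, value):
        # General knowledge survives beyond the current problem.
        self.general[key] = value


class CognitiveProcess:
    """One cognitive process in the distributed interaction layer."""

    def __init__(self, agent_name: str, knowledge: KnowledgeBase):
        self.agent_name = agent_name
        self.knowledge = knowledge

    def perceive(self, observation):
        # Problem-specific observations stay in the task-specific store.
        self.knowledge.task_specific[self.agent_name] = observation

    def act(self):
        # Decisions may draw on both stores; here we simply report them.
        return {
            "agent": self.agent_name,
            "task_specific": self.knowledge.task_specific.get(self.agent_name),
            "general_used": list(self.knowledge.general),
        }


class InteractionLayer:
    """Interaction layer: many cognitive processes, one per agent."""

    def __init__(self, agents, knowledge: KnowledgeBase):
        self.processes = [CognitiveProcess(a, knowledge) for a in agents]

    def step(self, observations):
        for process, observation in zip(self.processes, observations):
            process.perceive(observation)
        return [process.act() for process in self.processes]


class CollaborativeHCI:
    """Top layer: coordinates the interaction layer over a shared
    knowledge base for multiple human and computational agents."""

    def __init__(self, agents):
        self.knowledge = KnowledgeBase()
        self.interaction = InteractionLayer(agents, self.knowledge)

    def handle(self, observations):
        return self.interaction.step(observations)


if __name__ == "__main__":
    system = CollaborativeHCI(["human_1", "software_agent_1"])
    system.knowledge.retain_general("dialogue_convention", "turn-taking")
    print(system.handle(["clicked 'submit'", "form validation result"]))
```

The shared KnowledgeBase is the piece that models the separation the abstract emphasises: task-specific entries can be discarded after a problem is solved, while the general store persists across interactions.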
Related papers
1999
Software systems are not just mathematical structures. The majority are also cognitive artefacts that rely on the ability of their users to understand and interpret information provided via an interface, and to act on that information using actions provided by the system. The design of an interactive system must thus account for both the technical requirements of the given domain, and the cognitive abilities of the user. However, the disciplines involved (computer science and psychology) traditionally operate with quite different methods and techniques, making it difficult to integrate their respective insights into system design. This paper reports on work that is creating a framework for interaction that encompasses both the description of complex software systems and the cognitive resources needed to operate those systems. An interface for supporting gestural interaction is used to illustrate the approach. The paper concludes with an assessment of the prospects for this kind of integrative modeling and sets out key areas for future progress.
2008
We discuss the design of the Intermediary Agent's brain, the control module of an embodied conversational virtual peer in a simulation game aimed at providing learning experiences regarding the dynamics of collaboration at the inter-personal (IP) level. We derive the overall aims of the game from theoretical foundations in collaboration theory and pedagogical theory, together with the related requirements for the virtual peer; present the overall modular design of the system; and then detail the design perspectives and the interplay of the related operationalised concepts leading to the control architecture of the Intermediary Agent, which is realised as a simple cognitive appraisal process driven by the direct and indirect effects of the mission-oriented and social interactions of players and agent on the agent's level of trust in its human peers. We conclude with coverage of related work and insights from first deployment experiences.
2004
In the past two decades, the underlying interaction model for most software use has arguably remained unchanged and is little more than an expedient design based on certain superficial features of face-to-face communication that not only fails to accommodate an important range of users’ native interaction skills, but also devotes few computational resources to a useable artificial understanding of the process, progress, and products of the implied collaboration. This short paper examines how principles at work in people’s collaborative activities with each other play out in software use and takes the position that computational implementation of these fundamental human interaction concepts continues to be a relevant agenda for the artificial intelligence and human-computer interaction communities.
A Guided Tour of Artificial Intelligence Research, 2020
Human-Computer Interaction (HCI) and Artificial Intelligence (AI) are two disciplines that have followed parallel trajectories for about four decades. They also complement each other and overlap in various problem-rich domains. This chapter is far from exhaustive, but provides a representative story of how HCI and AI have cross-fertilised each other since their inception. It reviews the following domains: intelligent user interfaces, and more specifically conversational animated affective agents; the capitalisation, formulation, and use of ergonomic knowledge for the design and evaluation of interactive systems; and the synergy between visualisation and data mining.
Many technical work places, such as laboratories or test beds, are the setting for well-defined processes requiring both high precision and extensive documentation, to ensure accuracy and support accountability that often is required by law, science, or both. In this type of scenario, it is desirable to delegate certain routine tasks, such as documentation or preparatory next steps, to some sort of automated assistant, in order to increase precision and reduce the required amount of manual labor in one fell swoop. At the same time, this automated assistant should be able to interact adequately with the human worker, to ensure that the human worker receives exactly the kind of support that is required in a certain context. To achieve this, we introduce a multilayer architecture for cognitive systems that structures the system's computation and reasoning across well-defined levels of abstraction, from mass signal processing up to organization-wide, intention-driven reasoning. By p...
2011
The HuComTech project aims at developing a theory of multimodal human-computer interaction linking knowledge about human-human interaction to technological implementation. The purpose is to contribute to a more efficient and human-like human-computer interaction system by defining the main structural elements of communication, identifying their markers, and defining their alignment with other markers in a multimodal environment. The novelty of the proposed system is its bidirectionality (both analysis and synthesis). This advantage is utilised in a new multimodal corpus and database.
Control Engineering Practice, 1997
Human-machine interfaces for cooperative supervision and control by several human users, either in control rooms or in group meetings, are dealt with. The information flow between the different human users and their overlapping information needs are explained. The example of a cement plant illustrates this in more detail. Cognitive science concepts for supporting visual and mental coherence, as well as multi-media, hypertext, and CSCW (computer-supported cooperative work) technologies, are discussed with respect to their usage in multi-human machine interfaces. The design process for these interfaces is outlined with emphasis on the different design stages, user participation, and the possibility of knowledge-based design support. Some ideas for the conceptual structure of multi-human machine interfaces are also presented.
Conference companion on Human factors in computing systems - CHI '95, 1995
This workshop will focus on appropriate use of cognitive models for the analysis and solution of HCI problems.
IEEE Pervasive Computing
We propose a hierarchical framework for collaborative intelligent systems. This framework organizes research challenges based on the nature of the collaborative activity and the information that must be shared, with each level building on capabilities provided by lower levels. We review research paradigms at each level, with a description of classical engineering-based approaches and modern alternatives based on machine learning, illustrated with a running example using a hypothetical personal service robot. We discuss cross-cutting issues that occur at all levels, focusing on the problem of communicating and sharing comprehension, the role of explanation and the social nature of collaboration. We conclude with a summary of research challenges and a discussion of the potential for economic and societal impact provided by technologies that enhance human abilities and empower people and society through collaboration with Intelligent Systems.