Augmented Intelligence
2020, Advances in computational intelligence and robotics book series
https://doi.org/10.4018/978-1-7998-2112-0.CH001
Abstract
Smart systems make decisions by incorporating data from different sensors in order to control and take smart actions. In this context, smart actions consist of augmenting users' actions and/or decisions through devices or additional information. Those actions could, and should, differ from user to user, depending on each user's characteristics and needs. To obtain smart actions adapted to the users, it is necessary to detect the user's individualities on-the-fly. This chapter focuses on how augmented intelligence can leverage smart systems, addressing: (a) the definitions of, and relations among, artificial intelligence, augmented intelligence, and smart systems, namely the state of the art on extracting human features that can be used to develop augmented intelligent systems (using only computer vision methods); (b) a brief explanation of a "describing people integrated framework", a framework to extract user information automatically without any user intervention; and (c) a description of several implemented smart systems, including a future work perspective.
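The idea of adapting smart actions to individualities detected on-the-fly can be illustrated with a minimal, hypothetical sketch. All names, attributes, and thresholds below are illustrative assumptions, not taken from the chapter; in a real system the user profile would come from a computer-vision pipeline rather than being constructed by hand.

```python
# Hypothetical sketch: map user features detected on-the-fly (e.g. by a
# vision pipeline) to concrete interface adaptations. All names and
# thresholds are illustrative assumptions, not from the chapter.

from dataclasses import dataclass


@dataclass
class UserProfile:
    """Features a vision pipeline might estimate without user intervention."""
    estimated_age: int
    wears_glasses: bool


def adapt_interface(profile: UserProfile) -> dict:
    """Turn detected individualities into interface settings."""
    settings = {"font_scale": 1.0, "voice_prompts": False, "simplified_menu": False}
    if profile.estimated_age >= 65:
        # Seniors: larger text, spoken feedback, fewer menu levels.
        settings.update(font_scale=1.5, voice_prompts=True, simplified_menu=True)
    elif profile.estimated_age <= 12:
        # Children: simplified navigation.
        settings["simplified_menu"] = True
    if profile.wears_glasses:
        # Never shrink text below the glasses-friendly minimum.
        settings["font_scale"] = max(settings["font_scale"], 1.25)
    return settings
```

For example, `adapt_interface(UserProfile(estimated_age=70, wears_glasses=False))` would enable voice prompts and a simplified menu, while a young adult without glasses would keep the defaults.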
FAQs
What are the emerging trends in augmented intelligence and autonomous systems?
The paper identifies trends such as autonomous things, augmented analytics, and edge computing that significantly enhance operational efficiency in various sectors.
How do adaptive interfaces improve user interaction with augmented intelligence?
Adaptive interfaces personalize user interactions based on characteristics and needs, improving usability for diverse user groups, like children or seniors.
What does augmented intelligence aim to achieve in human-computer interaction?
Augmented intelligence seeks to enhance human decision-making by leveraging AI to support user actions, rather than replacing them.
What are the ethical challenges surrounding the use of AI in smart systems?
The study highlights concerns about user consent for data collection and alleviating fears related to AI, which impact its acceptance.
How does augmented analytics differ from traditional data analysis methods?
Augmented analytics employs algorithms to evaluate multiple hypotheses, offering a broader understanding of data patterns compared to traditional analysis.
Related papers
The long-term goal of artificial intelligence (AI) is to make machines learn and think like human beings. Due to the high levels of uncertainty and vulnerability in human life and the open-ended nature of problems that humans are facing, no matter how intelligent machines are, they are unable to completely replace humans. Therefore, it is necessary to introduce human cognitive capabilities or human-like cognitive models into AI systems to develop a new form of AI, that is, hybrid-augmented intelligence. This form of AI or machine intelligence is a feasible and important developing model. Hybrid-augmented intelligence can be divided into two basic models: one is human-in-the-loop augmented intelligence with human-computer collaboration, and the other is cognitive computing based augmented intelligence, in which a cognitive model is embedded in the machine learning system. This survey describes a basic framework for human-computer collaborative hybrid-augmented intelligence, and the basic elements of hybrid-augmented intelligence based on cognitive computing. These elements include intuitive reasoning, causal models, evolution of memory and knowledge, especially the role and basic principles of intuitive reasoning for complex problem solving, and the cognitive learning framework for visual scene understanding based on memory and reasoning. Several typical applications of hybrid-augmented intelligence in related fields are given.
IEEE Access, 2022
Personas have successfully supported the development of classical user interfaces for more than two decades by mapping users' mental models to specific contexts. The rapid proliferation of Artificial Intelligence (AI) applications makes it necessary to create new approaches for future human-AI interfaces. Human-AI interfaces differ from classical human-computer interfaces in many ways, such as gaining some degree of human-like cognitive, self-executing, and self-adaptive capabilities and autonomy, and generating unexpected outputs that require non-deterministic interactions. Moreover, the most successful AI approaches are so-called "black box" systems, where the technology and the machine learning process are opaque to the user and the AI output is far from intuitive. This work shows how the personas method can be adapted to support the development of human-centered AI applications, demonstrated here in a medical context. This work is, to our knowledge, the first to provide personas for AI using an openly available Personas for AI toolbox. The toolbox contains guidelines and material supporting persona development for AI as well as templates and pictures for persona visualisation. It is ready to use and freely available to the international research and development community. Additionally, an example from medical AI is provided as a best practice use case. This work is intended to help foster the development of novel human-AI interfaces that will be urgently needed in the near future.
Proceedings of the 8th international conference on Multimodal interfaces - ICMI '06, 2006
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next generation computing, which we will call human computing, should be about anticipatory user interfaces that should be human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far are we from enabling computers to understand human behavior.
Lecture Notes in Computer Science, 2013
The field of Augmented Cognition (AugCog) has evolved over the past decade from its origins in the Defense Advanced Research Projects Agency (DARPA)-funded research program, emphasizing modulation of closed-loop human-computer interactions within operational environments, to address a broader scope of domains, contexts, and science and technology (S&T) challenges. Among these are challenges related to the underlying theoretical and empirical research questions, as well as the application of advances in the field within contexts such as training and education. This paper summarizes a series of ongoing research and development (R&D) efforts aimed at applying an AugCog-inspired framework to enhance both human-technology and human-human interactions within a variety of training and operational domains.
2003
The Assisted Cognition Project at the University of Washington develops novel representation and reasoning techniques in order to dramatically advance the capacity of ubiquitous computing environments to augment and enhance human capabilities, with a particular emphasis on increasing the independence of people suffering from cognitive limitations. Assisted Cognition systems (i) sense aspects of an individual's location and environment, both outdoors and at home, relying on a wide range of sensors such as global positioning systems (GPS), active badges, motion detectors, and other ubiquitous computing infrastructure; (ii) learn to interpret patterns of everyday behavior, and to recognize user errors, confusion, and distress, using techniques from state estimation, plan recognition, and machine learning; and (iii) offer proactive help at appropriate times to users through prompts, warnings, and other kinds of interventions. This overview focuses on the key problems in computer science and engineering that lay the technical foundations for Assisted Cognition. After briefly describing the broad, long-term benefits to society from work in this area, we will define the scientific challenges we address, review relevant previous results, and discuss our specific current and future research.
arXiv (Cornell University), 2021
We introduce Platform for Situated Intelligence, an open-source framework created to support the rapid development and study of multimodal, integrative-AI systems. The framework provides infrastructure for sensing, fusing, and making inferences from temporal streams of data across different modalities, a set of tools that enable visualization and debugging, and an ecosystem of components that encapsulate a variety of perception and processing technologies. These assets jointly provide the means for rapidly constructing and refining multimodal, integrative-AI systems, while retaining the efficiency and performance characteristics required for deployment in open-world settings. Recent advances in machine learning have led to significant improvements on numerous perceptual tasks [20]. For instance, in the span of a decade, error rates on object detection tasks have improved from ~50% in 2010 to ~13% in 2020 [2, 5, 48, 53]. Similarly, error rates on conversational speech have dropped dramatically [56, 49, 3]. Large strides have been made in machine translation [55], reading comprehension and text generation [16], recommender systems [58], dexterous robot control [7], and mastering competitive games [50]. Despite the steady progress in perceptual and control technologies, current AI models, and larger systems that incorporate them, provide singular, narrow wedges of expertise. A promising pathway to developing more general AI capabilities centers on bringing together and coordinating a constellation of AI competencies. Such integrative approaches can be employed to enable AI systems to perceive key aspects of the physical world and to make inferences across several distinct streams of data, including perceptual signals of the form that people depend on to assess situations and take actions.
From mobile robots and cashier-less shopping experiences, to self-driving cars, intelligent meeting rooms, and factory floor assistants, computer systems that operate in the open world need to perceive their surroundings through multiple sensors, make sense of what is going on moment by moment in their environment, and decide how to act in a timely and appropriate manner. While many applications of AI will be autonomous, a particularly important, yet challenging opportunity for AI is developing intelligent systems that can collaborate in a natural manner with people. Fluid human-AI interaction will require AI systems to sense, infer, and coordinate with people with the ease, speed, and effectiveness that people expect when working with each other. Another important capability is formulating and leveraging a shared understanding or grounding with people about the task at hand, reminiscent of the shared understandings that people assume when they collaborate with one another. Such human-centered capabilities will hinge on endowing AI systems with multimodal capabilities that enable them to see, listen, and speak, and to understand critical aspects of language, gestures, and the surrounding physical environment.
2018
All artificial intelligence (AI) systems make errors. These errors are unexpected and often differ from typical human mistakes ("non-human" errors). AI errors should be corrected without damaging existing skills and, hopefully, without requiring direct human expertise. This paper presents an initial summary report of a project taking a new and systematic approach to improving the intellectual effectiveness of individual AIs through communities of AIs. We combine ideas from learning in heterogeneous multi-agent systems with new and original mathematical approaches for non-iterative correction of errors in legacy AI systems. The mathematical foundations of non-destructive AI correction are presented and a series of new stochastic separation theorems is proven. These theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. They demonstrate that in high dimensions and even for exponentially larg...
2011
Augmented Reality (AR) is a technology which provides the user with a real-time 3D enhanced perception of a physical environment by adding virtual elements (virtual scenery, information about the surroundings, or other contextual information) and is also capable of hiding or replacing real structures. As Augmented Reality applications become more advanced, the ways the technology can be viably used are increasing. Augmented Reality has been used for gaming several times with varying results. AR systems are seen by some as an important part of the ambient intelligence landscape. Therefore, the authors present several types of AR augmentation applications in the domestic, industrial, scientific, medicinal, and military sectors which may benefit future ambient intelligent systems.
2013
an inaugural event on advances in fundamental, as well as practical and experimental, aspects of intelligent systems and applications. The information surrounding us is not only overwhelming but also subject to the limitations of systems and applications, including specialized devices. The diversity of systems and the spectrum of situations make it almost impossible for an end-user to handle the complexity of the challenges. Embedding intelligence in systems and applications seems a reasonable way to move some complex tasks away from the user. However, this approach requires fundamental changes in the design of systems and applications and of their interfaces, and requires specific cognitive and collaborative mechanisms. Intelligence has become a key paradigm, and its specific use takes various forms according to the technology or domain a system or application belongs to. We take here the opportunity to warmly thank all the members of the INTELLI 2013 Technical Program Comm...
