Much of present AI research is based on the assumption of computational systems with infinite resources, an assumption that is either explicitly stated or implicit in the work as researchers ignore the fact that most real-world tasks must be finished within certain time limits, and that it is the role of intelligence to deal effectively with such limitations. Expecting AI systems to give equal treatment to every piece of data they encounter is not appropriate in most real-world cases; available resources are likely to be insufficient for keeping up with available data in even moderately complex environments. Even if sufficient resources are available, they might be put to better use than blindly applying them to every possible piece of data. Finding inspiration for more intelligent resource management schemes is not hard; we need look no further than ourselves. This paper explores what human attention has to offer in terms of ideas and concepts for implementing intelligent resource management, and how the resulting principles can be extended to levels beyond human attention. We also discuss some ideas for the principles behind attention mechanisms for artificial (general) intelligences.
One of the original goals of artificial intelligence (AI) research was to create machines with very general cognitive capabilities and a relatively high level of autonomy. It has taken the field longer than many had expected to achieve even a fraction of this goal; the community has focused on building specific, targeted cognitive processes in isolation, and as yet no system exists that integrates a broad range of capabilities or presents a general solution to autonomous acquisition of a large set of skills. Among the reasons for this are the highly limited machine learning and adaptation techniques available, and the inherent complexity of integrating numerous cognitive and learning capabilities in a coherent architecture. In this paper we review selected systems and architectures built expressly to address integrated skills. We highlight principles and features of these systems that seem promising for creating generally intelligent systems with some level of autonomy, and discuss them in the context of the development of future cognitive architectures. Autonomy is a key property for any system to be considered generally intelligent, in our view; we use this concept as an organizing principle for comparing the reviewed systems. Features that remain largely unaddressed in present research, but seem nevertheless necessary for such efforts to succeed, are also discussed.
In the domain of intelligent systems, the management of mental resources is typically called "attention". Attention exists because all moderately complex environments - and the real-world environments of everyday life in particular - are a source of vastly more information than can be processed in real-time by the available cognitive resources of any known intelligence, human or otherwise. General-purpose artificial intelligence (AI) systems operating with limited resources under time constraints in such environments must select carefully which information will be processed and which will be ignored. Even in the (rare) cases where sufficient resources may be available, attention could help make better use of them. All real-world tasks come with time limits, and managing these is a key part of the role of intelligence. Many AI researchers ignore this fact. As a result, the majority of existing AI architectures are incorrectly based on an (explicit or implicit) assumption of infinite or sufficient computational resources. Attention has not yet been recognized as a key cognitive process of AI systems, and in particular not of artificial general intelligence systems. This dissertation argues for the absolute necessity of an attention mechanism for artificial general intelligence (AGI) architectures. We examine several issues related to attention and resource management, review prior work on these topics in cognitive psychology and AI, and present a design for a general attention mechanism for AGI systems. The proposed design - inspired by constructivist AI methodologies - aims at architectural and modal independence, and comprehensively addresses and integrates all principal factors associated with attention to date.
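The abstract does not reproduce the dissertation's design, but the interplay of bottom-up and top-down attentional factors it alludes to can be illustrated with a minimal sketch. Everything below (the `DataItem` fields, the weights, the priority formula) is an illustrative assumption, not the dissertation's actual mechanism.

```python
from dataclasses import dataclass

# Hypothetical data item: 'novelty' stands in for a bottom-up factor
# (how unexpected the item is), 'goal_relevance' for a top-down factor
# (how closely it relates to the system's currently active goals).
@dataclass
class DataItem:
    name: str
    novelty: float         # 0.0 .. 1.0
    goal_relevance: float  # 0.0 .. 1.0

def priority(item: DataItem, w_bottom_up: float = 0.4, w_top_down: float = 0.6) -> float:
    """Blend bottom-up and top-down factors into one priority score."""
    return w_bottom_up * item.novelty + w_top_down * item.goal_relevance

items = [
    DataItem("loud-noise", novelty=0.9, goal_relevance=0.1),
    DataItem("task-related-message", novelty=0.2, goal_relevance=0.9),
    DataItem("background-chatter", novelty=0.1, goal_relevance=0.05),
]

# Attend to items in priority order; a real system would also adapt the
# weights over time and feed processing outcomes back into its models.
for item in sorted(items, key=priority, reverse=True):
    print(f"{item.name}: {priority(item):.2f}")
```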
In this paper we consider the issue of endowing an AGI system with decision-making capabilities for operation in real-world environments or those of comparable complexity. While action selection is a critical function of any AGI system operating in the real world, very few applicable theories or methodologies exist to support such functionality when all necessary factors are taken into account. Decision theory and standard search techniques require several debilitating simplifications, including determinism, discrete state spaces, exhaustive evaluation of all possible future actions, and a coarse-grained representation of time. Due to the stochastic and continuous nature of real-world environments and inherent time constraints, direct application of decision-making methodologies from traditional decision theory and search is not a viable option. We present predictive heuristics as a way to bridge the gap between the simplifications of decision theory and search, and the complexity of real-world environments.
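The abstract does not spell out the predictive heuristics algorithmically; the sketch below only illustrates the general direction under stated assumptions: a small set of candidate actions is evaluated with a (here, toy) forward model under an explicit time budget, and the system commits to the best action found so far rather than searching exhaustively. The function names and the value model are hypothetical.

```python
import random
import time

# Hypothetical forward model: samples possible outcomes of an action from
# the current state. A real system would learn such models from experience.
def predict_outcomes(state, action, n_samples=10):
    return [state + action + random.gauss(0, 0.1) for _ in range(n_samples)]

# Hypothetical value estimate: closeness of an outcome to a goal state.
def estimate_value(outcome, goal=1.0):
    return -abs(goal - outcome)

def select_action(state, candidate_actions, time_budget_s=0.01):
    """Evaluate candidates until the time budget runs out, then commit
    to the best action seen so far (anytime behaviour)."""
    deadline = time.monotonic() + time_budget_s
    best_action, best_value = None, float("-inf")
    for action in candidate_actions:
        if time.monotonic() > deadline:
            break  # out of time: act on what has been evaluated
        outcomes = predict_outcomes(state, action)
        expected = sum(estimate_value(o) for o in outcomes) / len(outcomes)
        if expected > best_value:
            best_action, best_value = action, expected
    return best_action

print(select_action(state=0.0, candidate_actions=[0.2, 0.5, 0.9, 1.5]))
```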
International Journal of Computer Science and Artificial Intelligence, 2014
In the domain of intelligent systems, the management of system resources is typically called "attention". Attention mechanisms exist because even environments of moderate complexity are a source of vastly more information than the available cognitive resources of any known intelligence can handle. Cognitive resource management has not been of much concern in artificial intelligence (AI) work that builds relatively simple systems for particular targeted problems. For systems capable of a wide range of actions in complex environments, explicit management of time and cognitive resources is not only useful, it is a necessity. We have designed a general attention mechanism for intelligent systems. While a full implementation remains to be realized, the architectural principles on which our work rests have already been implemented. Here we examine some prior work that we find relevant to engineered systems, describe our design, and explain how it derives from constructivist AI principles.
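As a complement to the prioritization sketch above, the resource-management aspect emphasized here can be pictured as a per-cycle budget: only as much information is processed as the budget allows, and the remainder is deferred or discarded. The cost and budget model and all names below are illustrative assumptions, not the authors' design.

```python
import heapq

def attend(items, budget):
    """items: iterable of (priority, cost, payload) tuples.
    Greedily process the highest-priority items whose cost fits the
    remaining budget; return what was processed and what was deferred."""
    # heapq is a min-heap, so negate priorities to pop the highest first.
    heap = [(-priority, cost, payload) for priority, cost, payload in items]
    heapq.heapify(heap)
    processed, deferred = [], []
    while heap:
        _, cost, payload = heapq.heappop(heap)
        if cost <= budget:
            budget -= cost
            processed.append(payload)
        else:
            deferred.append(payload)  # a real system might revisit or drop these
    return processed, deferred

done, skipped = attend(
    [(0.9, 3, "salient percept"), (0.6, 5, "routine update"), (0.2, 4, "noise")],
    budget=7,
)
print("processed:", done, "| deferred:", skipped)
```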
An important part of human intelligence is the ability to use language. Humans learn how to use language in a society of language users, which is probably the most effective way to learn a language from the ground up. Principles that might allow artificial agents to learn language this way are not known at present. Here we present a framework which begins to address this challenge. Our auto-catalytic, endogenous, reflective architecture (AERA) supports the creation of agents that can learn natural language by observation. We present results from two experiments where our S1 agent learns human communication by observing two humans interacting in a real-time mock television interview, using gesture and situated language. Results show that S1 can learn complex multimodal language and multimodal communicative acts, using a vocabulary of 100 words with numerous sentence formats, by observing unscripted interaction between the humans, with no grammar provided to it a priori, and only high-level information about the format of the human interaction in the form of high-level goals of the interviewer and interviewee and a small ontology. The agent learns the pragmatics, semantics, and syntax of complex sentences spoken by the human subjects on the topic of recycling of objects such as aluminum cans, glass bottles, plastic, and wood, as well as the use of manual deictic reference and anaphora.
Four principal features of autonomous control systems are left both unaddressed and unaddressable by present-day engineering methodologies: 1. The ability to operate effectively in environments that are only partially known beforehand at design time; 2. A level of generality that allows a system to re-assess and re-define the fulfillment of its mission in light of unexpected constraints or other unforeseen changes in the environment; 3. The ability to operate effectively in environments of significant complexity; and 4. The ability to degrade gracefully - to continue striving to achieve its main goals when resources become scarce, or in light of other expected or unexpected constraining factors that impede its progress. We describe new methodological and engineering principles for addressing these shortcomings, which we have used to design a machine that becomes increasingly better at behaving in underspecified circumstances, in a goal-directed way, on the job, by modeling itself and its environment as experience accumulates. Based on principles of autocatalysis, endogeny, and reflectivity, the work provides an architectural blueprint for constructing systems with high levels of operational autonomy in underspecified circumstances, starting from only a small amount of designer-specified code - a seed. Using value-driven dynamic priority scheduling to control the parallel execution of a vast number of lines of reasoning, the system accumulates increasingly useful models of its experience, resulting in recursive self-improvement that can be autonomously sustained after the machine leaves the lab, within the boundaries imposed by its designers. A prototype system has been implemented and demonstrated to learn a complex real-world task - real-time multimodal dialogue with humans - by on-line observation. Our work presents solutions to several challenges that must be solved for achieving artificial general intelligence.
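The abstract mentions value-driven dynamic priority scheduling of many parallel lines of reasoning but gives no implementation details. A minimal, single-threaded reading of that idea, with priorities recomputed from estimated value and deadline urgency at every step and expired jobs dropped (one form of graceful degradation), might look like the following sketch; all names and the priority formula are assumptions for illustration only.

```python
import time

class Job:
    """One line of reasoning, decomposed into small work slices."""
    def __init__(self, name, value, deadline, slices):
        self.name = name          # illustrative label
        self.value = value        # estimated utility of completing the job
        self.deadline = deadline  # absolute time by which it should finish
        self.slices = slices      # remaining units of work

    def priority(self, now):
        # Urgency grows as the deadline approaches; expired jobs get zero priority.
        remaining = self.deadline - now
        return 0.0 if remaining <= 0 else self.value / remaining

def schedule(jobs, slice_s=0.01):
    """Each step, re-rank pending jobs by current priority and run one slice
    of the best one; jobs whose deadline has passed are dropped."""
    while jobs:
        now = time.monotonic()
        jobs.sort(key=lambda j: j.priority(now), reverse=True)
        job = jobs[0]
        if job.priority(now) == 0.0:
            jobs.remove(job)   # too late to be useful: drop it
            continue
        time.sleep(slice_s)    # stand-in for one slice of reasoning work
        job.slices -= 1
        if job.slices == 0:
            print(f"finished {job.name}")
            jobs.remove(job)

now = time.monotonic()
schedule([Job("model-revision", value=5.0, deadline=now + 0.5, slices=3),
          Job("goal-pursuit", value=2.0, deadline=now + 2.0, slices=3)])
```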
Many existing AGI architectures are based on the assumption of infinite computational resources, as researchers ignore the fact that real-world tasks have time limits, and that managing these is a key part of the role of intelligence. In the domain of intelligent systems, the management of system resources is typically called "attention". Attention mechanisms are necessary because all moderately complex environments are likely to be the source of vastly more information than could be processed in real-time by an intelligence's available cognitive resources. Even if sufficient resources were available, attention could help make better use of them. We argue that attentional mechanisms are not merely nice to have; for AGI architectures they are an absolute necessity. We examine ideas and concepts from cognitive psychology for creating intelligent resource management mechanisms and how these can be applied to engineered systems. We present a design for a general attention mechanism intended for implementation in AGI architectures.