Learning mechanisms for “Conscious” Software Agents
Abstract
Here we describe mechanisms for a half-dozen or so different types of learning to be implemented in “conscious” software agents, and speculate briefly on their implications for human learning and development. In particular, we're concerned with conceptual learning and behavioral learning in two “conscious” software agents, namely, CMattie and IDA. We offer computational mechanisms for such learning.
Key takeaways
- The text outlines learning mechanisms for 'conscious' software agents CMattie and IDA.
- IDA requires a three-phase development process to acquire complex domain knowledge.
- Both agents utilize sparse distributed memory for associative declarative learning.
- Conceptual learning occurs through internal interaction and case-based reasoning with human agents.
- Learning mechanisms in these agents suggest parallels to human metacognition and emotional influence.
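The takeaways above attribute associative declarative learning in both agents to sparse distributed memory. As a rough illustration of how such a memory behaves, here is a minimal Kanerva-style sketch: fixed random "hard locations," bipolar bit counters updated on write, and thresholded counter sums on read. The dimensions, radius, and class names are illustrative assumptions, not the agents' actual implementation.

```python
import numpy as np

class SDM:
    """Minimal sketch of a Kanerva-style sparse distributed memory.
    Parameters (n_locations, dim, radius) are illustrative, not from
    the paper."""

    def __init__(self, n_locations=1000, dim=256, radius=116, seed=0):
        rng = np.random.default_rng(seed)
        # Hard locations: fixed random binary addresses.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        # Bit counters accumulate writes as bipolar (+1/-1) increments.
        self.counters = np.zeros((n_locations, dim), dtype=np.int64)
        self.radius = radius

    def _activate(self, address):
        # Activate every hard location within Hamming radius of the cue.
        dist = np.sum(self.addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, word):
        active = self._activate(address)
        self.counters[active] += 2 * word - 1  # bit 1 -> +1, bit 0 -> -1

    def read(self, address):
        active = self._activate(address)
        sums = self.counters[active].sum(axis=0)
        return (sums > 0).astype(int)
```

Used autoassociatively (a pattern stored at its own address), the memory recalls the stored word from a cue, which is the behavior the declarative-learning claim relies on.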
Related papers
1999
Realizing “Consciousness” In Software Agents describes the first design and implementation of Bernard Baars' global workspace theory. Global workspace theory is a leading psychological model of human consciousness. The “Conscious” Software Research Group at the University of Memphis has labeled agents which implement this theory as “conscious” software agents. As background material for the reader, this work also discusses agents, other existing cognitive architectures, and current software reuse methodology. This dissertation describes in depth the “Conscious” Agent framework (ConAg), developed by this author. ConAg is a reusable software framework that carefully follows software reuse methodology. ConAg provides a solid foundation for building “conscious” software agents, and in particular, “consciousness” within these agents. Two agents built with ConAg are described, as well as the framework's structure. It is beyond this work's scope to address whet...
… of the Third International Conference on …, 2000
Lecture Notes in Computer Science, 1996
New information technology (IT) is a major challenge to human adaptability. A crucial issue for the integration of new IT in the education system is the enhancement of its role of preserving cultural heritage, improving knowledge transferal and social integration. Software agents are computer programs that can be used to improve learning. Learning is described by five attributes: pleasure, learning how to learn, efficiency, allowing for errors in order to learn, and memory retention. These attributes guide the design of software agents that extend and support understanding, motivation, memory and reasoning capabilities. We will provide examples of agents that add pragmatics to current educational materials. They improve cooperative learning and cooperative design of pedagogical documents. These issues are discussed in the context of a critical analysis of the French educational system and the emergence of new information technology and software agents.
This paper briefly describes four kinds of learning carried out by intelligent agents in a computational environment facilitating joint activities of people and software agents. The types of learning and the applications we draw examples from are: learning by being told and learning by experience, as illustrated through a virtual patient application; learning by reasoning, as illustrated through a clinician's advisor application; and learning by reading, as illustrated by an ontology enhancement application. The agents carrying out these types of learning are modeled using cognitive modeling strategies that show marked parallels with how humans seem to learn.
Proceedings of The …, 1998
2007
An autonomous agent (Franklin and Graesser 1997) is a system situated in, and part of, an environment, which senses that environment, and acts on it, over time, in pursuit of its own agenda. It acts in such a way as to possibly influence what it senses at a later time. In other words, it is structurally coupled to its environment (Maturana 1975, Maturana and Varela 1980). Biological examples of autonomous agents include humans and most animals. Nonbiological examples include some mobile robots, and various computational agents, including artificial life agents, software agents and computer viruses. We will be concerned with an autonomous software agent, "living" in a real world computing system. Autonomous software agents, when equipped with cognitive (interpreted broadly) features chosen from among multiple senses, perception, concept formation, attention, problem-solving, decision making, short and long-term memory, learning, emotions, etc., are called cognitive agents. Though the term is ill-defined, cognitive agents can play a synergistic role in the study of human cognition, including consciousness (Franklin 1997). In this article, cognitive features such as attention are used both in the folk-psychological and technical senses. Here, we are particularly concerned with cognitive software agents that implement global workspace theory, a psychological theory of consciousness (Baars 1988, 1997). Global workspace theory postulates that human cognition is implemented by a multitude of relatively small, special purpose processes, almost always unconscious. It is a multiagent system with a society of its own. Coalitions of such processes, when aroused by novel and/or problematic situations, find their way into a global workspace (and into consciousness).
This limited capacity workspace serves to broadcast the message of the coalition to all the unconscious processors, in order to recruit other processors to join in handling the current novel situation, or in solving the current problem. All this takes place under the auspices of contexts: goal contexts, perceptual contexts, conceptual contexts, and cultural contexts. Each context is itself a coalition of processes. There is much more to the theory, including attention, learning, action selection, and problem solving. We will refer to cognitive agents that implement global workspace theory as "conscious" software agents. "Conscious" software agents are domain-specific entities; very little of their architectures is domain-independent. They adapt and learn by reacting to the changes in their domain, and through their interaction with other agents in their domains, be they human or artificial. Due to this extensive interaction, "conscious" software agents tend to be social creatures, and exhibit some socially situated intelligence.
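The competition-and-broadcast cycle described above can be sketched as a toy model: many small processors, a winning coalition that gains the limited-capacity workspace, and a broadcast that recruits whichever processors can respond. The class names (Codelet, Coalition), the activation rule, and the set-based message matching are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Codelet:
    """An unconscious, special-purpose processor (illustrative)."""
    name: str
    activation: float
    responds_to: set = field(default_factory=set)

    def on_broadcast(self, message):
        # A processor joins in only if the broadcast overlaps
        # with what it knows how to handle.
        return self.name if message & self.responds_to else None

@dataclass
class Coalition:
    """A coalition of codelets aroused by a novel situation."""
    members: list
    message: set

    @property
    def activation(self):
        # Assumed rule: a coalition's strength is the sum of its
        # members' activations.
        return sum(c.activation for c in self.members)

def conscious_broadcast(coalitions, all_codelets):
    # The most active coalition wins the limited-capacity workspace...
    winner = max(coalitions, key=lambda c: c.activation)
    # ...and its message is broadcast to every unconscious processor,
    # recruiting those that can help with the current situation.
    recruits = [c.on_broadcast(winner.message) for c in all_codelets]
    return winner, [r for r in recruits if r is not None]
```

The point of the sketch is the shape of the mechanism: selection is competitive and global, while recruitment is decentralized, since each processor decides for itself whether the broadcast concerns it.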
pat, 2006
This is a report on the LIDA architecture, a work in progress that is based on IDA, an intelligent, autonomous, "conscious" software agent that does personnel work for the US Navy. IDA uses locally developed cutting edge artificial intelligence technology designed to model human cognition. IDA's task is to find jobs for sailors whose current assignments are about to end. She selects jobs to offer a sailor, taking into account the Navy's policies, the job's needs, the sailor's preferences, and her own deliberation about feasible dates. ...
Journal of Artificial General Intelligence
Natural selection has imbued biological agents with motivations moving them to act for survival and reproduction, as well as to learn so as to support both. Artificial agents also require motivations to act in a goal-directed manner and to learn appropriately into various memories. Here we present a biologically inspired motivation system, based on feelings (including emotions) integrated within the LIDA cognitive architecture at a fundamental level. This motivational system, operating within LIDA’s cognitive cycle, provides a repertoire of motivational capacities operating over a range of time scales of increasing complexity. These include alarms, appraisal mechanisms, appetence and aversion, and deliberation and planning.
Journal of Artificial Intelligence Research
Learning by observation can be of key importance whenever agents sharing similar features want to learn from each other. This paper presents an agent architecture that enables software agents to learn by direct observation of the actions executed by expert agents while they are performing a task. This is possible because the proposed architecture displays information that is essential for observation, making it possible for software agents to observe each other. The agent architecture supports a learning process that covers all aspects of learning by observation, such as discovering and observing experts, learning from the observed data, applying the acquired knowledge and evaluating the agent's progress. The evaluation provides control over the decision to obtain new knowledge or apply the acquired knowledge to new problems. We combine two methods for learning from the observed information. The first one, the recall method, uses the sequence in which the actions were observed t...
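The observe-then-apply loop this abstract outlines can be sketched minimally: a learner records an expert's (state, action) pairs in order, then replays the action whose recorded state matches the current one. The exact-match lookup and the preference for the most recent observation are assumptions for illustration, not the paper's recall method (whose description is truncated above).

```python
class ObservationLearner:
    """Illustrative learner that records an expert's behavior and
    replays it; the matching rule is an assumption, not the paper's."""

    def __init__(self):
        self.episodes = []  # observed (state, action) pairs, in order

    def observe(self, state, action):
        # Record what the expert did in each observed state.
        self.episodes.append((state, action))

    def recall(self, state):
        # Replay the most recently observed action for this state;
        # return None when the state was never observed, signalling
        # that new knowledge must be obtained instead.
        for s, a in reversed(self.episodes):
            if s == state:
                return a
        return None
```

The None fallback mirrors the evaluation step the abstract mentions: it is the point where the agent decides between applying acquired knowledge and seeking new observations.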
Toward Artificial Sapience, 2008
Sapient agents have been characterized as systems that learn their cognitive state and capabilities through experience, considering social environments and interactions with other agents or humans. The BDI (belief, desire, intention) model of cognitive agency offers ...
