Intelligent Behaviors for Simulated Entities
2005
Abstract
In nearly any simulation system, there will be entities, that is, platforms or forces, that are not under the control of a human participant, either because the necessary personnel are not available or because they would be too costly. These entities must mimic the behavior of the real-world entities that they represent in order to achieve some level of believability in the simulation. Crafting realistic, intelligent-seeming behaviors for simulated entities is a non-trivial task, however, and there are a variety of techniques that can be employed. While there exists a diverse array of agent architectures, two common approaches are cognitive architectures and state machines.
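To make the state-machine approach concrete, the sketch below shows a minimal finite state machine for a single simulated entity. The states, transition rules, and thresholds are invented for illustration and are not taken from the paper.

```python
from enum import Enum, auto


class State(Enum):
    PATROL = auto()
    ENGAGE = auto()
    RETREAT = auto()


class EntityFSM:
    """Minimal finite state machine for a simulated entity (illustrative only)."""

    def __init__(self):
        self.state = State.PATROL

    def update(self, enemy_visible: bool, health: float) -> State:
        # Transition rules are hard-coded, which is what makes plain state
        # machines easy to author but brittle to extend or adapt.
        if self.state == State.PATROL and enemy_visible:
            self.state = State.ENGAGE
        elif self.state == State.ENGAGE and health < 0.3:
            self.state = State.RETREAT
        elif self.state == State.RETREAT and not enemy_visible:
            self.state = State.PATROL
        return self.state


fsm = EntityFSM()
print(fsm.update(enemy_visible=True, health=0.8))   # State.ENGAGE
print(fsm.update(enemy_visible=True, health=0.2))   # State.RETREAT
```

The hand-written transition table is exactly the strength and the weakness of this approach: behavior is transparent and cheap to build, but every new situation requires another explicit rule.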
Related papers
Cornell University - arXiv, 2021
Behaviors of the synthetic characters in current military simulations are limited since they are generally generated by rule-based and reactive computational models with minimal intelligence. Such computational models cannot adapt to reflect the experience of the characters, resulting in brittle intelligence even for the most effective behavior models devised via costly and labor-intensive processes. Observation-based behavior model adaptation, which leverages machine learning and the experience of synthetic entities in combination with appropriate prior knowledge, can address the issues in existing computational behavior models and create a better training experience in military training simulations. In this paper, we introduce a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior while being aware of human trainees and their needs within a training simulation. This framework brings together three mutually complementary components. The first component is a Unity-based simulation environment, the Rapid Integration and Development Environment (RIDE), which supports One World Terrain (OWT) models and is capable of running and supporting machine learning experiments. The second is Shiva, a novel multi-agent reinforcement and imitation learning framework that can interface with a variety of simulation environments and can additionally utilize a variety of learning algorithms. The final component is the Sigma Cognitive Architecture, which will augment the behavior models with symbolic and probabilistic reasoning capabilities. We have successfully created proof-of-concept behavior models leveraging this framework on realistic terrain as an essential step towards bringing machine learning into military simulations: (1) to improve the quality and complexity of non-player characters in training simulations; (2) to create more realistic and challenging training experiences while reducing the cost and time needed to develop them; and (3) to make simulations less dependent on the availability of human participants.
ABOUT THE AUTHORS: Volkan Ustun is a senior artificial intelligence researcher at the USC Institute for Creative Technologies. His general research interests are cognitive architectures, computational cognitive models, natural language, and simulation. He is a member of the cognitive architecture group and a major contributor to the Sigma cognitive architecture.
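The abstract does not specify how the learning framework and the simulation environment talk to each other, but the coupling it describes is the familiar observation/action loop. The sketch below assumes a generic gym-style interface; the class and method names are hypothetical stand-ins, not the actual RIDE or Shiva APIs.

```python
import random


class SimEnvironment:
    """Stand-in for a simulation environment exposing a gym-style API
    (observation in, action out). Not the actual RIDE interface."""

    def reset(self):
        return [0.0, 0.0]                      # initial observation

    def step(self, action):
        obs = [random.random(), random.random()]
        reward = 1.0 if action == 1 else 0.0   # toy reward signal
        done = random.random() < 0.05          # toy episode termination
        return obs, reward, done


class RandomAgent:
    """Placeholder for a learned policy; a real framework would update it
    from experience (reinforcement or imitation learning)."""

    def act(self, obs):
        return random.choice([0, 1])

    def observe(self, obs, action, reward, next_obs, done):
        pass                                   # learning update would go here


env, agent = SimEnvironment(), RandomAgent()
obs, done = env.reset(), False
while not done:
    action = agent.act(obs)
    next_obs, reward, done = env.step(action)
    agent.observe(obs, action, reward, next_obs, done)
    obs = next_obs
```

The point of such a thin interface is that the same agent code can be pointed at different simulation environments, which is the interoperability claim the framework makes.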
2009
We introduce a cognitive architecture that combines human factors, human cognitive modeling, decision making, perception, and action selection based on hierarchical task networks. We make use of a cognitive model that permits us to define a control mode for a virtual agent. This control mode influences the agent's decisions as well as its perception, its knowledge, and its goals. It depends on the agent's physical and cognitive states and on its personality. We determine which factors influence the agent's control mode and use them to generate various behaviors depending on the situational constraints. We propose a way to model the environment, along with new algorithms that enable our virtual characters to act in the environment dynamically and with a certain degree of believability.
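A hedged sketch of the central idea, that a control mode derived from the agent's physical and cognitive state selects how a hierarchical task network is decomposed, is shown below. The mode names, tasks, and thresholds are invented for illustration and are not taken from the paper.

```python
def control_mode(fatigue: float, stress: float) -> str:
    """Map the agent's physical/cognitive state to a control mode.
    Thresholds are arbitrary illustrative values."""
    load = 0.5 * fatigue + 0.5 * stress
    if load > 0.7:
        return "scrambled"       # degraded, purely reactive behavior
    if load > 0.4:
        return "opportunistic"
    return "strategic"           # deliberate, planned behavior


# A toy hierarchical task network: each task decomposes into subtasks,
# and the decomposition chosen depends on the current control mode.
HTN = {
    "evacuate_building": {
        "strategic":     ["check_exits", "guide_group", "exit_calmly"],
        "opportunistic": ["find_nearest_exit", "exit"],
        "scrambled":     ["run_to_visible_exit"],
    }
}


def decompose(task: str, fatigue: float, stress: float) -> list:
    """Pick the subtask sequence for `task` given the agent's current state."""
    mode = control_mode(fatigue, stress)
    return HTN[task][mode]


print(decompose("evacuate_building", fatigue=0.9, stress=0.8))
# ['run_to_visible_exit']
```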
2009
Simulation-based training in military decision making often requires ample personnel for playing various roles (e.g., teammates, adversaries). Usually, humans are used to play these roles to ensure the varied behavior required for training such tasks. However, there is growing conviction and evidence that intelligent agents can also produce human-like, variable behavior. At the same time, it is known that goal-directed, systematic training is more effective than learning-by-doing alone.
2015
Behavioral modeling of combat entities in military simulations, by creating synthetic agents that satisfy various battle scenarios, is an important problem. Conventional modeling tools are not always sufficient to handle complex situations requiring adaptation. To deal with this, Agent-Based Modeling (ABM) is employed, as agents exhibit autonomous behavior by adapting and varying their behavior during the course of the simulation while achieving their goals. Synthetic agents created by means of Computer Generated Forces (CGF) are a relatively recent approach to modeling the behavior of combat entities for more realistic training and effective military planning. CGFs are also sometimes referred to as Semi-Automated Forces (SAF) and enable the creation of high-fidelity simulations. Agents are used to control and augment the behavior of CGF entities, converting them into Intelligent CGF (ICGF). These intelligent agents can be modeled to exhibit cognitive abilities. For this review p...
Lecture Notes in Computer Science, 2012
Second Life is a popular multi-purpose online virtual world that provides a rich platform for remote human interaction. It is increasingly being used as a simulation platform to model complex human interactions in diverse areas, as well as to simulate multi-agent systems. It would therefore be beneficial to provide techniques allowing high-level agent development tools, especially cognitive agent platforms such as belief-desire-intention (BDI) programming frameworks, to be interfaced to Second Life. This is not a trivial task as it involves mapping potentially unreliable sensor readings from complex Second Life simulations to a domain-specific abstract logical model of observed properties and/or events. This paper investigates this problem in the context of agent interactions in a multi-agent system simulated in Second Life. We present a framework which facilitates the connection of any multi-agent platform with Second Life, and demonstrate it in conjunction with an extension of the Jason BDI interpreter.
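The core difficulty identified here is turning noisy, possibly repeated sensor readings into stable logical percepts a BDI agent can reason over. The sketch below is a generic illustration of such a filter; the data format, threshold, and function name are assumptions for illustration, not the paper's framework or the Jason API.

```python
from collections import Counter


def to_beliefs(readings, min_count=2):
    """Collapse repeated, noisy sensor readings into abstract beliefs.
    A property is only believed once it has been observed at least
    `min_count` times in the current window (a simple debounce filter)."""
    counts = Counter(
        (r["object"], r["property"])
        for r in readings
        if r.get("confidence", 1.0) >= 0.5     # drop low-confidence readings
    )
    return {f"{prop}({obj})" for (obj, prop), n in counts.items() if n >= min_count}


# Three readings about the same avatar, one of them a low-confidence outlier.
readings = [
    {"object": "avatar_42", "property": "near_door", "confidence": 0.9},
    {"object": "avatar_42", "property": "near_door", "confidence": 0.8},
    {"object": "avatar_42", "property": "sitting",   "confidence": 0.2},
]
print(to_beliefs(readings))   # {'near_door(avatar_42)'}
```

A filter of this kind sits between the raw simulation feed and the agent's belief base, so the deliberation cycle only ever sees the abstract logical model of observed properties and events.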
Communications of the ACM, 1999
Synthetic agents with varying degrees of intelligence and autonomy are being designed in many research laboratories. The motivations include military training simulations, games and entertainments, educational software, digital personal assistants, software agents managing Internet transactions or purely scientific curiosity.
2001
A strong need exists for a simulation-based means of providing individuals in deployed settings with teamwork and cross-platform coordination skills training in realistic, mission-oriented scenarios. This need can be met by using advanced human behavioral representation technology to provide synthetic teammates and collateral entities operating within a sophisticated synthetic battlespace. When combined with a full range of automated, intelligent agent-based instructional support capabilities, including real-time performance measurement, diagnosis, and feedback, along with menu-driven scenario generation and a replay capability for debriefing purposes, the result is a system called Synthetic Cognition for Operational Team Training (SCOTT). An initial SCOTT system is being developed to provide training in cross-platform coordination skills for members of the Navy E-2C Hawkeye tactical crew. The architecture and behavioral representation issues in E-2C SCOTT are discussed.
During the NATO HFM research workshop, a special session was dedicated to learning techniques. The discussion following the presentation was guided by the following questions:
• What are the factors enabling/limiting learning in EVS?
• When is the optimal time to provide feedback in EVS? On error, on trainee request, other?
• How much human involvement is required to support learning in EVS, and how much can be delegated to intelligent agents?
• How is feedback provided to trainees during embedded training?
A significant part of the discussion on learning technologies centered on intelligent agents for embedded training and how they might support the constraints (e.g., lack of availability of a human tutor) implied by the questions above. Also discussed was how they differ from conventional technology-based training environments.
2008 6th IEEE International Conference on Industrial Informatics, 2008
