Adding Life-Like Synthetic Characters to the Web
2000, Lecture Notes in Computer Science
https://doi.org/10.1007/978-3-540-45012-2_1…
Abstract
With the advent of web browsers that are able to execute programs embedded in web pages, the use of animated characters for the presentation of information over the web has become possible. A strong argument in favour of using such characters in a web interface is that they make human-computer interaction more enjoyable and allow for the emulation of communication styles common in human-human dialogue. In this paper we discuss three ongoing DFKI projects on life-like synthetic characters on the internet. While all agents rely on the same approach for automated script generation, we use different player technologies, which we discuss in the light of the different applications.
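As a purely illustrative aside (not taken from the paper), the Python sketch below shows one way an automatically generated presentation script could be represented and played back by a character player; the ScriptAction schema, the action names, and the play_script player are hypothetical assumptions.

```python
from dataclasses import dataclass
from typing import List
import time

@dataclass
class ScriptAction:
    """One step of a generated presentation script (hypothetical schema)."""
    actor: str       # name of the synthetic character
    command: str     # e.g. "speak", "point", "gesture" (illustrative only)
    argument: str    # utterance text or target of a pointing gesture
    duration: float  # seconds to wait before the next action

def play_script(script: List[ScriptAction]) -> None:
    """Minimal stand-in for a character player: prints each action in order."""
    for action in script:
        print(f"{action.actor}: {action.command}({action.argument!r})")
        time.sleep(action.duration)

if __name__ == "__main__":
    # A tiny generated script for a web presentation agent.
    demo = [
        ScriptAction("Persona", "point", "price chart", 0.5),
        ScriptAction("Persona", "speak", "Here you can see how prices developed.", 1.0),
    ]
    play_script(demo)
```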
Related papers
2015
An integral part of social believability in role-playing games is the believability of non-player characters (NPCs). In this paper we argue for the importance of believability in NPCs, even those that are completely outside of any pre-written quest or plot. We present NPCAgency, a system designed to generate many conversational NPCs as packaged narrative assets that can be shared and imported into various projects to increase story-world immersion. We believe such a system can help solve two problems. First, the authorial burden on the game designer is lessened, allowing large numbers of NPCs to be rendered, each with their own unique background and conversation topics, all conforming to the norms of a predefined “universe”. Second, the immersive aspect of the game is heightened, as the player can engage with complex characters that have lengthy dialogue affordances. We demonstrate the concept by generating fifty characters with attributes drawn from “Game of Thrones” (GOT) / “A Song of Ice and Fire...
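As a purely illustrative aside, the following hypothetical Python sketch shows the kind of attribute-pool generation and asset packaging the abstract describes; the pools, the generate_npc helper, and the JSON schema are assumptions, not part of NPCAgency.

```python
import json
import random

# Illustrative attribute pools; a real "universe" definition would be far richer.
HOUSES = ["Stark", "Lannister", "Targaryen"]
OCCUPATIONS = ["blacksmith", "septa", "sellsword"]
TOPICS = ["the weather in the North", "rumours from King's Landing", "the last harvest"]

def generate_npc(npc_id: int) -> dict:
    """Assemble one conversational NPC as a shareable narrative asset."""
    return {
        "id": npc_id,
        "house": random.choice(HOUSES),
        "occupation": random.choice(OCCUPATIONS),
        "conversation_topics": random.sample(TOPICS, k=2),
    }

if __name__ == "__main__":
    npcs = [generate_npc(i) for i in range(50)]
    print(json.dumps(npcs[0], indent=2))  # each dict could be saved as an asset file
```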
Applied Artificial Intelligence, 1999
Lecture Notes in Computer Science, 2005
During the last decade, research groups as well as a number of commercial software developers have started to deploy embodied conversational characters in the user interface, especially in application areas where a close emulation of multimodal human-human communication is needed. Most of these characters have one thing in common: in order to enter the user's physical world, they need to be physical themselves. The paper focuses on challenges that arise when embedding synthetic conversational agents in the user's physical world. We start from work on synthetic agents that populate virtual worlds and on anthropomorphic robots that inhabit physical worlds, and discuss how the two areas need to be combined in order to populate physical worlds with synthetic characters. Finally, we report on so-called traversable interfaces that allow agents to cross the border from the physical space to the virtual space and vice versa.
Flairs, 2009
With the most resource-intensive tasks in games offloaded to special-purpose processors, game designers now have the opportunity to build richer characters using more complex AI techniques than have been used in the past. While additional CPU time makes improved AI feasible, better tools for building agents are needed to make good interactive characters a reality. In this paper we present the BEHAVEngine and BehaviorShop, which enable the creation of rich interactive characters.
1996
This paper introduces Linguistic Style Improvisation, a theory and algorithms for improvisation of spoken utterances by artificial agents, with applications to interactive story and dialogue systems. We argue that linguistic style is a key aspect of character, and show how speech act representations common in AI can provide abstract representations from which computer characters can improvise. We show that the mechanisms proposed introduce the possibility of socially oriented agents, meet the requirements that lifelike characters be believable, and satisfy particular criteria for improvisation proposed by Hayes-Roth.
Lecture Notes in Computer Science, 2003
Embodied conversational characters are autonomous, graphically embodied virtual creatures that live in a 2D or 3D virtual environment. They are able to interact intelligently with human users, other characters, and their digital environment. While for decades research concentrated on geometric body modelling and the development of animation and rendering techniques for virtual characters, other qualities have now come into focus as well, including the provision of conversational skills and the simulation of believable behavior, such as affect and peculiarities induced by individual personality traits. As a consequence, the domain of virtual characters has become much more diverse and now encompasses a wide range of disciplines, from computer graphics and animation to AI and, more recently, psychology, sociology, design, and the arts. The current paper discusses a number of design issues that arise when building an application with one or more embodied characters. By means of selected sample applications, we also illustrate the ongoing development of animated presentation agents, from TV-style information presenters to highly interactive multi-character scenarios in which information is conveyed to the user in the form of multi-party conversations.
2000
We are creating an environment in which to investigate the role of advanced AI in computer games. This environment is based on the Unreal Tournament (UT) game engine and the Soar AI engine. Unreal provides a 3D virtual environment, while Soar provides a flexible architecture for developing complex AI characters. This paper describes our progress to date, starting with our game, Haunt 2, which is designed so that complex AI characters will be critical to the success (or failure) of the game. We also describe the extensions we have made to UT to support AI characters with complex physiology so that the AI characters' behavior is driven by their interaction with their environment, their internal long-term goals, and any story-based goals. Finally, we describe the overall system design and interfaces between Soar and UT to support flexible development as well as efficient implementation.
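As a rough, hypothetical illustration of the sense-decide-act coupling between an AI engine and a game engine described above, the Python sketch below runs one decision cycle for a toy character whose behavior is driven by its state, long-term goal, and story goals; it does not use the actual Soar or Unreal Tournament APIs, and all names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Percept:
    """A snapshot of what the game engine reports about the character."""
    health: int
    visible_items: List[str]

@dataclass
class Character:
    """Toy stand-in for an AI-driven game character."""
    long_term_goal: str
    story_goals: List[str] = field(default_factory=list)

    def decide(self, percept: Percept) -> str:
        # Internal state (here just health) competes with long-term and story goals.
        if percept.health < 30:
            return "seek_healing"
        if self.story_goals:
            return f"pursue:{self.story_goals[0]}"
        return f"pursue:{self.long_term_goal}"

def game_tick(character: Character, percept: Percept) -> str:
    """One sense-decide-act cycle; the returned action would go back to the engine."""
    return character.decide(percept)

if __name__ == "__main__":
    guard = Character(long_term_goal="patrol_hallway", story_goals=["investigate_noise"])
    print(game_tick(guard, Percept(health=80, visible_items=["lantern"])))
```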
2011
Abstract. For many application areas where a task is most naturally represented by talking, or where standard input devices are difficult to use or not available at all, virtual characters can be well suited as an intuitive man-machine interface due to their inherent ability to simulate verbal as well as nonverbal communicative behavior. This type of interface is made possible with the help of multimodal dialog systems, which extend common speech dialog systems with additional modalities, just as in human-human interaction.
Lecture Notes in Computer Science, 2003
In this paper, we propose the use of the Belief-Desire-Intention (BDI) model for cognitive agents for the implementation of animated characters. The BDI agent architecture has been widely used in dynamic and complex scenarios where agents may need to act under incomplete and incorrect information about other agents and the environment. In this work, we bring together an articulated model for character animation and an interpreter for AgentSpeak(L), an agent-oriented programming language that implements the BDI architecture. We have developed an interface that allows the BDI-based agent reasoning system to be used for guiding the behaviour of articulated characters in a virtual environment. This is a promising approach for the high-level specification of complex computer animations. The paper also presents a simple 3D animation that illustrates the use of BDI specifications in our approach.
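The following Python sketch is a rough, hypothetical illustration of BDI-style plan selection driving animation commands; it is neither AgentSpeak(L) nor the authors' interface, and the plan library, context conditions, and command names are assumptions made for illustration.

```python
from typing import Dict, List, Set

# Beliefs are simple atoms; each plan pairs a triggering goal with a context
# condition over the belief base and a body of animation commands.
Beliefs = Set[str]
Plan = Dict[str, object]

PLANS: List[Plan] = [
    {
        "goal": "greet_visitor",
        "context": lambda b: "visitor_present" in b,
        "body": ["turn_towards(visitor)", "wave(right_hand)", "say('Hello!')"],
    },
    {
        "goal": "greet_visitor",
        "context": lambda b: True,  # fallback if no visitor is perceived
        "body": ["idle()"],
    },
]

def select_plan(goal: str, beliefs: Beliefs) -> List[str]:
    """Return the body of the first applicable plan for the goal (BDI-style option selection)."""
    for plan in PLANS:
        if plan["goal"] == goal and plan["context"](beliefs):
            return list(plan["body"])
    return []

if __name__ == "__main__":
    beliefs: Beliefs = {"visitor_present"}
    for command in select_plan("greet_visitor", beliefs):
        print("animate:", command)  # would be forwarded to the articulated character
```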
Abstract. This paper presents educational software based on a math tutor for elementary school children. The software teaches the basic operations of mathematics through a virtual tutor endowed with personality. The virtual tutor is a cognitive agent that integrates a behavioral model with verbal and nonverbal expressions. The architecture is based on the concepts of cloud computing and RIA (Rich Internet Applications) for easy access through the Internet and mobile devices.
