Papers by Brian Scassellati

Proceedings of the Fifteenth National Conference on Artificial Intelligence and Tenth Conference on Innovative Applications of Artificial Intelligence, Jul 1, 1998
We present a novel methodology for building humanlike artificially intelligent systems. We take as a model the only existing systems which are universally accepted as intelligent: humans. We emphasize building intelligent systems which are not masters of a single domain but, like humans, are adept at performing a variety of complex tasks in the real world. Using evidence from cognitive science and neuroscience, we suggest four alternative essences of intelligence to those held by classical AI. These are the parallel themes of development, social interaction, embodiment, and integration. Following a methodology based on these themes, we have built a physical humanoid robot. In this paper we present our methodology and the insights it affords for facilitating learning, simplifying the computation underlying rich behavior, and building systems that can scale to more complex tasks in more challenging environments.
We present and discuss four important yet underserved research questions critical to the future of shared-environment human-robot collaboration. We begin with a brief survey of research surrounding the individual components required for a complete collaborative robot control system, discussing the current state of the art in Learning from Demonstration, active learning, adaptive planning systems, and intention recognition. We motivate the exploration of the presented research questions by relating them to existing work and representative use cases from the domains of construction and cooking.
Fostering Learning Gains Through Personalized Robot-Child Tutoring Interactions
Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, Mar 2, 2015

Effective robot collaborators that work with humans require an understanding of the underlying constraint network of any joint task to be performed. Discovering this network allows an agent to more effectively plan around coworker actions or unexpected changes in its environment. To maximize the practicality of collaborative robots in real-world scenarios, humans should not be assumed to have an abundance of time, patience, or prior insight into the underlying structure of a task when relied upon to provide the training required to impart proficiency and understanding. This work introduces and experimentally validates two demonstration-based active learning strategies that a robot can utilize to accelerate context-free task comprehension. These strategies are derived from the action-space graph, a dual representation of a Semi-Markov Decision Process graph that acts as a constraint network and informs query generation. We present a pilot study showcasing the effectiveness of these active learning algorithms across three representative classes of task structure. Our results show an increased effectiveness of active learning when utilizing feature-based query strategies, especially in multi-instructor scenarios, achieving better task comprehension from a relatively small quantity of training demonstrations. We further validate our results by creating virtual instructors from a model of our pilot study participants and applying them to a set of 12 more complex, real-world food preparation tasks, with similar results.
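As a rough illustration of the query-generation idea described in the abstract above, the sketch below shows one way a robot might pick a demonstration-based question to ask: count how often each pair of actions appears in either order across the demonstrations and query the instructor about the most ambiguous pair. This is a minimal assumption-laden stand-in, not the paper's action-space-graph algorithm; names such as `pick_query` and the toy cooking demos are invented for the example.

```python
# Hypothetical sketch of demonstration-based active learning over task constraints.
# Not the paper's algorithm: it simply queries the action pair whose ordering is
# least constrained by the demonstrations seen so far.

from collections import defaultdict
from itertools import combinations

def pick_query(demos):
    """demos: list of action sequences (lists of hashable action labels).
    Returns the (a, b) pair whose relative order is most ambiguous."""
    before = defaultdict(int)          # before[(a, b)] = times a appeared before b
    for demo in demos:
        for i, a in enumerate(demo):
            for b in demo[i + 1:]:
                before[(a, b)] += 1

    actions = {a for demo in demos for a in demo}
    best_pair, best_margin = None, float("inf")
    for a, b in combinations(sorted(actions), 2):
        margin = abs(before[(a, b)] - before[(b, a)])   # small margin = ambiguous
        if margin < best_margin:
            best_pair, best_margin = (a, b), margin
    return best_pair

if __name__ == "__main__":
    demos = [["chop", "boil", "plate"], ["boil", "chop", "plate"]]
    print(pick_query(demos))   # ('boil', 'chop'): order unresolved, worth asking about
```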
Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, Jul 31, 1999
This paper presents part of an on-going project to integrate perception, attention, drives, emotions, behavior arbitration, and expressive acts for a robot designed to interact socially with humans. We present the design of a visual attention system based on a model of human visual search behavior from Wolfe (1994). The attention system integrates perceptions (motion detection, color saliency, and face pop-outs) with habituation effects and influences from the robot's motivational and behavioral state to create a context-dependent attention activation map. This activation map is used to direct eye movements and to satiate the drives of the motivational system.
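To make the activation-map idea concrete, here is a minimal sketch (not the robot's actual code) of a Wolfe-style attention map: bottom-up feature maps are combined with top-down gains set by the motivational state, a habituation map suppresses recently attended locations, and the peak of the result selects the next gaze target. The array shapes, gain values, and function names are illustrative assumptions.

```python
# Minimal, assumed sketch of a context-dependent attention activation map.

import numpy as np

def attention_target(motion, color, faces, habituation, gains):
    """Each map is an HxW float array; gains maps feature name -> weight."""
    activation = (gains["motion"] * motion
                  + gains["color"] * color
                  + gains["faces"] * faces
                  - habituation)                      # suppress stale targets
    y, x = np.unravel_index(np.argmax(activation), activation.shape)
    return (x, y), activation

# Example: a "social" drive boosts the face channel so faces win the competition.
H, W = 48, 64
maps = {k: np.random.rand(H, W) for k in ("motion", "color", "faces")}
habituation = np.zeros((H, W))
social_gains = {"motion": 0.5, "color": 0.3, "faces": 1.5}
target, _ = attention_target(maps["motion"], maps["color"], maps["faces"],
                             habituation, social_gains)
print("next gaze target (x, y):", target)
```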

We present the results of a 100-participant study on the role of a robot's physical presence in a robot tutoring task. Participants were asked to solve a set of puzzles while being provided occasional gameplay advice by a robot tutor. Each participant was assigned one of five conditions: (1) no advice, (2) robot providing randomized advice, (3) voice of the robot providing personalized advice, (4) video representation of the robot providing personalized advice, or (5) physically-present robot providing personalized advice. We assess the tutor's effectiveness by the time it takes participants to complete the puzzles. Participants who received personalized advice from the physically-present robot solved most puzzles faster on average and improved their same-puzzle solving time significantly more than participants in any other group. Our study is the first to assess the effect of the physical presence of a robot in an automated tutoring interaction. We conclude that physical embodiment can produce measurable learning gains.
The Problem: If a robot is intended to interact with people, it needs an active vision system that can serve both a perceptual and a communicative function.
Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, HRI 2009, La Jolla, California, USA, March 9-13, 2009

Joint visual attention is a critical aspect of typical human interactions. Psychophysics experiments indicate that people exhibit strong reflexive attention shifts in the direction of another person's gaze, but not in the direction of non-social cues such as arrows. In this experiment, we ask whether robot gaze elicits the same reflexive cueing effect as human gaze. We consider two robots, Zeno and Keepon, to establish whether differences in cueing depend on the level of robot anthropomorphism. Using psychophysics methods for measuring attention by analyzing time to identification of a visual probe, we compare attention shifts elicited by five directional stimuli: a photograph of a human face, a line drawing of a human face, Zeno's gaze, Keepon's gaze, and an arrow. Results indicate that all stimuli convey directional information, but that robots fail to elicit the attentional cueing effects that are evoked by non-robot stimuli, regardless of robot anthropomorphism.
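The cueing measure referenced in the abstract above is conventionally computed as a response-time difference. The toy sketch below shows that arithmetic only; the trial records, cue labels, and numbers are invented for illustration and are not the authors' data or analysis code.

```python
# Toy illustration of a reflexive-cueing effect: mean response time on incongruent
# trials (probe opposite the cued direction) minus mean on congruent trials.
# Positive values suggest attention shifted toward the cued direction.

from statistics import mean

trials = [
    {"cue": "human_face", "congruent": True,  "rt_ms": 412},
    {"cue": "human_face", "congruent": False, "rt_ms": 447},
    {"cue": "arrow",      "congruent": True,  "rt_ms": 430},
    {"cue": "arrow",      "congruent": False, "rt_ms": 433},
]

def cueing_effect(trials, cue):
    cong   = [t["rt_ms"] for t in trials if t["cue"] == cue and t["congruent"]]
    incong = [t["rt_ms"] for t in trials if t["cue"] == cue and not t["congruent"]]
    return mean(incong) - mean(cong)

for cue in sorted({t["cue"] for t in trials}):
    print(cue, cueing_effect(trials, cue), "ms")
```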
Kismet and Cog, humanoid robots at the MIT Artificial Intelligence Laboratory, are "relational artifacts," ...
Humans (and some other animals) acquire new skills socially through direct tutelage, observational conditioning, goal emulation, imitation, and other methods. These social learning skills provide a powerful mechanism for an observer to acquire behaviors and knowledge from a skilled individual (the model). In particular, imitation is an extremely powerful mechanism for social learning which has received a great deal of interest from researchers in the fields of animal behavior and child development.
The authors implemented a system which performs a fundamental visuomotor coordination task on the humanoid robot Cog. Cog's task was to saccade its pair of two degree-of-freedom eyes to foveate on a target, and then to maneuver its six degree-of-freedom compliant arm to point at that target. This task requires systems for learning to saccade to visual targets, generating smooth arm trajectories, locating the arm in the visual field, and learning the map between gaze direction and the correct pointing configuration of the arm. All learning was self-supervised solely by visual feedback. The task was accomplished by many parallel processes running on a seven-processor, extensible-architecture MIMD computer.
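A highly simplified sketch of the gaze-to-pointing mapping described above, in the spirit of the Cog experiment but not its implementation: paired (gaze, arm configuration) samples are stored and queried by nearest neighbour, and a sample is recorded only when vision confirms the fingertip lies on the foveated target, so visual feedback is the only teacher. The class, the stub `tip_on_target`, and the numeric choices are assumptions for illustration.

```python
# Assumed sketch of self-supervised learning of a gaze-to-arm pointing map.

import numpy as np

class GazeToArmMap:
    def __init__(self, arm_dof=6):
        self.arm_dof = arm_dof
        self.gazes, self.arms = [], []          # paired (gaze, arm config) samples

    def predict(self, gaze):
        """Nearest-neighbour lookup of a pointing configuration for a gaze angle."""
        if not self.gazes:
            return np.zeros(self.arm_dof)       # neutral posture before any learning
        dists = [np.linalg.norm(g - gaze) for g in self.gazes]
        return self.arms[int(np.argmin(dists))]

    def record_success(self, gaze, arm_config):
        """Store a configuration that vision confirmed points at the target."""
        self.gazes.append(np.asarray(gaze, dtype=float))
        self.arms.append(np.asarray(arm_config, dtype=float))

def practice(gaze_map, gaze, tip_on_target, tries=20, noise=0.05):
    """Perturb the current guess until the visual check passes, then store it."""
    guess = gaze_map.predict(gaze)
    for _ in range(tries):
        candidate = guess + noise * np.random.randn(gaze_map.arm_dof)
        if tip_on_target(candidate):            # visual feedback is the only teacher
            gaze_map.record_success(gaze, candidate)
            return candidate
    return None
```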
The Computational Modeling of Perceptual Biases of Children with ASD in Naturalistic Settings
Exploration of the Activities of Others Predicts Social and Cognitive Deficits in Toddlers with ASD

2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL), 2013
One of the hallmarks of development is the transition of an agent from novice learner to able partner to experienced instructor. While most machine learning approaches focus on the first transition, we are interested in building an effective learning and development system that allows the complete range of transitions to occur. In this paper, we present a mechanism enabling such transitions within the context of collaborative social tasks. We present a cooperative robot system capable of learning a hierarchical task execution from an experienced human user, collaborating safely with a knowledgeable human peer, and instructing a novice user, based on the explicit inclusion of a feature we term social force within the planning and skill-execution subsystems. We conclude with an evaluation of this feature's flexibility within a collaborative construction task, changing a robot's behaviors between student, peer, and instructor through simple manipulations of this feature's treatment within the planning subsystem.
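The sketch below is a speculative illustration of how a single social-force parameter could shift a planner between the student, peer, and instructor roles named in the abstract; it is not the authors' system. The scoring rule, the action tuples, and the role interpretations in the comments are all assumptions made for the example.

```python
# Speculative sketch: a social-force term added to a planner's action scores.

def score(task_gain, human_overlap, social_force):
    """task_gain: expected progress from the action; human_overlap: 1.0 if the
    human partner could perform (or learn from) this action, else 0.0."""
    return task_gain + social_force * human_overlap

def choose(actions, social_force):
    """actions: list of (name, task_gain, human_overlap); returns the best name."""
    return max(actions, key=lambda a: score(a[1], a[2], social_force))[0]

actions = [("place_beam", 0.8, 1.0),   # the human partner could also place the beam
           ("fetch_bolt", 0.5, 0.0)]   # only the robot can reach the bolt bin

print(choose(actions, social_force=-1.0))  # fetch_bolt: student role, yields shared work
print(choose(actions, social_force=0.0))   # place_beam: peer, acts purely on task progress
print(choose(actions, social_force=+1.0))  # place_beam: instructor, favours demonstrable actions
```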
While robots have been used extensively for the purpose of teaching symbolic knowledge, using robots to teach or refine the motor skills of humans, such as swinging a bat or shooting a basketball, is underserved. Robots are uniquely well situated to observe physical movements, identify problems, prioritize which problems to address first, and patiently communicate personalized advice to the student. We propose an architecture to coach physical skills, and focus on the second and third of these challenges (identifying problems with the movements and prioritizing which to address first) as applied to the domain of shooting a basketball. We present a supervised learning approach to prioritize which problems to work on, and propose the design of several user studies that will determine the effectiveness of the algorithm.
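One plausible shape for the supervised prioritization step mentioned above is sketched here: each detected movement flaw is described by simple features and a classifier is fit on coach-labelled examples of which flaw to address first. This is an illustration of the general idea, not the authors' model; the feature set, flaw names, and training rows are fabricated solely for the example.

```python
# Hypothetical sketch of learning a coaching priority over detected movement flaws.

from sklearn.linear_model import LogisticRegression
import numpy as np

# Each row: [severity (0-1), frequency (0-1), estimated impact on accuracy (0-1)]
X = np.array([[0.9, 0.8, 0.7],   # badly bent elbow, almost every shot
              [0.3, 0.9, 0.2],   # minor foot placement issue
              [0.7, 0.4, 0.8],   # inconsistent follow-through
              [0.2, 0.2, 0.1]])  # slight head tilt
y = np.array([1, 0, 1, 0])       # coach label: 1 = address this flaw now

model = LogisticRegression().fit(X, y)

def prioritize(flaws):
    """flaws: dict name -> feature vector; returns names ranked by learned priority."""
    names = list(flaws)
    probs = model.predict_proba(np.array([flaws[n] for n in names]))[:, 1]
    return [n for _, n in sorted(zip(probs, names), reverse=True)]

print(prioritize({"elbow": [0.8, 0.7, 0.6], "stance": [0.3, 0.5, 0.2]}))
```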

Humanoid Robotics
...from radically different research agendas and underlying assumptions. At the MIT AI Lab, three basic principles guide our research. First, we design humanoid robots to act autonomously and safely, without human control or supervision, in natural work environments and to interact with people. We do not design them as solutions for specific robotic needs (as with welding robots on assembly lines). Our goal is to build robots that function in many different real-world environments in essentially the same way. Second, social robots must be able to detect and understand natural human cues (the low-level social conventions that people understand and use every day, such as head nods or eye contact) so that anyone can interact with them without special training or instruction. They must also be able to employ those conventions to perform an interactive exchange. The necessity of these abilities influences the robots' control-system design and physical embodiment. Third, robotics offers a unique tool for t...

Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI '15, 2015
Short et al. found that in a game between a human participant and a humanoid robot, the participant will perceive the robot as being more agentic and as having more intentionality if it cheats than if it plays without cheating. However, in that design, the robot that actively cheated also generated more motion than in the other conditions. In this paper, we investigate whether the additional movement of the cheating gesture is responsible for the increased agency and intentionality or whether the act of cheating itself triggers this response. In a between-participant design with 83 participants, we disambiguate between these causes by testing (1) the robot cheating to win, (2) cheating to lose, (3) cheating to tie from a winning position, and (4) cheating to tie from a losing position. Despite the fact that the robot changes its gesture to cheat in all four conditions, we find that participants are more likely to report the gesture change when the robot cheated to win from a losing position, compared with the other conditions. Participants in that same condition are also far more likely to protest in the form of an utterance following the cheat and to report that the robot is less fair and honest. It is therefore the adversarial cheat itself that causes the effect, not the change in gesture, providing evidence for a cheating detector that can be triggered by robots.
Proceedings 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No.99CH36289), 1999
In order to interact socially with a human, a robot must convey intentionality, that is, the human must believe that the robot has beliefs, desires, and intentions. We have constructed a robot which exploits natural human social tendencies to convey intentionality through motor actions and facial expressions. We present results on the integration of perception, attention, motivation, behavior, and motor systems which allow the robot to engage in infant-like interactions with a human caregiver.