Papers by Lindsay Sanneman
XRDS: Crossroads, The ACM Magazine for Students
Insights from the field of human factors can help us design human-centered explanations that enable effective human-robot interaction. Studying explanation techniques according to these human factors will be critical in understanding their efficacy across diverse contexts.

Lecture Notes in Computer Science, 2020
Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems to be understandable to human users. The explainable AI (XAI) literature aims to enhance human understanding and human-AI team performance by providing users with necessary information about AI system behavior. Simultaneously, the human factors literature has long addressed important considerations that contribute to human performance, including how to determine human informational needs. Drawing from the human factors literature, we propose a three-level framework for the development and evaluation of explanations about AI system behavior. Our proposed levels of XAI are based on the informational needs of human users, which can be determined using the levels of situation awareness (SA) framework from the human factors literature. Based on our levels of XAI framework, we also propose a method for assessing the effectiveness of XAI systems.
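The three SA levels the framework builds on (Endsley's perception, comprehension, and projection) can be sketched as a simple mapping from each level to the kind of explanation content it calls for. The level names are standard from the human factors literature, but the content strings and function below are illustrative assumptions, not the paper's taxonomy.

```python
from enum import Enum

class SALevel(Enum):
    """Endsley's three levels of situation awareness."""
    PERCEPTION = 1     # what the AI system is doing now
    COMPREHENSION = 2  # why it is doing it
    PROJECTION = 3     # what it will do next

# Hypothetical mapping from each SA level to the kind of
# information an explanation should carry at that level.
XAI_CONTENT = {
    SALevel.PERCEPTION: "current action or decision of the AI system",
    SALevel.COMPREHENSION: "rationale behind the current decision",
    SALevel.PROJECTION: "predicted future behavior of the AI system",
}

def required_content(level: SALevel) -> str:
    """Return the explanation content targeted at a given SA level."""
    return XAI_CONTENT[level]
```

In this reading, an XAI system would be evaluated by whether its explanations actually supply the content each targeted level needs.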

Validating metrics for reward alignment in human-autonomy teaming
Computers in Human Behavior, Sep 1, 2023
The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems
International Journal of Human–Computer Interaction, Jun 15, 2022
An Empirical Study of Reward Explanations With Human-Robot Interaction Applications
IEEE Robotics and Automation Letters, Oct 1, 2022

As artificial intelligence becomes an increasingly prevalent method of enhancing robotic capabilities, it is important to consider effective ways to train these learning pipelines and to leverage human expertise. Working towards these goals, a master-apprentice model is presented and is evaluated during a grasping task for effectiveness and human perception. The apprenticeship model augments self-supervised learning with learning by demonstration, efficiently using the human's time and expertise while facilitating future scalability to supervision of multiple robots; the human provides demonstrations via virtual reality when the robot cannot complete the task autonomously. Experimental results indicate that the robot learns a grasping task with the apprenticeship model faster than with a solely self-supervised approach and with fewer human interventions than a solely demonstration-based approach; 100% grasping success is obtained after 150 grasps with 19 demonstrations. Preliminary user studies evaluating workload, usability, and effectiveness of the system yield promising results for system scalability and deployability. They also suggest a tendency for users to overestimate the robot's skill and to generalize its capabilities, especially as learning improves.
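The master-apprentice loop described above, attempt autonomously and fall back to a human demonstration only on failure, can be sketched as a toy simulation. The skill model, learning rates, and trial count below are invented for illustration and are not the paper's learning pipeline.

```python
import random

def apprenticeship_loop(n_trials=200, seed=0):
    """Toy sketch of the master-apprentice training loop: the robot
    grasps autonomously when it can; the human demonstrates (via VR
    in the paper) only when the autonomous attempt fails."""
    rng = random.Random(seed)
    skill = 0.2          # hypothetical initial grasp success rate
    demos = successes = 0
    for _ in range(n_trials):
        if rng.random() < skill:   # autonomous attempt succeeds
            successes += 1
            skill = min(1.0, skill + 0.005)  # self-supervised gain
        else:                      # failure: request a demonstration
            demos += 1
            skill = min(1.0, skill + 0.02)   # demos teach faster
    return successes, demos
```

The design point is that demonstrations are requested only when needed, so human effort concentrates on the failures where it helps most.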
Assessment of Depth Data Acquisition Methods for Virtual Reality Mission Operations Support Tools
2022 IEEE Aerospace Conference (AERO), Mar 5, 2022

arXiv (Cornell University), Oct 8, 2021
Explainable AI techniques that describe agent reward functions can enhance human-robot collaboration in a variety of settings. One context where human understanding of agent reward functions is particularly beneficial is in the value alignment setting. In the value alignment context, an agent aims to infer a human's reward function through interaction so that it can assist the human with their tasks. If the human can understand where gaps exist in the agent's reward understanding, they will be able to teach more efficiently and effectively, leading to quicker human-agent team performance improvements. In order to support human collaborators in the value alignment setting and similar contexts, it is first important to understand the effectiveness of different reward explanation techniques in a variety of domains. In this paper, we introduce a categorization of information modalities for reward explanation techniques, propose a suite of assessment techniques for human reward understanding, and introduce four axes of domain complexity. We then propose an experiment to study the relative efficacy of a broad set of reward explanation techniques covering multiple modalities of information in a set of domains of varying complexity.
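One plausible shape for an assessment of human reward understanding is pairwise ranking agreement: the participant orders candidate trajectories from best to worst, and the score is the fraction of trajectory pairs ordered consistently with the true reward. This is an illustrative sketch only, not one of the paper's proposed assessment techniques.

```python
def reward_ranking_score(true_rewards, human_ranking):
    """Score a human's best-to-worst ranking of trajectories against
    the true reward: fraction of pairwise orderings that agree.
    `true_rewards` maps trajectory name -> true reward value;
    `human_ranking` lists trajectory names, best first."""
    n = len(human_ranking)
    agree = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = human_ranking[i], human_ranking[j]
            total += 1
            # The human ranked a above b; check the true reward agrees.
            if true_rewards[a] >= true_rewards[b]:
                agree += 1
    return agree / total
```

A perfectly aligned ranking scores 1.0 and a fully reversed one scores 0.0, giving a simple behavioral measure of how well an explanation conveyed the reward.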
arXiv (Cornell University), May 12, 2020

Integrated supply chain models provide an opportunity to optimize costs and production times in the supply chain while taking into consideration the many steps in the production and delivery process and the many constraints on time, shared resources, and throughput capabilities. In this work, mixed integer linear programming (MILP) models are developed to describe the manufacturing plant, consolidation transport, and distribution center components of the supply chain. Initial optimization results are obtained for each of these models. Additionally, an integrated model including a single plant, multiple consolidation transport vehicles, and a single distribution center is formulated and initial results are obtained. All models are implemented and optimized for their given objectives using a standard MILP solver. Initial optimization results suggest that it is intractable to solve problems of relevant scale using standard MILP solvers. The natural hierarchical structure in the supply chain problem lends itself well to application of decomposition techniques intended to speed up solution time. Exact techniques, such as Benders decomposition, are explored as a baseline. Classical Benders decomposition is applied to the manufacturing plant model, and results indicate that Benders decomposition on its own will not improve solve times for the manufacturing plant problem and instead leads to longer solve times for the problems that are solved. This is likely due to the large number of discrete variables in the manufacturing plant model. To improve upon solve times for the manufacturing plant model, an approximate decomposition technique is developed, applied to the plant model, and evaluated.
The approximate algorithm developed in this work decomposes the problem into a three-level hierarchical structure and integrates a heuristic approach at two of the three levels in order to solve abstracted versions of the larger problem and guide towards high-quality solutions. Results indicate that the approximate technique solves problems faster than those solved by the standard MILP solver and all solutions are within approximately 20% of the true optimal solutions. Additionally, the approximate technique can solve problems twice the size of those solved by the standard MILP solver within a one hour timeframe.
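The tractability gap the thesis describes, an exact solver that scales exponentially versus a heuristic decomposition that trades a bounded optimality gap for speed, can be illustrated on a toy parallel-machine scheduling problem (not the thesis' supply chain models). Here exhaustive search stands in for the standard MILP solver, and a greedy longest-processing-time rule stands in for the heuristic levels of the decomposition.

```python
from itertools import product

def makespan(jobs, assign, m):
    """Makespan of assigning job durations `jobs` to `m` machines."""
    loads = [0] * m
    for duration, machine in zip(jobs, assign):
        loads[machine] += duration
    return max(loads)

def exact_schedule(jobs, m):
    """Exhaustive search over all m**len(jobs) assignments: a stand-in
    for the exact MILP solver, with the same exponential scaling."""
    return min(makespan(jobs, a, m) for a in product(range(m), repeat=len(jobs)))

def lpt_schedule(jobs, m):
    """Greedy longest-processing-time heuristic: fast, near-optimal,
    analogous to the heuristic levels of the hierarchical scheme."""
    loads = [0] * m
    for duration in sorted(jobs, reverse=True):
        i = loads.index(min(loads))  # least-loaded machine
        loads[i] += duration
    return max(loads)
```

On jobs [5, 4, 3, 3, 3] with two machines, the exact makespan is 9 while the greedy rule returns 10: a small optimality gap bought at a fraction of the exact solver's cost, mirroring the roughly-20%-gap, faster-solve tradeoff reported above.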

Equity and Access in Algorithms, Mechanisms, and Optimization
Artificial intelligence (AI) stands to improve healthcare through innovative new systems ranging from diagnosis aids to patient tools. However, such "Health AI" systems are complicated and challenging to integrate into standing clinical practice. With advancing AI, regulations, practice, and policies must adapt to a wide range of new risks while experts learn to interact with complex automated systems. Even in the early stages of Health AI, risks and gaps are being identified, like severe underperformance of models for minority groups and catastrophic model failures when input data shift over time. In the face of such gaps, we find inspiration in aviation, a field that went from highly dangerous to largely safe. We draw three main lessons from aviation safety that can apply to Health AI: 1) Build regulatory feedback loops to learn from mistakes and improve practices, 2) Establish a culture of safety and openness where stakeholders have incentives to report failures and communicate across the healthcare system, and 3) Extensively train, retrain, and accredit experts for interacting with Health AI, especially to help address automation bias and foster trust. Finally, we discuss remaining limitations in Health AI with less guidance from aviation. CCS CONCEPTS • Social and professional topics → Government technology policy; • Applied computing → Life and medical sciences; • Computing methodologies → Philosophical/theoretical foundations of artificial intelligence.

Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
As robots become increasingly prevalent in our communities, aligning the values motivating their behavior with human values is critical. However, it is often difficult or impossible for humans, both expert and non-expert, to enumerate values comprehensively, accurately, and in forms that are readily usable for robot planning. Misspecification can lead to undesired, inefficient, or even dangerous behavior. In the value alignment problem, humans and robots work together to optimize human objectives, which are often represented as reward functions and which the robot can infer by observing human actions. In existing alignment approaches, no explicit feedback about this inference process is provided to the human. In this paper, we introduce an exploratory framework to address this problem, which we call Transparent Value Alignment (TVA). TVA suggests that techniques from explainable AI (XAI) be explicitly applied to provide humans with information about the robot's beliefs throughout learning, enabling efficient and effective human feedback. CCS CONCEPTS • Computing methodologies → Theory of mind; Reinforcement learning; Cognitive robotics; • Human-centered computing → HCI theory, concepts and models; Collaborative interaction.
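The inference step that TVA proposes wrapping with explanations can be sketched as Bayesian belief updating over candidate reward functions under a Boltzmann-rational (noisily optimal) model of the human. The candidate rewards, the beta parameter, and the function below are illustrative assumptions, not the paper's implementation.

```python
import math

def update_belief(prior, candidate_rewards, action, actions, beta=2.0):
    """One step of Bayesian reward inference: observe a human action
    and update the belief over candidate reward functions, assuming
    P(action | reward) is a Boltzmann (softmax) distribution.
    TVA's suggestion is that this posterior be *shown* to the human
    via XAI techniques rather than kept internal to the robot."""
    posterior = {}
    for name, reward in candidate_rewards.items():
        z = sum(math.exp(beta * reward[a]) for a in actions)
        likelihood = math.exp(beta * reward[action]) / z
        posterior[name] = prior[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}
```

Exposing this posterior lets the human see where the robot's reward understanding has gaps and target their next teaching action accordingly.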