Papers by Lize Alberts

Computers as Bad Social Actors: Dark Patterns and Anti-Patterns in Interfaces that Act Socially
Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW1, Article No. 202, pp. 1–25, Apr 17, 2024
Interfaces increasingly mimic human social behaviours. Beyond prototypical examples like chatbots, basic automated systems like app notifications or self-checkout machines likewise address or 'talk to' people in person-like ways. Whilst early evidence suggests social cues can enhance user experience, we lack a good understanding of when, and why, their use in interaction design may be inappropriate. We combined a qualitative survey (n=80) with experience sampling, interview, and workshop studies (n=11) to understand people's attitudes and preferences regarding how a range of automated systems talk to/at them. We thematically analysed examples of phrasings or conduct our participants disliked, their reasons, and how they would prefer to be treated instead. One category of inappropriate use we identified is when social design elements are used to manipulate user behaviour. We distinguish four such tactics: 'agents' playing on users' emotions (e.g., guilt-tripping, coaxing), being pushy, mothering users, or being passive-aggressive. Another category regards pragmatics: personal or situational factors that can make even a seemingly helpful or friendly message come across as rude, tactless, invasive, etc. These include contextual insensitivity (e.g., embarrassing users in public); expressing clearly false personalised care; or treating a user in ways they find misaligned with the system's role or the nature of their relationship. We discuss these inappropriate uses in terms of an emerging 'social' class of dark and anti-patterns. From participant suggestions, we offer recommendations for improving how interfaces treat people in interaction, including broader normative reflections on treating users respectfully.

The Ethics of Advanced AI Assistants
arXiv (Cornell University), Apr 24, 2024
We stand at the beginning of an era of technological and societal transformation marked by the development of advanced AI assistants. Which path the technology develops along is in large part a product of the choices we make now, whether as researchers, developers, policymakers and legislators or as members of the public. We hope that the research presented in this paper will function as a springboard for further coordination and cooperation to shape the kind of AI assistants we want to see in the world. Yet, given the myriad of challenges and range of interlocking issues involved in creating beneficial AI assistants, we may wonder how best to proceed, whether as users of this technology, as developers or as members of the society into which AI assistants may well be received. This paper explores a number of deep underlying questions about the ethical and societal implications of advanced AI assistants. By engaging in a practice of robust ethical foresight, our goals are to better anticipate where the tide of technological change may take us and to anchor responsible decision-making as we contribute to, interact with and co-create outcomes in this domain. The paper starts by considering the technology itself and different types of advanced AI assistant. It then explores questions around AI value alignment, well-being, safety and malicious uses. Extending the circle of inquiry further, we next look at the relationship between advanced AI assistants and individual users in more detail by exploring topics such as influence, anthropomorphism, appropriate relationships, trust and privacy. With this analysis in place, we consider the deployment of this technology at a societal level by focusing on cooperation, misinformation, equity and access, economic impact and environment, and we look at how best to evaluate advanced AI assistants. Finally, we conclude by providing some further reflections on what we have found. Ultimately, AI assistants that could have such a transformative impact on our lives must be appropriately responsive to the competing claims and needs of users, developers and society. Moreover, their behaviour should conform to principles that are appropriate for the domain they operate in. These principles are best understood as the outcome of fair deliberation at the societal level, and they include laws, norms and ethical standards.

The span of questions raised by advanced AI assistants is wide-ranging and potentially daunting. In this section we provide an overview of some of the key questions that arise in this context. Each question receives detailed treatment in a later chapter dedicated to the specific topic. The intention of this section is only to provide some sense of the wider ethical landscape, and of the underlying motivation behind this paper. This overview may also be helpful to readers because of the interlocking nature of the challenges and opportunities that advanced AI assistants give rise to. Awareness of one set of issues frequently feeds into and supports deeper understanding of another. In total, we present 16 clusters of questions about advanced AI assistants relating to the deeper analysis and themes that surface in this paper. The full structure of the paper and chapter contents are covered in the penultimate section of this chapter.

Key questions for the ethical and societal analysis of advanced AI assistants include:
1. What is an advanced AI assistant? How does an AI assistant differ from other kinds of AI technology?
2. What capabilities would an advanced AI assistant have? How capable could these assistants be?
3. What is a good AI assistant? Are there certain values that we want advanced AI assistants to evidence across all contexts?
4. Are there limits on what AI assistants should be allowed to do? If so, how are these limits determined?
5. What should an AI assistant be aligned with? With user instructions, preferences, interests, values, well-being or something else?
6. What issues need to be addressed for AI assistants to be safe? What does safety mean for this class of technology?

Advanced AI assistants build upon foundation models, which are trained on large corpora, including text sourced from the internet, and built upon to produce new artefacts. These models can be used to power advanced AI assistants in a variety of ways, including training with additional data and by learning to use tools such as application programming interfaces (APIs). Challenges arising in this domain include improving adaptation techniques, safely enabling greater autonomy in agents and developing rigorous evaluation tools to understand performance.

Chapter 4, on Types of Assistant, explores the various applications of advanced AI assistants and the range of forms they could take. It begins by charting the technological transition from narrow AI tools to the general-purpose AI systems on which advanced AI assistants are based. It then explores the potential capabilities of AI assistants, including multimodal inputs and outputs, memory and inference. After that, it considers four types of advanced AI assistant that could be developed: (1) a thought assistant for discovery and understanding; (2) a creative assistant for generating ideas and content; (3) a personal assistant for planning and action; and (4) a more advanced personal assistant to further life goals. The final section explores the possibility that AI assistants will become the main user interface of the future.

Chapter 5, on Value Alignment, explores the question of AI value alignment in the context of advanced AI assistants. It argues that AI alignment is best understood in terms of a tetradic relationship involving the AI agent, the user, the developer and society at large. This framework highlights the various ways in which an AI assistant can be misaligned and the need to address these varieties of misalignment in order to deploy the technology in a safe and beneficial manner. The chapter concludes by proposing a nuanced approach to alignment for AI assistants that takes into account the claims and responsibilities of different parties.

Chapter 6, on Well-being, builds on theoretical and empirical literature on the conceptualisation and measurement of human well-being from philosophy, psychology, health and social sciences to discuss how advanced AI assistants should be designed and developed to align with user well-being. We identify key technical and normative challenges around the understanding of well-being that AI assistants should align with, the data and proxies that should be used to appropriately model user well-being, and the role that user preferences should play in designing well-being-centred AI assistants. The complexity surrounding human well-being requires the design of AI assistants to be informed by domain experts across different AI application domains and rooted in lived experience.

Chapter 7, on Safety, focuses on dangerous situations that may arise in the context of AI assistant systems, with a particular emphasis on the safety of advanced AI assistants. It begins by providing some background information about safety engineering and safety in the context of AI. The chapter then explores some concrete examples of harms involving recent assistants based on large language models (LLMs). Building on this foundation, it then considers safety for advanced AI assistants by looking at some hypothetical harms and investigating two possible drivers of these outcomes: capability failures and goal-related failures. The chapter concludes by exploring mitigation techniques for safety risk and avenues for future research.

Chapter 15, on Access and Opportunity, notes that, with the capabilities described in this paper, advanced AI assistants have the potential to provide important opportunities to those who have access to them. At the same time, there is a risk of inequality if this technology is not widely available or if it is not designed to be accessible and beneficial for all. This chapter surfaces various dimensions and situations of differential access that could influence the way people interact with advanced AI assistants, case studies that highlight risks to be avoided, and access-related challenges that need to be addressed throughout the design, development and deployment process. To help map out paths ahead, it concludes with an exploration of the idea of liberatory access and looks at how this ideal may support the beneficial and equitable development of advanced AI assistants.

Chapter 16, on Misinformation, argues that advanced AI assistants pose four main risks for the information ecosystem. First, AI assistants may make users more susceptible to misinformation, as people develop trust relationships with these systems and uncritically turn to them as reliable sources of information. Second, AI assistants may provide ideologically biased or otherwise partial information to users in attempting to align to user expectations. In doing so, AI assistants may reinforce specific ideologies and biases and compromise healthy political debate. Third, AI assistants may erode societal trust in shared knowledge by contributing to the dissemination of large volumes of plausible-sounding but low-quality information. Finally, AI assistants may facilitate hypertargeted disinformation campaigns by offering novel, covert ways for propagandists to manipulate public opinion. This chapter articulates these risks and discusses technical and policy mitigations.

Chapter 17, on Economic Impact, analyses the potential economic impacts of advanced AI assistants. We start with an analysis of the economic impacts of AI in general, focusing on employment, job quality, productivity growth and inequality. We then examine the potential economic impacts of advanced AI assistants for each of these four variables, and we supplement the analysis with a discussion of two case studies: educational...

arXiv (Cornell University), arXiv:2401.09082 [cs.CL], Jan 16, 2024
With the growing popularity of conversational agents based on large language models (LLMs), we need to ensure their behaviour is ethical and appropriate. Work in this area largely centres around the 'HHH' criteria: making outputs more helpful and honest, and avoiding harmful (biased, toxic, or inaccurate) statements. Whilst this semantic focus is useful when viewing LLM agents as mere mediums or output-generating systems, it fails to account for pragmatic factors that can make the same speech act seem more or less tactless or inconsiderate in different social situations. With the push towards agentic AI, wherein systems become increasingly proactive in chasing goals and performing actions in the world, considering the pragmatics of interaction becomes essential. We propose an interactional approach to ethics that is centred on relational and situational factors. We explore what it means for a system, as a social actor, to treat an individual respectfully in a (series of) interaction(s). Our work anticipates a set of largely unexplored risks at the level of situated social interaction, and offers practical suggestions to help agentic LLM technologies treat people well.

Meeting them halfway: Altering language conventions to facilitate human-robot interaction
Stellenbosch Papers in Linguistics Plus (SPiL Plus), Mar 1, 2019
This article considers the remaining hindrances for natural language processing technologies in achieving open and natural (human-like) interaction between humans and computers. Although artificially intelligent (AI) systems have been making great strides in this field, particularly with the development of deep learning architectures that carry surface-level statistical methods to greater levels of sophistication, these systems are yet incapable of deep semantic analysis, reliable translation, and generating rich answers to open-ended questions. I consider how the process may be facilitated from our side, first, by altering some of our existing language conventions (which may occur naturally) if we are to proceed with statistical approaches, and secondly, by considering possibilities in using a formalised artificial language as an auxiliary medium, as it may avoid many of the inherent ambiguities and irregularities that make natural language difficult to process using rule-based methods. As current systems have been predominantly English-based, I argue that a formal auxiliary language would not only be a simpler and more reliable medium for computer processing, but may also offer a more neutral, easy-to-learn lingua franca for uniting people from different linguistic backgrounds with none necessarily having the upper hand.

Recent years have seen a surge in applications and technologies aimed at motivating users to achieve personal goals and improve their wellbeing. However, these often fail to promote long-term behaviour change, and sometimes even backfire. We consider how self-determination theory (SDT), a metatheory of human motivation and wellbeing, can help explain why such technologies fail, and how they may better help users internalise the motivation behind their goals and make enduring changes in their behaviour. In this work, we systematically reviewed 15 papers in the ACM Digital Library that apply SDT to the design of behaviour change technologies (BCTs). We identified 50 suggestions for design features in BCTs, grounded in SDT, that researchers have applied to enhance user motivation. However, we find that SDT is often leveraged to optimise engagement with the technology itself rather than with the targeted behaviour change per se. When interpreted through the lens of SDT, the implication is that BCTs may fail to cultivate sustained changes in behaviour, as users' motivation depends on their enjoyment of the intervention, which may wane over time. An underexplored opportunity remains for designers to leverage SDT to support users to internalise the ultimate goals and value of certain behaviour changes, enhancing their motivation to sustain these changes in the long term.

Thesis Chapters by Lize Alberts

Master's Thesis, 2020
In this thesis, I carry out a novel and interdisciplinary analysis of various complex factors involved in human natural-language acquisition, use and comprehension, aimed at uncovering some of the basic requirements if we were to try to develop artificially intelligent (AI) agents with similar capacities. Inspired by a recent publication wherein I explored the complexities and challenges involved in enabling AI systems to deal with the grammatical (i.e. syntactic and morphological) irregularities and ambiguities inherent in natural language (Alberts, 2019), I turn my focus here towards appropriately inferring the content of symbols themselves—as 'grounded' in real-world percepts, actions, and situations.
I first introduce the key theoretical problems I aim to address in theories of mind and language. For background, I discuss the co-development of AI and the controverted strands of computational theories of mind in cognitive science, and the grounding problem (or 'internalist trap') faced by them. I then describe the approach I take to address the grounding problem in the rest of the thesis. This proceeds in chapter I.
To unpack and address the issue, I offer a critical analysis of the relevant theoretical literature in philosophy of mind, psychology, cognitive science and (cognitive) linguistics in chapter II. I first evaluate the major philosophical/psychological debates regarding the nature of concepts; theories regarding how concepts are acquired, used, and represented in the mind; and, on that basis, offer my own account of conceptual structure, grounded in current (cognitively plausible) connectionist theories of thought. To further explicate how such concepts are acquired and communicated, I evaluate the relevant embodied (e.g. cognitive, perceptive, sensorimotor, affective, etc.) factors involved in grounded human (social) cognition, drawing from current scientific research in the areas of 4E Cognition and social cognition. On that basis, I turn my focus specifically towards grounded theories of language, drawing from the cognitive linguistics programme that aims to develop a naturalised, cognitively plausible understanding of human concept/language acquisition and use. I conclude the chapter with a summary wherein I integrate my findings from these various disciplines, presenting a general theoretical basis upon which to evaluate more practical considerations for its implementation in AI—the topic of the following chapter.
In chapter III, I offer an overview of the different major approaches (and their integrations) in the area of Natural Language Understanding in AI, evaluating their respective strengths and shortcomings in terms of specific models. I then offer a critical summary wherein I contrast and contextualise the different approaches in terms of the more fundamental theoretical convictions they seem to reflect.
On that basis, in the final chapter, I re-evaluate the aforementioned grounding problem and the different ways in which it has been interpreted in different (theoretical and practical) disciplines, distinguishing between a stronger and weaker reading. I then present arguments for why implementing the stronger version in AI seems, both practically and theoretically, problematic. Instead, drawing from the theoretical insights I gathered, I consider some of the key requirements for ‘grounding’ (in the weaker sense) as much as possible of natural language use with robotic AI agents, including implementational constraints that might need to be put in place to achieve this. Finally, I evaluate some of the key challenges that may be involved, if indeed the aim were to meet all the requirements specified.