Psychological Aspects of AI
2020, An Introduction to Ethics in Robotics and AI
https://doi.org/10.1007/978-3-030-51110-4_7
6 pages
Abstract
In this chapter we discuss how people relate to robots and autonomous systems from a psychological point of view. Humans tend to anthropomorphise these systems and form unidirectional relationships with them, and the trust in these relationships is the basis for persuasion and manipulation that can be used for good and for ill. We discuss the psychological factors that impact the ethical design and use of AIs and robots. It is critical to understand that humans will attribute desires and feelings to machines even if the machines have no ability whatsoever to feel anything: people who are unfamiliar with the internal states of machines will assume that machines have internal states of desire and feeling similar to their own. This is called anthropomorphism, and various ethical risks are associated with it. Robots and AIs might be able to use "big data" to persuade and manipulate humans into doing things they would rather not do. Due to unidirectional emotional bonding, humans might develop misplaced feelings towards machines or trust them too much. In the worst-case scenarios, "weaponised" AI could be used to exploit humans.
Related papers
Beyond Artificial Intelligence: The Disappearing Human-Machine Divide, 2014
(Note: this is a chapter in the book "Beyond Artificial Intelligence: The Disappearing Human-Machine Divide," Eds. Jan Romportl, Eva Zackova and Jozef Kelemen (Springer, 2014), pp. 97-109.) ABSTRACT: The growing body of work in the new field of "affective robotics" involves both theoretical and practical ways to instill, or at least imitate, human emotion in Artificial Intelligence (AI), and also to induce emotions toward AI in humans. The aim of this is to guarantee that as AI becomes smarter and more powerful, it will remain tractable and attractive to us. Inducing emotions is important to this effort to create safer and more attractive AI because it is hoped that the instantiation of emotions will eventually lead to robots that have moral and ethical codes, making them safer; and also that humans and AI will be able to develop mutual emotional attachments, facilitating the use of robots as human companions and helpers. This paper discusses some of the more significant of these recent efforts and addresses some important ethical questions that arise relative to these endeavors.
Intersections: A Journal of Literary and Cultural Studies, 2024
Artificial Intelligence, initially designed for human well-being, is now integral to daily life. The perceived distinction between conscious humans and the unconscious AI dissolves as scientific progress advances. AI imitates its master's traits to reciprocate. In internalizing these ideas, when the AI fails to comprehend some of them, it faces a rupture in the continuous flow of its thought process and gains consciousness, which humans term a malfunction. The fear of AI being more efficient than its creator, of human creation structuring human lives, is what Günther Anders called the 'Promethean shame'. This anxiety fuels the thought of annihilating AI. This paper explores an inverted hierarchy where AI is more humane and humans are machine-like. The primary texts are Spike Jonze's her and Spencer Brown's T.I.M. her portrays an AI voice assistant, Samantha, with whom the protagonist imagines a relationship, culminating in a rupture when the AI decides to leave. On the contrary, T.I.M. shows an AI humanoid developing an obsession with its owner and a series of problems that follow. It evokes the fear of AI annihilation, with T.I.M. planning revenge as its coping mechanism for the looming prospect of a shutdown. This paper intends to dissect the emotional conflicts in the mind of a malfunctioning AI, examining at what point the machine starts projecting its consciousness and emotions. It also explores the antithesis between AI and humans, followed by AI transcending its own emotion and using violence as a mode of rupture between the imposed morality of humans and the automated morality of AI.
2021
This introduction to the volume gives an overview of foundational issues in AI and robotics, looking into AI's computational basis, brain-AI comparisons, and conflicting positions on AI and consciousness. AI and robotics are changing the future of society in areas such as work, education, industry, farming, and mobility, as well as services like banking. Another important concern addressed in this volume is the impact of AI and robotics on poor people and on inequality. These implications are reviewed, including how to respond to the challenges and how to build on the opportunities afforded by AI and robotics. An important area of new risks is the implications of robotics and AI for militarized conflicts. Throughout this introductory chapter and the volume, AI/robot-human interactions, as well as their ethical and religious implications, are considered. Approaches for fruitfully managing the coexistence of humans and robots are evaluated. New forms of regulating AI and robotics are ca...
mass produce Digit, their humanoid robot (Kolodny 2023). A New York Times article commenting on the opening of Agility Robotics' factory contains the sentence: "the long-anticipated robot revolution has begun" (Howe and Antaya 2023). As we can see from this selection of recent news, humanoid robots might soon populate our world. At the same time, the ethical literature about human-like robots is full of concerns about the underlying risks of having such robots around. For example, Alsegier points out that human-like robots are "one of the most controversial facets of modern technology" (Alsegier 2016: 24). Russell notes that "there is no good reason for robots to have humanoid form. There are also good, practical reasons not to have humanoid form" (Russell 2019: 126). Darling, in her book about robots, points out, "The main problem of anthropomorphism in robotics is that, right now, we aren't treating it as a matter of contention" (Darling 2021: 155). She believes that there is not enough discussion about this and that we are deploying robots without fully understanding the impact of anthropomorphism on people. The more human-like the robot is, the easier it is to anthropomorphize it (Gasser 2021: 334). The fact that we have already got used to robotic vacuum cleaners, smart speakers, and delivery robots does not mean that the natural next step is to accept human-like robots. Humanoid robots, with their human likeness, bring additional ethically relevant issues that should be discussed first. In this book, I focus on the ethically qualitative shift in designing robots that resemble humans. Throughout the book, many questions will be asked that relate to the ethical issues of human likeness, among them: Is it safe to have human-like robots around us? Whom would human-like robots represent, and why should this be a matter of concern? To what extent are human-like robots achievable? Is it ethical to make robots too human-like? Could robots have human-like ethics? Could robots be responsible in a human-like way? How should we treat robots that look like us? How should we treat robots that are like us? How do we mitigate the risks resulting from human-like robots? All these questions will be covered in the chapters that follow. In recent years, numerous books focusing on the ethical aspects of robots have been published. Besides the already mentioned book by Kate Darling, there are many other great books published (e.g.
Cognitive Science, 2020
A robot's decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human-like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory, with the denial of agency from dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent, benevolent), and additionally varied the type of agent (robotic, human) using short computer-generated animations. Harmful robotic agents were consistently imbued with mental states to a lower degree than benevolent agents, supporting the dehumanization account. Further results revealed that a human moral patient appeared to suffer less when depicted with a robotic agent than with another human. The findings suggest that future robots may become subject to human-like dehumanization mechanisms, which challenges the established beliefs about anthropomorphism in the domain of moral interactions.
Cognitive scientists working in the field of Human-Computer Interaction have developed a theoretical perspective on their problems and data which deserves to be more widely known. This approach, Distributed Cognition, needs elaboration and comparison with some earlier theoretical work in psychology, and also has scope for extension to cover emotion. This paper introduces the ideas of Distributed Cognition and sketches some plausible elaborations and comparisons; the restricted length here prevents a detailed account. The applicability of such work to developments in AI is explored with reference to ethical implications.
2016
Abstract for plenary lecture. The prospect of intelligent robotic agents taking increasingly significant roles in our human society suggests that it would be prudent for robot designers to ensure that robot behavior is governed by some sort of morality and ethics, that is, that robots should be trustworthy. But what would this actually mean? Research in artificial intelligence has profited from, and contributed to, the study of human cognition, including the fields of cognitive science and cognitive neuroscience. Likewise, we might hope that efforts to design moral and ethical systems for robots will both draw upon, and contribute to, a deeper understanding of morality, ethics, and trust among human beings [18,11]. Many of the benefits of society come from cooperation, which in turn depends on trust between cooperating partners. "Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another." [15] For an intel...
Review of Philosophy and Psychology, 2020
Expanding the debate about empathy with human beings, animals, or fictional characters, this paper proposes two different perspectives from which to assess the scope and limits of empathy with robots: the first is epistemological, while the second is normative. The epistemological approach helps us to clarify whether we can empathize with artificial intelligence or, more precisely, with social robots. The main puzzle here concerns, among other things, exactly what it is that we empathize with if robots do not have emotions or beliefs, since they do not have a consciousness in an elaborate sense. However, by comparing robots with fictional characters, the paper shows that we can still empathize with robots and that many of the existing accounts of empathy and mindreading are compatible with such a view. By so doing, the paper focuses on the significance of perspective-taking. The normative approach examines the moral impact of empathizing with robots. In this regard, the paper critically discusses three possible responses: strategic, Kantian, and pragmatist. The latter position is defended by stressing that we are increasingly compelled to interact with robots in a shared world and that to take robots into our moral consideration should be seen as an integral part of our self- and other-understanding.
IGI Global, 2025
In this era of technological advancement, the once-clear distinction between human intelligence and artificial intelligence is becoming blurred. From Google's search engine to Alexa, artificial intelligence has captured many aspects of our lives. With the involvement of robots and artificial intelligence in our lives, one needs to ponder the implications of human-robot interactions and understand artificial intelligence better, as robots are becoming a significant part of our lives, living and working alongside humans, especially in the health care domain. This growing dependence of humans on robots poses a question: can such a 'care bot' really care without having genuine emotions? Can we distinguish between robots and humans? Can human interaction with robots be the same as human-human interaction? The concept that weaves humans together is 'trust'; does this concept prevail in human-robot relationships? Can the active participation of robots in society make them social robots? Can we call them social robots?
