
The Limits of Machine Ethics

Religions

https://doi.org/10.3390/REL8050100

Abstract

Machine ethics has established itself as a new discipline that studies how to endow autonomous devices with ethical behavior. This paper provides a general framework for classifying the different approaches currently being explored in the field of machine ethics and introduces considerations that are missing from the current debate. In particular, law-based codes implemented as external filters for action (which we have named filtered decision making) are proposed as the basis for future developments. The emergence of values as guides for action is discussed, and personal language, together with subjectivity, is identified as a necessary condition for this development. Finally, utilitarian approaches are examined, and the importance of objective expression as a requisite for their implementation is stressed. Only values expressed by the programmer in a public language, that is, separate from subjective considerations, can be evolved in a learning machine; this establishes the limits of present-day machine ethics.
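The filtered decision making named in the abstract can be given a minimal sketch: a law-based code sits outside the machine's own decision procedure and vetoes candidate actions before any are selected. The rules, actions, and utilities below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of "filtered decision making": an external,
# law-based filter removes prohibited actions; the machine then
# optimizes only over what remains.

def filtered_decision(candidates, utility, prohibitions):
    """Return the highest-utility action that no prohibition rules out."""
    permitted = [a for a in candidates if not any(p(a) for p in prohibitions)]
    if not permitted:
        return None  # every action is forbidden; defer to a human operator
    return max(permitted, key=utility)

# Illustrative example: a delivery robot at a red light.
prohibitions = [lambda a: a == "cross_red_light"]   # legal constraint
utility = {"cross_red_light": 10, "wait": 3, "detour": 5}.get
choice = filtered_decision(["cross_red_light", "wait", "detour"],
                           utility, prohibitions)
print(choice)  # the illegal option is filtered out despite its high utility
```

The design point is that the filter is external to the optimizer: the utility function never needs to encode the law, which is the separation the paper's classification turns on.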

References (32)

  1. Allen, Colin, Iva Smit, and Wendell Wallach. 2005. Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology 7: 149-55.
  2. Arkin, Ronald. 2009. Governing Lethal Behavior in Autonomous Robots. Boca Raton: CRC Press.
  3. Asimov, Isaac. 1950. I, Robot. New York: Gnome Press.
  4. Avraham, Ronen, Kyle D. Logue, and Daniel Schwarcz. 2012. Understanding Insurance Anti-Discrimination Laws. Available online: http://repository.law.umich.edu/cgi/viewcontent.cgi?article=1163&context=law_econ_current (accessed on 19 May 2017).
  5. Campbell, Robert L., John Chambers Christopher, and Mark H. Bickhard. 2002. Self and values: An interactivist foundation for moral development. Theory & Psychology 12: 795-823.
  6. Casey, Bryan James. 2017. Amoral machines, or: How roboticists can learn to stop worrying and love the law. Available online: https://ssrn.com/abstract=2923040 (accessed on 1 May 2017).
  7. Danielson, Peter. 1998. Modeling Rationality, Morality, and Evolution. Oxford: Oxford University Press on Demand.
  8. Floridi, Luciano. 2005. Information ethics, its nature and scope. ACM SIGCAS Computers and Society 35: 3.
  9. Greene, J.D., R.B. Sommerville, L.E. Nystrom, J.M. Darley, and J.D. Cohen. 2001. An fMRI investigation of emotional engagement in moral judgment. Science 293: 2105-8.
  10. Handelsman, Mitchell, Samuel Knapp, and Michael C. Gottlieb. 2009. Positive ethics: Themes and variations. In Oxford Handbook of Positive Psychology. Oxford: Oxford University Press, pp. 105-13.
  11. Hardy, Sam A., and Gustavo Carlo. 2011. Moral identity: What is it, how does it develop, and is it linked to moral action? Child Development Perspectives 5: 212-18.
  12. Head, Simon. 2014. Mindless: Why Smarter Machines are Making Dumber Humans. New York: Basic Books.
  13. Honderich, Ted. 2005. The Oxford Companion to Philosophy. Oxford: Oxford University Press.
  14. Howard, Ronald Arthur, and Clinton D. Korver. 2008. Ethics for the Real World: Creating a Personal Code to Guide Decisions in Work and Life. Cambridge: Harvard Business Press.
  15. International Federation of Robotics (IFR). 2016. World Robotics 2016. Frankfurt: International Federation of Robotics.
  16. Jackson, Frank. 1991. Decision-theoretic consequentialism and the nearest and dearest objection. Ethics 101: 461-82.
  17. Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Macmillan.
  18. Kant, Immanuel, and Thomas Kingsmill Abbott. 2004. Critique of Practical Reason. Miami: Courier Corporation.
  19. Kuipers, Benjamin. 2008. Drinking from the firehose of experience. Artificial Intelligence in Medicine 44: 155-70.
  20. Kurzweil, Ray. 2012. How to Create a Mind: The Secret of Human Thought Revealed. London: Penguin.
  21. Lapsley, Daniel K., and Darcia Narvaez. 2006. Character education. In Handbook of Child Psychology. New York: John Wiley & Sons.
  22. Leach, Javier. 2011. Mathematics and Religion: Our Languages of Sign and Symbol. West Conshohocken: Templeton Foundation Press.
  23. Lichtenberg, Judith. 2010. Negative duties, positive duties, and the "new harms". Ethics 120: 557-78.
  24. Luxton, David D. 2014. Recommendations for the ethical use and design of artificial intelligent care providers. Artificial Intelligence in Medicine 62: 1-10.
  25. Martin, Jack. 2004. Self-regulated learning, social cognitive theory, and agency. Educational Psychologist 39: 135-45.
  26. Powers, Thomas M. 2006. Prospects for a Kantian machine. IEEE Intelligent Systems 21: 46-51.
  27. Rosenbrock, Howard H. 1990. Machines with a Purpose. Oxford: Oxford University Press.
  28. Slote, Michael A. 1985. Common-Sense Morality and Consequentialism. Abingdon-on-Thames: Routledge & Kegan.
  29. Van de Voort, Marlies, Wolter Pieters, and Luca Consoli. 2015. Refining the ethics of computer-made decisions: A classification of moral mediation by ubiquitous machines. Ethics and Information Technology 17: 41-56.
  30. Veruggio, Gianmarco, Fiorella Operto, and George Bekey. 2016. Roboethics: Social and ethical implications. In Springer Handbook of Robotics. Berlin and Heidelberg: Springer, pp. 2135-60.
  31. Wilson, Edward O. 1975. Sociobiology: The New Synthesis. Cambridge: Belknap Press.
  32. Yampolskiy, Roman, and Joshua Fox. 2013. Safety engineering for artificial general intelligence. Topoi 32: 217-26.