
What to Do with the Singularity Paradox?

2013, Studies in Applied Philosophy, Epistemology and Rational Ethics

https://doi.org/10.1007/978-3-642-31674-6_30

Abstract

The paper begins with an introduction to the Singularity Paradox, the observation that "superintelligent machines are feared to be too dumb to possess commonsense." Ideas from leading researchers in philosophy, mathematics, economics, computer science and robotics on ways to address this paradox are reviewed and evaluated, and suggestions are made regarding the best way to handle the Singularity Paradox.

References (75)

  1. Anonymous, Hugo de Garis, Wikipedia.org (1999), http://en.wikipedia.org/wiki/Hugo_de_Garis
  2. Anonymous, Tech Luminaries Address Singularity, IEEE Spectrum. Special Report: The Singularity (June 2008), http://spectrum.ieee.org/computing/hardware/tech-luminaries-address-singularity
  3. Armstrong, S.: Chaining God: A qualitative approach to AI, trust and moral systems. New European Century (2007), http://www.neweuropeancentury.org/GodAI.pdf
  4. Asimov, I.: Runaround in Astounding Science Fiction (March 1942)
  5. Bancel, P., Nelson, R.: The GCP Event Experiment: Design, Analytical Methods, Results. Journal of Scientific Exploration 22(4) (2008)
  6. Benford, G.: "Me/Days", in Alien Flesh. Victor Gollancz, London (1988)
  7. Berglas, A.: Artificial Intelligence Will Kill Our Grandchildren (February 22, 2009), http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html
  8. Bishop, M.: Why Computers Can't Feel Pain. Minds and Machines 19(4), 507-516 (2009)
  9. Bostrom, N.: Ethical Issues in Advanced Artificial Intelligence. Review of Contemporary Philosophy 5, 66-73 (2006)
  10. Bostrom, N.: Oracle AI (2008), http://lesswrong.com/lw/qv/the_rhythm_of_disagreement/
  11. Bostrom, N., Yudkowsky, E.: The Ethics of Artificial Intelligence. In: Ramsey, W., Frankish, K. (eds.) Cambridge Handbook of Artificial Intelligence. Cambridge University Press (2011)
  12. Brin, D.: Lungfish (1987), http://www.davidbrin.com/lungfish1.html
  13. Bugaj, S., Goertzel, B.: Five Ethical Imperatives and their Implications for Human-AGI Interaction. Dynamical Psychology (2007), http://goertzel.org/dynapsyc/2007/Five_Ethical_Imperatives_svbedit.html
  14. Butler, S.: Darwin Among the Machines, To the Editor of The Press, Christchurch, New Zealand, June 13 (1863)
  15. Chalmers, D.: The Singularity: A Philosophical Analysis. Journal of Consciousness Studies 17, 7-65 (2010)
  16. Dennett, D.C.: Why You Can't Make a Computer That Feels Pain. Synthese 38(3), 415-456 (1978)
  17. Dietrich, E.: After the Humans are Gone. Journal of Experimental & Theoretical Artificial Intelligence 19(1), 55-67 (2007)
  18. Drexler, E.: Engines of Creation. Anchor Press (1986)
  19. Fox, J., Shulman, C.: Superintelligence Does Not Imply Benevolence. In: 8th European Conference on Computing and Philosophy, Munich, Germany, October 4-6 (2010)
  20. Freeman, T.: Using Compassion and Respect to Motivate an Artificial Intelligence (2009), http://www.fungible.com/respect/paper.html
  21. Garis, H.D.: The Artilect War. ETC publications (2005)
  22. Geraci, R.M.: Apocalyptic AI: Religion and the Promise of Artificial Intelligence. The Journal of the American Academy of Religion 76(1), 138-166 (2008)
  23. Geraci, R.M.: Religion for the Robots, Sightings. Martin Marty Center at the University of Chicago, June 14 (2007), http://divinity.uchicago.edu/martycenter/publications/~sightings/archive_2007/0614.shtml
  24. Geraci, R.M.: Spiritual Robots: Religion and Our Scientific View of the Natural World. Theology and Science 4(3), 229-246 (2006)
  25. Gibson, W.: Neuromancer. Ace Science Fiction, New York (1984)
  26. Goertzel, B.: The All-Seeing (A)I. Dynamic Psychology (2004), http://www.goertzel.org/dynapsyc
  27. Goertzel, B.: Apparent Limitations on the "AI Friendliness" and Related Concepts Imposed By the Complexity of the World (September 2006), http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf
  28. Goertzel, B.: Encouraging a Positive Transcension. Dynamical Psychology (2004), http://www.goertzel.org/dynapsyc/2004/PositiveTranscension.html
  29. Goertzel, B.: Thoughts on AI Morality. Dynamical Psychology (2002), http://www.goertzel.org/dynapsyc
  30. Good, I.J.: Speculations Concerning the First Ultraintelligent Machine. Advances in Computers 6, 31-88 (1966)
  31. Gordon-Spears, D.: Assuring the behavior of adaptive agents. In: Rouff, C.A., et al. (eds.) Agent Technology From a Formal Perspective, pp. 227-259. Kluwer (2004)
  32. Gordon-Spears, D.F.: Asimov's Laws: Current Progress. In: Hinchey, M.G., Rash, J.L., Truszkowski, W.F., Rouff, C.A., Gordon-Spears, D.F. (eds.) FAABS 2002. LNCS (LNAI), vol. 2699, pp. 257-259. Springer, Heidelberg (2003)
  33. Gordon, D.F.: Well-Behaved Borgs, Bolos, and Berserkers. In: 15th International Conference on Machine Learning (ICML 1998), San Francisco, CA (1998)
  34. Hall, J.S.: Ethics for Machines (2000), http://autogeny.org/ethics.html
  35. Hanson, R.: Economics of the Singularity. IEEE Spectrum 45(6), 45-50 (2008)
  36. Hanson, R.: Prefer Law to Values (October 10, 2009), http://www.overcomingbias.com/2009/10/prefer-law-to-values.html
  37. Hawking, S.: Science in the Next Millennium. In: The Second Millennium Evening at The White House, Washington, DC, March 6 (1998)
  38. Hibbard, B.: Critique of the SIAI Collective Volition Theory (December 2005), http://www.ssec.wisc.edu/~billh/g/SIAI_CV_critique.html
  39. Hibbard, B.: Critique of the SIAI Guidelines on Friendly AI (2003), http://www.ssec.wisc.edu/~billh/g/SIAI_critique.html
  40. Hibbard, B.: The Ethics and Politics of Super-Intelligent Machines (July 2005), http://www.ssec.wisc.edu/~billh/g/SI_ethics_politics.doc
  41. Hibbard, B.: Super-Intelligent Machines. Computer Graphics 35(1), 11-13 (2001)
  42. Horvitz, E., Selman, B.: Interim Report from the AAAI Presidential Panel on Long-Term AI Futures (August 2009), http://aaai.org/Organization/Panel/panel-note.pdf
  43. Joy, B.: Why the Future Doesn't Need Us. Wired Magazine 8(4) (April 2000)
  44. Kaczynski, T.: Industrial Society and Its Future. The New York Times, September 19 (1995)
  45. Kurzweil, R.: The Singularity is Near: When Humans Transcend Biology. Viking (2005)
  46. Legg, S.: Friendly AI is Bunk, Vetta Project (2006), http://commonsenseatheism.com/wp-content/uploads/2011/02/
  47. McCauley, L.: AI Armageddon and the Three Laws of Robotics. Ethics and Information Technology 9(2) (2007)
  48. Nagel, T.: What is it Like to be a Bat? The Philosophical Review LXXXIII(4), 435-450 (1974)
  49. Omohundro, S.M.: The Basic AI Drives. In: Wang, P., Goertzel, B., Franklin, S. (eds.) Proceedings of the First AGI Conference. Frontiers in Artificial Intelligence and Applications, vol. 171. IOS Press (February 2008)
  50. Omohundro, S.M.: The Nature of Self-Improving Artificial Intelligence, Singularity Summit, San Francisco, CA (2007)
  51. Pynadath, D.V., Tambe, M.: Revisiting Asimov's First Law: A Response to the Call to Arms. In: Meyer, J.-J.C., Tambe, M. (eds.) ATAL 2001. LNCS (LNAI), vol. 2333, p. 307. Springer, Heidelberg (2002)
  52. Sawyer, R.J.: Robot Ethics. Science 318, 1037 (2007)
  53. Shulman, C., Jonsson, H., Tarleton, N.: Machine Ethics and Superintelligence. In: 5th Asia-Pacific Computing & Philosophy Conference, Tokyo, Japan, October 1-2 (2009)
  54. Shulman, C., Tarleton, N., Jonsson, H.: Which Consequentialism? Machine Ethics and Moral Divergence. In: Asia-Pacific Conference on Computing and Philosophy (APCAP 2009), Tokyo, Japan, October 1-2 (2009)
  55. Solomonoff, R.J.: The Time Scale of Artificial Intelligence: Reflections on Social Effects. North-Holland Human Systems Management 5, 149-153 (1985)
  56. Sotala, K.: Evolved Altruism, Ethical Complexity, Anthropomorphic Trust. In: 7th European Conference on Computing and Philosophy (ECAP 2009), Barcelona, July 2-4 (2009)
  57. Turing, A.: Computing Machinery and Intelligence. Mind 59(236), 433-460 (1950)
  58. Turing, A.M.: Intelligent Machinery, A Heretical Theory. Philosophia Mathematica 4(3), 256-260 (1996)
  59. Turney, P.: Controlling Super-Intelligent Machines. Canadian Artificial Intelligence 27 (1991)
  60. Vinge, V.: The Coming Technological Singularity: How to Survive in the Post-Human Era. In: Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace, Cleveland, OH, March 30-31, pp. 11-22 (1993)
  61. Warwick, K.: Cyborg Morals, Cyborg Values, Cyborg Ethics. Ethics and Information Technology 5, 131-137 (2003)
  62. Waser, M.: Deriving a Safe Ethical Architecture for Intelligent Machines. In: 8th Conference on Computing and Philosophy (ECAP 2010), October 4-6 (2010)
  63. Waser, M.R.: Designing a Safe Motivational System for Intelligent Machines. In: The Third Conference on Artificial General Intelligence, Lugano, Switzerland, March 5-8 (2010)
  64. Waser, M.R.: Discovering the Foundations of a Universal System of Ethics as a Road to Safe Artificial Intelligence, AAAI Technical Report FS-08-04, Menlo Park, CA (2008)
  65. Weld, D.S., Etzioni, O.: The First Law of Robotics (a Call to Arms). In: National Conference on Artificial Intelligence, pp. 1042-1047 (1994)
  66. Yampolskiy, R.V.: AI-Complete CAPTCHAs as Zero Knowledge Proofs of Access to an Artificially Intelligent System. ISRN Artificial Intelligence, 271878 (2011)
  67. Yampolskiy, R.V.: Artificial Intelligence Safety Engineering: Why Machine Ethics is a Wrong Approach. In: Philosophy and Theory of Artificial Intelligence (PT-AI 2011), Thessaloniki, Greece, October 3-4 (2011)
  68. Yampolskiy, R.V.: Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Journal of Consciousness Studies (JCS) 19(1-2) (2012)
  69. Yudkowsky, E.: Artificial Intelligence as a Positive and Negative Factor in Global Risk. In: Bostrom, N., Cirkovic, M.M. (eds.) Global Catastrophic Risks, pp. 308-345. Oxford University Press, Oxford (2008)
  70. Yudkowsky, E.: What is Friendly AI? (2005), http://singinst.org/ourresearch/publications/what-is-friendly-ai.html
  71. Yudkowsky, E.S.: The AI-Box Experiment (2002), http://yudkowsky.net/singularity/aibox
  72. Yudkowsky, E.S.: Coherent Extrapolated Volition, Singularity Institute for Artificial Intelligence (May 2004), http://singinst.org/upload/CEV.html
  73. Yudkowsky, E.S.: Creating Friendly AI - The Analysis and Design of Benevolent Goal Architectures (2001), http://singinst.org/upload/CFAI.html
  74. Yudkowsky, E.S.: General Intelligence and Seed AI (2001), http://singinst.org/ourresearch/publications/GISAI/
  75. Yudkowsky, E.S.: Three Major Singularity Schools, Singularity Institute Blog (September 2007), http://yudkowsky.net/singularity/schools