
Rethinking Turing's test

2013, The Journal of Philosophy

Abstract

The Turing test is one of the philosophical foundations of Artificial Intelligence. In the sixty years since Alan Turing's "Computing Machinery and Intelligence" appeared in Mind, there have been two widely accepted interpretations of the test: the canonical behaviorist and the rival inductive (or epistemic) accounts. These accounts are based on Turing's 1950 Mind paper, and few commentators know that Turing described two other versions of his imitation game. These versions are scarcely mentioned in the voluminous literature on the Turing test but are (I shall argue) essential to understanding the test. Turing described the first version in 1948, the year he moved to Manchester to run the Computing Machine Laboratory at the university. Earlier, in June 1948, the world's first electronic stored-program digital computer, the Manchester "Baby", ran its first program in the Laboratory. During 1948-49, the Baby was expanded into a much more substantial machine, and in May 1949 the Electronic Delay Storage Automatic Calculator (EDSAC), at the Mathematical Laboratory of the University of Cambridge, became the second electronic stored-program computer to function. Further developments at Manchester led to the first commercially available electronic digital computer, in February 1951. In this way, in the late 1940s and early 1950s, the abstract question whether machine intelligence is possible came to be focused on a particular form of machine: could machinery like the Baby and the EDSAC, if given additional high-speed memory and enhanced processing speed, be said to think? To investigate this question, Turing required a "criterion for 'thinking'"; the "imitation game" was to provide this. He described the first version of the game in a 1948 report on machine learning entitled "Intelligent Machinery".

* I am indebted to Jack Copeland for comments on an earlier draft of this paper.

References

  1. Andrew Hodges, Alan Turing: The Enigma (London: Vintage, 1992), p. 415. Commentators who take this line include Judith Genova, "Turing's Sexual Guessing Game," Social Epistemology, viii, 4 (1994): 313-26; Patrick Hayes and Kenneth Ford, "Turing Test Considered Harmful," IJCAI-95 Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, Montreal, Quebec, August 20-25, 1995, vol. I (Morgan Kaufmann, 1995), pp. 972-77;
  2. Susan G. Sterrett, "Turing's Two Tests for Intelligence," in Moor, ed., The Turing Test: The Elusive Standard of Artificial Intelligence, pp. 79-97;
  3. Douglas B. Lenat, "The Voice of the Turtle: Whatever Happened to AI?," AI Magazine, xxix, 2 (Summer 2008): 11-22; Lenat, "Building a Machine Smart Enough to Pass the Turing Test: Could We, Should We, Will We?," in Robert Epstein, Gary Roberts, and Grace Beber, eds., Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (Berlin: Springer, 2008), pp. 261-82.
  4. Turing, "Computing Machinery and Intelligence," p. 434; Copeland points this out ("The Turing Test," p. 9).
  5. Ayse Pinar Saygin, Ilyas Cicekli, and Varol Akman, "Turing Test: 50 Years Later," in Moor, ed., The Turing Test: The Elusive Standard of Artificial Intelligence, pp. 23-78. See pp. 26, 69.
  6. Block, "The Mind as the Software of the Brain," p. 378. Block claims that Turing "was willing to settle for a 'sufficient condition' formulation of his behaviorist definition of intelligence" ("Psychologism and Behaviorism," p. 15). See note 32. Block, "Psychologism and Behaviorism," p. 18.
  7. Block, "The Mind as the Software of the Brain," p. 381. In "Psychologism and Behaviorism" the hypothetical machine is the Aunt Bertha machine.
  8. Block, "Psychologism and Behaviorism," p. 21; "The Mind as the Software of the Brain," p. 383.
  9. Block, "The Mind as the Software of the Brain," p. 381. In "Psychologism and Behaviorism" Block also argues that his hypothetical machine may be nomologically possible (given a fixed-length Turing test).
  10. Block, "Psychologism and Behaviorism," p. 30.
  11. Copeland, "The Turing Test," pp. 14-15.
  12. Turing, "Can Digital Computers Think?," p. 486; Moor, "The Status and Future of the Turing Test," p. 203.
  13. Rodney A. Brooks, "Intelligence without Reason," in Luc Steels and Brooks, eds., The Artificial Life Route to Artificial Intelligence (Hillsdale, NJ: Lawrence Erlbaum, 1995), pp. 25-81. See p. 57.
  14. Jordan B. Pollack, "Mindless Intelligence," IEEE Intelligent Systems, xxi, 3 (May/June 2006): 50-56. See p. 51.
  15. See Proudfoot, "Anthropomorphism and AI: Turing's Much Misunderstood Imitation Game," Artificial Intelligence, clxxv, 5-6 (April 2011): 950-57; and Proudfoot, "The Implications of an Externalist Theory of Rule-Following Behaviour for Robot Cognition," Minds and Machines, xiv, 3 (August 2004): 283-308.
  16. See Cynthia Breazeal, Designing Sociable Robots (Cambridge: MIT, 2002); Breazeal, "Emotive Qualities in Lip-Synchronized Robot Speech," Advanced Robotics, xvii, 2 (2003): 97-113; and Breazeal, "Role of Expressive Behaviour for Robots that Learn from People," Philosophical Transactions of the Royal Society B, ccclxiv, 1535 (Dec. 12, 2009): 3527-38. AI researchers deliberately exploit the tendency to anthropomorphize, for example to aid human-computer interaction (on the efficacy of anthropomorphism, see Li Gong, "How Social is Social Responses to Computers? The Function of the Degree of Anthropomorphism in Computer Representations," Computers in Human Behavior, xxiv, 4 (July 2008): 1494-509). Kismet is constructed so that untrained human observers believe that they understand its "facial" and "bodily" displays, and respond to its behavior as to ordinary human social signals (see Breazeal, "Toward Sociable Robots," Robotics and Autonomous Systems, xlii, 3-4 (Mar. 31, 2003): 167-75; and Breazeal, "Emotion and Sociable Humanoid Robots," International Journal of Human-Computer Studies, lix, 1-2 (July 2003): 119-55). The grand goal is to build a "socially intelligent" robot (see Kerstin Dautenhahn, "Socially Intelligent Robots: Dimensions of Human-Robot Interaction," Philosophical Transactions of the Royal Society B, ccclxii, 1480 (2007): 679-704), or even Turing's "child-machine" (Turing, "Computing Machinery and Intelligence"). Breazeal and Paul Fitzpatrick, "That Certain Look: Social Amplification of Animate Vision," in Proceedings of the AAAI Fall Symposium, Socially Intelligent Agents: The Human in the Loop (2000), http://people.csail.mit.edu/paulfitz/publications.shtml; Breazeal, "Affective Interaction between Humans and Robots," in Jozef Kelemen and Petr Sosík, eds., Advances in Artificial Life: 6th European Conference, ECAL 2001 (Berlin: Springer-Verlag, 2001), pp. 582-91. Breazeal and Juan Velásquez, "Toward Teaching a Robot 'Infant' using Emotive Communication Acts," in Proceedings of 1998 Simulation of Adaptive Behavior, Workshop on Socially Situated Intelligence (1998), http://www.ai.mit.edu/projects/sociable/publications.html.
  17. Breazeal, "Affective Interaction between Humans and Robots," p. 585; "Toward Sociable Robots," p. 147. Breazeal and Fitzpatrick, "That Certain Look: Social Amplification of Animate Vision"; Breazeal, "Toward Sociable Robots," p. 172. On "expressive" face robots, see Proudfoot, "Can a Robot Smile? Wittgenstein on Facial Expression," forthcoming in Timothy P. Racine and Kathleen L. Slaney, eds., A Wittgensteinian Perspective on the Use of Conceptual Analysis in Psychology (New York: Palgrave Macmillan, 2013), pp. 172-84. Hugh Loebner adopted the standard three-player form of the imitation game in 2004 (Loebner, "How to Hold a Turing Test Contest," in Epstein, Roberts, and Beber, eds., Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer, pp. 173-79).
  18. As described in Turing et al., "Can Automatic Calculating Machines Be Said to Think?". Copeland ("The Turing Test") points this out. On strategic judging in the 2000 contest, see Copeland, "The Turing Test," p. 7;
  19. and Moor, "The Status and Future of the Turing Test," p. 204. For the outcomes of the 2003 contest, see http://www.loebner.net/Prizef/loebner-prize.html. Recent examples of this claim are found in William J. Rapaport, "How to Pass a Turing Test," in Moor, ed., The Turing Test: The Elusive Standard of Artificial Intelligence, pp. 161-84; and Shieber, "The Turing Test as Interactive Proof."
  20. Robert M. French, "The Turing Test: The First 50 Years," Trends in Cognitive Sciences, iv, 3 (Mar. 1, 2000): 115-22. See p. 116.