History and Philosophy of Neural Networks
Abstract
This chapter conceives the history of neural networks as emerging from two millennia of attempts to rationalise and formalise the operation of mind. It begins with a brief review of early classical conceptions of the soul, seating the mind in the heart; then discusses the subsequent Cartesian split of mind and body, before moving to analyse in more depth the twentieth-century hegemony identifying mind with brain; the identity that gave birth to the formal abstractions of brain and intelligence we know as 'neural networks'. The chapter concludes by analysing this identity (of intelligence and mind with mere abstractions of neural behaviour) by reviewing various philosophical critiques of formal connectionist explanations of 'human understanding', 'mathematical insight' and 'consciousness'; critiques which, if correct, in an echo of Aristotelian insight, suggest that cognition may be more profitably understood not just as a result of [mere abstractions of] neural firings, but as a consequence of real, embodied neural behaviour, emerging in a brain, seated in a body, embedded in a culture and rooted in our world; the so-called 4Es approach to cognitive science: the Embodied, Embedded, Enactive, and Ecological conceptions of mind.
Contents
1. Introduction: the body and the brain
2. First steps towards modelling the brain
3. Learning: the optimisation of network structure
4. The fall and rise of connectionism
5. Hopfield networks
6. The 'adaptive resonance theory' classifier
7. The Kohonen 'feature-map'
8. The multi-layer perceptron
9. Radial basis function networks
10. Recent developments in neural networks
11. "What artificial neural networks cannot do .."
12. Conclusions and perspectives
Glossary
Nomenclature
References
Biographical sketches
Key takeaways
- Neural networks evolved from philosophical inquiries into the nature of mind and intelligence over two millennia.
- The 4Es approach emphasizes cognition as embodied, embedded, enactive, and ecological, challenging traditional neural network models.
- Critiques from philosophers like Searle and Penrose argue that computation alone cannot account for human cognition or understanding.
- Key historical figures, including McCulloch, Pitts, and Rosenblatt, laid foundational concepts for modern neural networks.
- Recent advancements in deep learning and reinforcement learning highlight neural networks' growing importance in AI and cognitive science.
References
- Abbott, L., (1990), Learning in Neural Network Memories, Network: Computation in Neural Systems 1, pp. 105-122. Offers a critique of neural modelling in classical connectionist models of low level cognition.
- Auer, P., Burgsteiner, H. & Maass, W., (2002), Reducing communication for distributed learning in neural networks, in: J. R. Dorronsoro (Ed.), (2002), Proc. of the International Conference on Artificial Neural Networks ICANN 2002, Vol. 2415 of Lecture Notes in Computer Science, pp. 123-128, Springer. Auer suggests that to approximate a given input/output behaviour F on particular functions of time (or spike trains), it is simply necessary to randomly pick some sufficiently complex reservoir of recurrent circuits of spiking neurons, and then merely adapt the weights of a single pool of feedforward spiking neurons to approximate the desired target output behaviour.
- Ackley, D.H., Hinton, G.E. & Sejnowski, T.J., (1985), A Learning Algorithm for Boltzmann Machines, Cognitive Science 9 (1), pp. 147-169. First description of the Boltzmann machine learning algorithm for neural networks with hidden layers.
- Ahmad, S., (1991), VISIT: An efficient computational model of human visual attention, PhD Thesis, University of Illinois, USA. Another critique of neural modelling in classical connectionist models of low level cognition.
- Aleksander, I., (1970), Microcircuit learning networks: hamming distance behaviour, Electronics Letters 6 (5). Early introduction to the idea of using small microcircuit devices to implement Bledsoe and Browning's n-tuple recognition systems.
- Aleksander, I. & Stonham, T.J., (1979), "Guide to Pattern Recognition using Random Access Memories", Computers & Digital Techniques 2 (1), pp. 29-40, UK. Early review of the n-tuple or weightless network paradigm.
- Aleksander, I., Thomas, W.V. & Bowden, P.A., (1984), "WISARD, a Radical Step Forward in Image Recognition", Sensor Review, UK. A description of a real-time vision system built using weightless networks.
- Aleksander, I. & Morton, H., (1995), An Introduction to Neural Computing, Cengage Learning EMEA. Introductory text on neural networks.
- Aleksander, I. & Morton, H., (2012), Aristotle's laptop: the discovery of our informational mind, World Scientific Publishing, Singapore. Research text outlining Aleksander and Morton's idea of the 'informational mind'.
- Bain, A., (1873), Mind and Body. The theories of their relation, D. Appleton & Co., NY. In which Bain suggests that our thoughts and body activity result from interactions among neurons within the brain.
- Barto, A.G., Sutton, R.S. & Anderson, C.W., (1983), Neuronlike adaptive elements that can solve difficult learning control problems, IEEE Transactions on Systems, Man and Cybernetics SMC-13, pp. 834-846. The classic introduction to reinforcement learning.
- Barnden, J., Pollack, J. (eds.), (1990), High-Level Connectionist Models, Ablex, Norwood NJ, USA. Edited volume which includes critiques of neural modelling in classical connectionist models of high level cognition.
- Beer, R. D., (1995), On the dynamics of small continuous-time recurrent neural networks, Adaptive Behaviour 3 (4), pp. 469-509. A key introduction to Continuous Time Recurrent Neural Networks.
- Benacerraf, P., (1967), God, the Devil & Gödel, Monist 51. Benacerraf's response to Lucas's Gödelian argument against materialism.
- Bickhard, M. H., (1993), Representational Content in Humans and Machines, Journal of Experimental and Theoretical Artificial Intelligence 5, pp. 285-333. Mark Bickhard highlighting the problem of symbol grounding in cognitive science.
- Bickhard, M. H. & Terveen, L., (1995), Foundational Issues in Artificial Intelligence and Cognitive Science, North Holland. Introduction to Mark Bickhard's interactionist account of cognition.
- Bishop, J.M., (1989), Stochastic Searching Networks, Proc. 1st IEE Int. Conf. on Artificial Neural Networks, pp. 329-331, London, UK. First description of the 'Stochastic Diffusion Search' algorithm.
- Bishop, J.M., Bushnell, M.J. & Westland, S., (1991), The Application of Neural Networks to Computer Recipe Prediction, Color, 16 (1), pp. 3-9. Successful application of neural networks to colour recipe prediction.
- Bishop, J.M., Keating, D.A. & Mitchell, R.J., (1998), A Compound Eye for a Simple Robotic Insect, in Austin, J., RAM-Based Neural Networks, pp. 166-174, World Scientific. The use of weightless neural networks in mobile robot localisation.
- Bishop, J.M., (2002), Dancing with Pixies: strong artificial intelligence and panpsychism. in: Preston, J. & Bishop, J.M., (eds), (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford UK. First publication of the Dancing with Pixies (DwP) reductio against machine consciousness.
- Bishop, J.M., (2004), A view inside the Chinese room, Philosopher 28 (4), pp. 47-51. Summary of Searle's Chinese room argument.
- Bishop, J.M. (2009), A Cognitive Computation fallacy? Cognition, computations and panpsychism, Cognitive Computation 1(3), pp. 221-233. Critique of the ubiquitous computational metaphor in cognitive science.
- Bledsoe, W.W. & Browning, I., (1959), Pattern recognition and reading by machine, Proc. Eastern Joint Computer Conference, pp. 225-232. First description of the n-tuple method for pattern classification later developed into the 'weightless neural network' paradigm by Igor Aleksander.
- Bringsjord, S., & Xiao, H., (2000), A refutation of Penrose's Gödelian case against artificial intelligence, J. Exp. Theo. Art. Int. 12, pp. 307-329. Selmer Bringsjord's critique of Roger Penrose's Gödelian argument against artificial intelligence.
- Bringsjord, S., & Noel, R., (2002), Real Robots and the Missing Thought-Experiment in the Chinese Room Dialectic. in: Preston, J. & Bishop, J.M., (eds), (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford UK. Selmer Bringsjord's exposition of a 'missing thought experiment' in Searle's Chinese room argument.
- Broomhead, D. S. & Lowe, D., (1988), Radial basis functions, multi-variable functional interpolation and adaptive networks (Technical report). Royal Signals and Radar Establishment RSRE 4148, UK. First exposition of the Radial Basis Function network.
- Bryson, A.E., (1963), Optimal programming problems with inequality constraints. I: Necessary conditions for extremal solutions, AIAA Journal 1 (11), pp. 2544-2550. Vapnik cites this paper by Bryson et al. as the first publication of the 'back propagation' learning algorithm.
- Carpenter, G. & Grossberg, S., (1987), A massively parallel architecture for a self organising neural pattern recognition machine, Computer Vision, Graphics and Image Processing: 37, pp. 54-115. Early publication of the ART-1 classifier.
- Chalmers, D., (1990), Why Fodor and Pylyshyn Were Wrong: The Simplest Refutation, In Proceedings of The Twelfth Annual Conference of the Cognitive Science Society, pp. 340-347. David Chalmers' rebuttal of Fodor and Pylyshyn's critique of connectionism.
- Chalmers, D., (1995), Minds, Machines, and Mathematics: A Review of Shadows of the Mind by Roger Penrose, PSYCHE 2 (9). David Chalmers' critique of Roger Penrose's Gödelian argument against artificial intelligence.
- Chalmers, D.J., (1996), The Conscious Mind: In Search of a Fundamental Theory, Oxford University Press, Oxford, UK. David Chalmers' classic monograph espousing his 'psycho-functionalist' theory of mind.
- Chrisley, R., (1995), Why Everything Doesn't Realize Every Computation, Minds and Machines 4, pp. 403-420. Ron Chrisley addresses Hilary Putnam's critique of functionalism.
- Chrisley R., (2006), Counterfactual computational vehicles of consciousness, Toward a Science of Consciousness (April 4th-8th 2006), Tucson Arizona, USA. Ron Chrisley addresses Bishop's 'Dancing with Pixies' reductio against machine consciousness.
- Ciresan, D.C., Meier, U., Gambardella, L.M. & Schmidhuber, J., (2010), Deep Big Simple Neural Nets For Handwritten Digit Recognition, Neural Computation 22 (12), pp. 3207-3220. Deep learning networks applied to handwriting recognition.
- Copeland, B.J., (1997), The broad conception of computation, American Behavioral Scientist 40 (6), pp. 690-716. Copeland discusses foundations of computing and outlines so-called 'real-valued' super-Turing machines.
- Cortes, C. & Vapnik, V.N., (1995), Support-Vector Networks, Machine Learning 20 (3), pp. 273-297. Early discussion of the Support Vector machine.
- Deacon, T., (2012), Incomplete Nature: How Mind Emerged from Matter, W. W. Norton & Company. All-encompassing monograph emphasising the importance of appropriate embodiment for the emergence of mind.
- Dinsmore, J., (1990), Symbolism and Connectionism: a difficult marriage, Technical Report TR 90-8, Southern Illinois University at Carbondale, USA. On the limitation of knowledge representation in classical connectionist systems to arity-zero (uni-variate) predicates.
- Dreyfus, H.L. & Dreyfus, S.E., (1988), Making a mind versus modelling the brain: artificial intelligence at a branch point, Daedalus 117 (1). Historical review of the battle between connectionism and GOFAI in cognitive science.
- Fodor, J.A., (1983), Modularity of Mind: An Essay on Faculty Psychology. The MIT Press. In which Fodor postulates a vertical and modular psychological organisation underlying biologically coherent behaviours.
- Fodor, J.A., Pylyshyn, Z.W., (1988), Connectionism and Cognitive Architecture: a critical analysis, Cognition 28, pp. 3-72. The classic critique suggesting serious limitations of connectionist systems in comparison to symbol processing systems.
- Freeman, A., (2003), Output still not really convinced, The Times Higher (April 11th 2003), UK. Discussion of the Chinese room argument.
- Freeman, A., (2004), The Chinese Room Comes of Age: a review of Preston & Bishop, Journal of Consciousness Studies 11 (5/6), pp. 156-158. Discussion of the Chinese room argument and Preston/Bishop's edited text.
- Fukushima, K., (1980), Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position, Biological Cybernetics 36 (4), pp. 193-202. The Neocognitron: one of the first 'deep learning' connectionist systems.
- Gärdenfors, P., (2004), Conceptual Spaces as a Framework for Knowledge Representation, Mind and Matter 2 (2), pp. 9-27. On the problem of modelling representations.
- Garvey, J., (2003), A room with a view?, The Philosophers Magazine (3rd Quarter 2003), p. 61. Discussion of the Chinese room argument and Preston/Bishop's edited text.
- Gerstner, W. & Kistler, W., (2002), Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, UK. Classic text on spiking neuron connectionist modelling.
- Gibson, J.J., (1979), The Ecological Approach to Visual Perception. Boston: Houghton Mifflin. Gibson's classical text outlining the radical 'ecological approach' to vision and visual perception.
- Grossberg, S., (1976), Adaptive Pattern Classification and Universal Recoding: II. Feedback, expectation, olfaction, and illusions, Bio. Cybernetics 23, pp. 187-202. Early exposition of the ART classifier.
- Harnad S., (1990), The Symbol Grounding Problem. Physica D 42, pp. 335-346. Harnad's key reference on the classical 'symbol grounding' problem.
- Harnish, R.M., (2001), Minds, Brains, Computers: An Historical Introduction to the Foundations of Cognitive Science, Wiley-Blackwell. Excellent general introductory text to Cognitive Science.
- Hebb, D.O. (1949). The Organisation of Behaviour. New York: Wiley & Sons. The original exposition of Hebbian learning.
- Hinton, G., (2007), Learning multiple layers of representation, Trends in Cognitive Science 11 (10), pp. 428-434. Hinton's discussion of deep learning.
- Hodgkin, A. L. & Huxley, A. F., (1952), A quantitative description of membrane current and its application to conduction and excitation in nerve, Journal of Physiology 117 (4), pp. 500-544. The classic paper on the detailed Hodgkin-Huxley model of the [squid] neuron.
- Hopfield, J.J., (1982), Neural networks and physical systems with emergent collective computational abilities, Proc. Nat. Acad. Sci. 79 (8), pp. 2554-2558. Outlines the use of methods from physics to analyse properties of neural systems.
- Hopfield, J.J., (1984), Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Nat. Acad. Sci. 81, pp. 3088-3092. The continuous (graded-response) Hopfield neural network.
- Hopfield, J. J. and Tank, D. W., (1985), Neural computation of decisions in optimization problems. Biological Cybernetics 55, pp. 141-146. The use of Hopfield networks in optimisation.
- Hubel, D.H. & Wiesel, T.N., (1962), Receptive fields, binocular interaction and functional architecture in the cat's visual cortex, Journal of Physiology 160 (1), pp. 106-154. Hubel & Wiesel's classic paper on neuron behaviour.
- Irwin, G., Warwick, K. & Hunt, K., (eds), (1995), Neural Network Applications In Control. IET Digital Library, UK. A selection of neural network applications.
- Jaeger, H. & Haas, H., (2004), Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication, Science 304 (5667), pp. 78-80. On why echo state networks learn very efficiently.
- James, W., (1891), Psychology (briefer course), Holt, New York, USA. Early classic text on the principles of psychology.
- Katevas, N., Sgouros, N.M., Tzafestas, S., Papakonstantinou, G., Beattie, P., Bishop, J.M., Tsanakas, P. & Koutsouris, D., (1997), The Autonomous Mobile Robot SENARIO: A Sensor-Aided Intelligent Navigation for Powered Wheelchairs, IEEE Robotics and Automation 4 (4), pp. 60-70. The use of Stochastic Diffusion in mobile robot localisation.
- Kleene, S. C., (1956), Representation of events in nerve nets and finite automata. In Automata Studies 34, Annals of Mathematics Studies, pp. 3-42, Princeton University Press, Princeton, NJ. Before Minsky, Kleene proved that rational-weighted recurrent neural networks with boolean activation functions are computationally equivalent to classical finite state automata.
- Klein, C., (2004), Maudlin on Computation, (working paper). On Maudlin's model of the implementation of computation.
- Kohonen, T., (1988), Self-Organising Maps, Springer. Introduction to Kohonen feature maps.
- Kohonen, T., Barna, G. & Chrisley, R., (1988), Statistical Pattern Recognition with Neural Networks, In: Proceedings of the IEEE Conference on Neural Networks 1, pp. 61-68, 24-27 July 1988, San Diego, CA. A discussion of practical issues relating to the Boltzmann machine and comparison of performance with other neural architectures.
- Le Cun, Y., (1985), Une procédure d'apprentissage pour réseau à seuil asymétrique (a Learning Scheme for Asymmetric Threshold Networks), Proceedings of Cognitiva 85, pp. 599-604, Paris, France. Independent derivation of the back propagation learning algorithm.
- Lighthill, J., (1973), Artificial Intelligence: A General Survey, in Artificial Intelligence: a paper symposium, Science Research Council, UK. UK government-funded study providing a critique of GOFAI methods.
- Lucas, J.R., (1961), Minds, Machines & Gödel, Philosophy, 36. Lucas outlines the use of Gödelian argument against materialism.
- Ludermir, T.B., de Carvalho, A., Braga, A.P. & de Souto, M.C.P., (1999), Weightless neural models: a review of current and past works, Neural Computing Surveys 2, pp. 41-61. Recent review of the weightless neural network paradigm.
- Maass, W. & Bishop, C., (eds.), (1999), Pulsed Neural Networks, The MIT Press, Cambridge MA, USA. Classic text on spiking neural networks.
- Maass, W., Natschläger, T. & Markram, H., (2002), Real-time computing without stable states: A new framework for neural computation based on perturbations, Neural Computation 14 (11), pp. 2531-2560. Learning rules for Liquid State machines.
- Maass, W. & Markram, H., (2004), On the computational power of circuits of spiking neurons, Journal of Computer and System Sciences 69 (4), pp. 593-616. On the use of the same connectionist hardware to compute different functions.
- Maudlin, T., (1989), Computation and Consciousness, Journal of Philosophy 86, pp. 407-432. On the execution of computer programs.
- McCarthy, J., Minsky, M., Rochester, N. & Shannon, C., (1955), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. In: McCarthy, J., Minsky, M., Rochester, N. & Shannon, C., (2006), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31 1955, AI Magazine 27 (4), pp. 12-14. Reprint of the proposal for the 1956 Dartmouth Conference, along with the short autobiographical statements of the proposers.
- McCorduck, P., (1979), Machines who think, Freeman & Co., NY. An account of the Dartmouth Conference.
- McCulloch, W.S. & Pitts, W., (1943), A logical calculus of the ideas immanent in nervous activity, Bulletin of Mathematical Biophysics 5, pp. 115-133. On the computational power of the McCulloch-Pitts neuron model.
- Minsky, M., (1954), Neural Nets and the brain model problem. PhD thesis, Princeton, USA. Early work from Minsky on a connectionist theory of mind.
- Minsky, M. L. (1967). Computation: finite and infinite machines. Prentice-Hall, Inc., Upper Saddle River, NJ, USA. After Kleene, Minsky proved that rational-weighted recurrent neural networks with boolean activation functions are computationally equivalent to classical finite state automata.
- Minsky, M. & Papert, S., (1969), Perceptrons: an introduction to computational geometry, The MIT Press. Devastating critique of the power of single layer perceptrons.
- Mumford, D., (1994), Neural Architectures for Pattern-theoretic Problems. In: Koch, Ch. & Davies, J.L. (eds.), (1994), Large Scale Neuronal Theories of the Brain, The MIT Press, Cambridge MA. In which Mumford observes that systems which depend on interaction between feedforward and feedback loops are quite distinct from models based on Marr's feedforward theory of vision.
- Nasuto, S.J., Bishop, J.M., Lauria, S., (1998), Time Complexity Analysis of the Stochastic Diffusion Search, Neural Computation '98, Vienna, Austria. Derivation of the computational time complexity of stochastic diffusion search.
- Nasuto, S.J. & Bishop, J.M., (1998), Neural Stochastic Diffusion Search Network: a theoretical solution to the binding problem, Proc. ASSC2, pp. 19-20, Bremen, Germany. Examination of the binding problem through the lens of stochastic diffusion search.
- Nasuto, S.J., Dautenhahn, K. & Bishop, J.M., (1999), Communication as an Emergent Metaphor for Neuronal Operation. In: Nehaniv, C., (ed), (1999), Computation for Metaphors, Analogy, and Agents, Lecture Notes in Artificial Intelligence 1562, pp. 365-379, Springer. Outlines a new metaphor grounded upon stochastic diffusion search for neural processing.
- Nasuto, S.J. & Bishop, J.M., (1999), Convergence of the Stochastic Diffusion Search, Parallel Algorithms 14 (2), pp. 89-107. Proof of the convergence of stochastic diffusion search to global optimum.
- Nasuto, S.J., Bishop, J.M. & De Meyer, K., (2009), Communicating neurons: a connectionist spiking neuron implementation of stochastic diffusion search, Neurocomputing 72 (4-6), pp. 704-712. Review of a communication metaphor, based upon stochastic diffusion search, for neural processing.
- Newell, A., & Simon, H.A., (1976), Computer science as empirical inquiry: symbols and search, Communications of the ACM 19 (3), pp. 113-126. Outlines the Physical Symbol System Hypothesis.
- Ng, A. & Dean, J., (2012), Building High-level Features Using Large Scale Unsupervised Learning, Proc. 29th Int. Conf. on Machine Learning, Edinburgh, Scotland, UK. Deep learning neural network that successfully learned to recognise higher-level concepts (e.g. human faces and cats) from unlabelled video streams.
- Newell, A., Shaw, J.C. & Simon, H.A., (1960), A variety of intelligent learning in a General Problem Solver, in Yovits, M.C & Cameron, S., (eds), 'Self Organising Systems', Pergamon Press, NY. Early discussion of the General Problem Solver.
- Overill, J., (2004), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Journal of Logic and Computation 14 (2), pp. 325-326. Critique of the Chinese room argument.
- Penrose, R., (1989), The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford University Press, Oxford UK. Roger Penrose's text criticising computational artificial intelligence from a Gödelian perspective.
- Penrose, R., (1994), Shadows of the Mind: A Search for the Missing Science of Consciousness, Oxford University Press, Oxford UK. Roger Penrose's second text criticising computational artificial intelligence from a Gödelian perspective.
- Penrose, R., (1996), Beyond the Doubting of a Shadow: A Reply to Commentaries on Shadows of the Mind, PSYCHE 2 (23). Roger Penrose replies to criticism of his Gödelian arguments.
- Pinker, S. & Prince, A., (1988), On Language and Connectionism: Analysis of a Parallel Distributed Processing Model of Language Acquisition. In: Pinker, S. & Mehler, J. (eds.), (1988), Connections and Symbols, The MIT Press, Cambridge MA. Deficiencies in the representation of complex knowledge by classical neural networks.
- Preston, J. & Bishop, J.M., (eds), (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford UK. Edited collection of essays on John Searle's Chinese room argument.
- Psyche, Symposium on Roger Penrose's Shadows of the Mind, http://psyche.cs.monash.edu.au/psyche-index-v2.html, PSYCHE Vol. 2. Special issues on Roger Penrose's Gödelian arguments against AI.
- Putnam, H., (1988), Representation & Reality, Bradford Books, Cambridge MA. The appendix contains Putnam's refutation of functionalism.
- Rapaport, W.J., (2006), Review of Preston, J. & Bishop, J.M., (eds), (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford UK. In: Australasian Journal of Philosophy 94 (1), pp. 129-145. Review of work on the Chinese room argument.
- Rey, G., (2002), Searle's Misunderstanding of Functionalism and Strong AI. In: Preston, J. & Bishop, J.M., (eds), (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford UK. Criticism of Searle's Chinese room argument.
- Richeimer, J., (2004), Review of Preston, J. & Bishop, J.M., (eds), (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford UK, Philosophical Books 45 (2), pp. 162-167. Review of work on the Chinese room argument.
- Rosenblatt, F., (1958), Two theorems of statistical separability in the Perceptron. In: The Mechanisation of thought processes: proceedings of a symposium held at the National Physical Laboratory 1, pp. 419-449, HMSO London. Early work by Frank Rosenblatt on the power of the Perceptron.
- Rosenblatt, F., (1962), The Principles of Neurodynamics, Spartan Books, New York. Introduction to Perceptrons.
- Rumelhart, D.E. & McClelland, J.L., (1986), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, The MIT Press, Cambridge MA. Seminal text which helped re-awaken interest in neural networks post Minsky & Papert's critique.
- Rumelhart, D.E., Hinton, G.E. & Williams, R.J., (1986), Learning representations by back-propagating errors, Nature 323 (6088), pp. 533-536. Independent derivation of the back-propagation algorithm.
- Russell, B., (1905), On Denoting, Mind XIV (4), pp. 479-493, Oxford UK. On quantified logical inference.
- Schank, R.C. & Abelson, R.P., (1977), Scripts, plans, goals and understanding: An inquiry into human knowledge structures, Psychology Press. On the 'machine understanding' of stories.
- Searle, J. R., (1980), Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457, Cambridge, UK. The Chinese room argument.
- Searle, J., (1990), Is the Brain a Digital Computer?, Proceedings of the American Philosophical Association 64, pp. 21-37. Is computational syntax reducible to physics?
- Searle, J., (1992), The Rediscovery of the Mind, p. 227, MIT Press, Cambridge MA. On computations and mind.
- Searle, J., (1994), The Mystery of Consciousness, Granta Books, London UK. Interviews and reflections with a selection of twentieth century thinkers on consciousness.
- Sejnowski, T.J., Rosenberg, C.R., (1987), Parallel networks that learn to pronounce English text. Complex Systems 1, pp. 145-168. Example of an ANN which learns to pronounce English text.
- Siegelmann, H. T., (2003), Neural and Super-Turing Computing, Minds and Machines 13, pp. 103-114. On Siegelmann's claim that recurrent neural networks have super-Turing computational power.
- Siegelmann, H. T., (1995), Computation Beyond the Turing Limit, Science 268 (5210), pp. 545-548. On Siegelmann's claim that recurrent neural networks have super-Turing computational power.
- Siegelmann, H.T. & Sontag, E.D., (1994), Analog computation via neural networks, Theoretical Computer Science 131, pp. 331-360. On Siegelmann's claim that recurrent neural networks have super-Turing computational power.
- Smith P., (1998), Explaining Chaos. Cambridge University Press, Cambridge, UK. Includes discussion of the 'shadowing theorems' in chaos.
- Smolensky, P., (1988), Connectionist Mental States: A Reply to Fodor and Pylyshyn, Southern Journal of Philosophy 26 (supplement), pp. 137-161. A refutation of Fodor and Pylyshyn's criticism of connectionism.
- Sprevak, M.D., (2005), The Chinese Carnival, Studies in the History & Philosophy of Science 36, pp. 203-209. Reflections on Searle's Chinese room argument.
- Tanay, T., Bishop, J.M., Spencer, M.C., Roesch, E.B. & Nasuto, S.J., (2013), Stochastic Diffusion Search applied to Trees: a Swarm Intelligence heuristic performing Monte-Carlo Tree Search. In: Bishop, J.M. & Erden, Y.J., (eds), (2013), Proceedings of the AISB 2013: Computing and Philosophy symposium, 'What is computation?', Exeter, UK. Describes the use of a swarm intelligence paradigm, stochastic diffusion search, to play the strategic game of HeX.
- Tesauro, G., (1989), Neurogammon wins Computer Olympiad, Neural Computation 1, pp. 321-323. Playing backgammon using neural networks.
- Tarassenko, L., (1998), A Guide to Neural Computing Applications, Arnold, London. Practical introduction to neural computing.
- Tassinari, R.P. & D'Ottaviano, I.M.L., (2007), Cogito ergo sum non machina! About Gödel's first incompleteness theorem and Turing machines, CLE e-Prints, Vol. 7 (3). Discussion of Penrose arguments against artificial intelligence.
- Thompson, E., (2010), Mind in Life, Belknap Press. Thompson's claim that mind is causally related to life.
- Torrance, S., (2005), Thin Phenomenality and Machine Consciousness, in: R. Chrisley, R. Clowes and S. Torrance, (eds), (2005), Proc. 2005 Symposium on Next Generation Approaches to Machine Consciousness: Imagination, Development, Intersubjectivity and Embodiment, AISB05 Convention, Hertfordshire: University of Hertfordshire, pp. 59-66. On differences between brain simulations and real brains.
- Touzet, C., (1997), Neural Reinforcement Learning for Behaviour Synthesis. in: Sharkey, N., (ed), (1997), Special issue on Robot Learning: The New Wave, Journal of Robotics and Autonomous Systems 22 (3/4), pp. 251-281. In reinforcement learning Touzet details replacing the Q-table with an MLP network or Kohonen network.
- Turing, A. M., (1937), [Delivered to the Society in November 1936], "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society 2 (42), pp. 230-265, London, UK. Introduction to the Turing machine and its application to the Entscheidungsproblem.
- Van Gelder, T., Port, R.F. (1995), It's About Time: an overview of the dynamical approach to cognition. In: Port, R.F. & Van Gelder, T. (eds.), (1995), Mind as Motion, pp. 1-45, The MIT Press, Cambridge MA. Introduction to the dynamical system theory of mind.
- Van de Velde, F., (1997), On the use of computation in modelling behaviour, Network: Computation in Neural Systems 8, pp. 1-32. Critique of neural modelling in classical connectionist models of low level cognition.
- Varela, F.J., Thompson, E. & Rosch, E., (1991), The Embodied Mind: Cognitive Science and Human Experience, The MIT Press. Varela et al's seminal introduction to the so-called 'embodied' conception of mind.
- Waskan, J.A., (2005), Review of Preston, J. & Bishop, J.M., (eds), (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford UK. Philosophical Review 114 (2), (2005). Further reflections on the Chinese room argument.
- Werbos, P., (1974), Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, PhD thesis, Harvard University. 'Usually' identified as the first derivation of the back propagation algorithm.
- Wheeler, M, (2002), Change in the Rules: Computers, Dynamical Systems, and Searle. In: Preston, J. & Bishop, J.M., (eds), (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford UK. In which Mike Wheeler uses the Chinese room argument to target all purely formal theories of mind.
- Whitehead A.N. & Russell, B., (1910), Principia Mathematica (3 volumes), Cambridge University Press, Cambridge UK. An attempt to derive mathematics from 'first principles'.
- Wittgenstein, L., (1948), Last Writings on the Philosophy of Psychology (vol. 1), Blackwell, Oxford, UK. The writings Wittgenstein composed during his stay in Dublin between October 1948 and March 1949, one of his most fruitful periods, which informed the Philosophical Investigations.
Biographical sketches
- Dr. Mark Bishop is Professor of Cognitive Computing at Goldsmiths, University of London; Director of the Goldsmiths Centre for Radical Cognitive Science and Chair of the UK Society for Artificial Intelligence and the Simulation of Behaviour (AISB). He has published over 140 articles in the field of Cognitive Computing: its theory, where his