

Artificial Intelligence: What it Was, and What it Should Be?

International Journal of Advanced Computer Science and Applications

https://doi.org/10.14569/IJACSA.2020.0110609

Abstract

Artificial Intelligence was embraced as an idea of simulating uniquely human abilities, such as thinking, self-improvement, and expressing feelings in different languages. The idea of "Programs with Common Sense" was the central goal of Classical AI; it was built mainly around an internal, updatable cognitive model of the world. Today, however, almost all proposed models and approaches lack reasoning and cognitive models and have shifted toward being more data-driven. In this paper, different approaches and techniques of AI are reviewed, specifying how these approaches strayed from the main goal of Classical AI and emphasizing how to return to its original objective. Additionally, the key terms and concepts used in this field, such as Machine Learning, Neural Networks, and Deep Learning, are highlighted. Moreover, the relations among these terms are clarified in an effort to remove the mystery and ambiguity surrounding them. The transition from Classical AI to Neuro-Symbolic AI and the need for new cognition-based models are also explained and discussed.
