A Psychopathological Approach to Safety Engineering in AI and AGI

2018, Developments in Language Theory

https://doi.org/10.1007/978-3-319-99229-7_46

Abstract

The complexity of dynamics in AI techniques is already approaching that of complex adaptive systems, thus curtailing the feasibility of formal controllability and reachability analysis in the context of AI safety. It follows that the envisioned instances of Artificial General Intelligence (AGI) will also suffer from challenges of complexity. To tackle such issues, we propose modeling deleterious behaviors in AI and AGI as psychological disorders, thereby enabling the employment of psychopathological approaches to the analysis and control of such misbehaviors. Accordingly, we discuss the feasibility of psychopathological approaches to AI safety, and propose general directions for research on the modeling, diagnosis, and treatment of psychological disorders in AGI.
