High level cognitive information processing in neural networks
1992
Abstract
This project comprised two related research efforts: (A) high-level connectionist cognitive modeling, and (B) local neural circuit modeling. The goals of the first effort were to develop connectionist models of high-level cognitive processes such as problem solving or natural language understanding, and to understand the computational requirements of such models. The goals of the second effort were to develop biologically-realistic models of local neural circuits, and to understand the computational behavior of such models.
Related papers
Current Opinion in Neurobiology, 2014
Computational neuroscience has focused largely on the dynamics and function of local circuits of neuronal populations dedicated to a common task, such as processing a common sensory input, storing its features in working memory, choosing between a set of options dictated by controlled experimental settings, or generating the appropriate actions. Most current circuit models suggest mechanisms for computations that can be captured by networks of simplified neurons connected via simple synaptic weights. In this article I review the progress of this approach and its limitations. It is argued that new experimental techniques will yield data that might challenge the present paradigms in that they will (1) demonstrate the computational importance of microscopic structural and physiological complexity and specificity; (2) highlight the importance of models of large brain structures engaged in a variety of tasks; and (3) reveal the necessity of coupling the neuronal networks to chemical and environmental variables.
Nature Reviews Neuroscience, 2021
Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative and hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning, to implementation of inhibition and control, along with neuroanatomical properties including area structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, based on these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.

An important step towards addressing the neural substrate was taken by so-called localist models of cognition and language 8-12, which filled the boxes of modular models with single artificial 'neurons' thought to locally represent cognitive elements 13 such as perceptual features and percepts, phonemes, word forms, meaning features, concepts and so on (Fig. 1a). The 1:1 relationship between the artificial neuron-like computational-algorithmic implementations and the entities postulated by cognitive theories made it easy to connect the two types of models. However, the notion that individual neurons each carry major cognitive functions is controversial today and difficult to reconcile with evidence from neuroscience research 14,15. This is not to dispute the great specificity of some neurons' responses 16, but rather to highlight the now dominant view that even these very specific cells "do not act in isolation but are part of cell assemblies representing familiar concepts", objects or other entities 17,18. A further limitation of the localist models was that they did not systematically address the mechanisms underlying the formation of new representations and their connections.

Auto-associative networks. Neuroanatomical observations suggest that the cortex is characterized by ample intrinsic and recurrent connectivity between its neurons and, therefore, it can be seen as an associative memory 19,20. This position inspired a family of artificial neural networks, called 'auto-associative networks' or 'attractor networks' 21-32. Auto-associative network models implement sets of neurons with connections between their members, so that each neuron interlinks with several or even all of the other neurons in the set. This contrasts with the hetero-associative networks discussed below, where connections run between sub-populations of network neurons without any connections within each neuron pool. To simulate the effect of learning in auto-associative networks, so-called learning rules are included that change the connection weights between neurons as a consequence of their prior activity.
For example, biologically founded unsupervised Hebbian learning, which strengthens connections between co-activated neurons 5, is frequently applied and leads to the formation of strongly connected cell assemblies within a weakly connected auto-associative neuron pool (Fig. 2b). These cell assemblies can function as distributed network correlates or representations of perceptual, cognitive or 'mixed' context-dependent perceptual-cognitive states 6,30,32-34. Therefore, the observations that cortical neurons work together in groups and that representations are distributed across such groups 14,18 can both be accommodated by this artificial network type, along with learning mechanisms, thus overcoming major shortcomings of localist networks. Additional cognitively relevant features of auto-associative networks include the ability of a cell assembly to fully activate after only partial stimulation, a possible mechanism for Gestalt completion; that is, the recognition of an object (such as a cat) given only partial input (tail and paws). The mechanism is illustrated in Fig. 2b, where stimulation of neurons α and β is sufficient for activating the cell assembly formed by neurons α-to-γ. Furthermore, auto-associative networks integrate the established observations that cortical neural codes can be sparse (that is, only a small fraction of available neurons respond to a given (complex) stimulus) 15,18,22,35,36, and that some (other) neurons respond to elementary and frequently occurring features of several stimuli (thus behaving in a less-sparse manner) 37. The reason for this lies in cell assembly overlap; that is, the possibility that two or more such circuits can share neurons while remaining functionally separate. This is illustrated in Fig. 2b by the 'overlap neuron' of cell assemblies α-to-γ and γ-to-ε. Auto-associative networks can model a wide spectrum of cognitive processes, ranging from object, word and concept recognition to navigation, syntax processing, memory, planning and …
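The pattern-completion behaviour described in this excerpt is easy to demonstrate in a toy attractor network. The following sketch, a minimal Hopfield-style model in NumPy, is purely illustrative and not taken from the reviewed paper: connection weights are learned with a Hebbian outer-product rule, and a partially silenced pattern settles back into the full stored "assembly". The network size, pattern statistics and update scheme are assumptions; real cortical codes are sparser and overlapping, as the text notes.

```python
# A minimal sketch (not from the paper) of an auto-associative,
# Hopfield-style network: Hebbian outer-product learning followed by
# pattern completion from a partial cue.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 64

# Two random +/-1 patterns standing in for stored "cell assemblies".
# (Unbiased coding is an illustrative simplification of sparse codes.)
patterns = rng.choice([-1.0, 1.0], size=(2, n_neurons))

# Hebbian rule: strengthen the weight between co-activated neurons
# (outer product), with no self-connections.
W = np.zeros((n_neurons, n_neurons))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0.0)
W /= n_neurons

def recall(cue, steps=10):
    """Synchronously update all units; the state settles into an attractor."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0  # break ties arbitrarily
    return state

# Partial stimulation: silence 40% of the units of one stored pattern,
# analogous to seeing only the tail and paws of the cat.
cue = patterns[0].copy()
cue[rng.random(n_neurons) < 0.4] = 0.0

completed = recall(cue)
print("match with stored pattern:", np.mean(completed == patterns[0]))
```

With a cue retaining about 60% of a stored pattern, the printed match fraction is typically 1.0: the whole assembly reactivates from partial input, mirroring the Gestalt-completion effect described above.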
Encyclopedia of Artificial Intelligence
1992
Abstract: We developed several novel representational and processing techniques for use in connectionist systems designed for high-level AI-like applications such as common-sense reasoning and natural language understanding. The techniques were used, for instance, in a connectionist system (Composit/SYLL) that implements Johnson-Laird's mental-model theory of human syllogistic reasoning.
Frontiers in Bioscience-Landmark
SIGACT News, 1991
Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of its impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines such as statistics or control theory.
2003
This article describes computer models that simulate the neural networks of the brain, with the goal of understanding how cognitive functions (perception, memory, thinking, language, etc.) arise from their neural basis. Many neural network models have been developed over the years, at many different levels of analysis, from engineering to low-level biology to cognition. Here, we consider models that try to bridge the gap between biology and cognition.
Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its computational credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the ‘‘conventional’’ account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks are not genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
Artificial Neural Networks -- Comparison of 3 Connectionist Models
