Computation, grammars, and consciousness
2025
Abstract
Lecture delivered for the seminar 'Spiritual and Scientific approaches to Consciousness', coordinated by Thomas G. Bever (University of Arizona), May 2nd, 2025. The broad goal is to relate consciousness, computation, and language.
• Are there properties of computation in conscious beings that are qualitatively different from those of non-conscious systems?
• How can we characterise these properties?
I argue that information compression and computational oscillations (the coexistence of processes of different computational complexity) are unique to conscious beings embedded in some physical context, because they are a consequence of how signal analysis must work given finite processing resources, at both the cognitive and physiological levels. Mapping the environment, interacting with it, and predicting how the environment and other entities/actants will react to our actions all require compression and mixed computation. A computational system that is not required to interact with a changing, dynamical environment will not develop these fundamental properties; conversely, a physically embedded system will necessarily develop them if it is to survive.
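A minimal toy sketch of the compression point (not from the lecture; all names and parameters here are illustrative assumptions): a structured, predictable signal admits far more compression than an unstructured one, which is the intuition behind treating compression as a prerequisite for prediction under finite processing resources.

```python
# Toy illustration: a predictable "environmental" signal compresses well,
# while pure noise does not. Compression ratio stands in, very loosely,
# for how much a resource-limited system could gain by exploiting structure.
import math
import random
import zlib

N = 10_000

# A "predictable environment": a slow periodic signal with mild noise.
structured = bytes(
    int(127 + 100 * math.sin(2 * math.pi * t / 200) + random.gauss(0, 3)) & 0xFF
    for t in range(N)
)

# An "unpredictable environment": uniform noise of the same length and range.
unstructured = bytes(random.randrange(256) for _ in range(N))

for label, signal in [("structured", structured), ("unstructured", unstructured)]:
    compressed = zlib.compress(signal, 9)
    ratio = len(compressed) / len(signal)
    print(f"{label:12s} raw={len(signal)}  compressed={len(compressed)}  ratio={ratio:.2f}")
```

On a typical run the structured signal compresses to a small fraction of its original size while the noise barely shrinks; the sketch is only meant to make the compression/prediction intuition concrete, not to model the cognitive or physiological mechanisms the lecture discusses.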
Related papers
Journal of Consciousness Studies, 2008
The human brain is a system whose nature is particularly demanding to infer from observations. There is thus, on the one hand, plenty of room for theorizing and, on the other, a pressing need for a rigorous theory. We apply the statistical mechanics of open systems to describe the brain as a hierarchical system that consumes free energy in the least time. This holistic tenet accounts for cellular metabolism, neuronal signaling, and cognitive processes all together, or any other process, by a formal equation of motion that extends down to the ultimate precision of one quantum of action. According to this general thermodynamic theory, cognitive processes are no different in their operational and organizational principles from other natural processes. Cognition too will emerge and evolve along path-dependent, non-determinate trajectories by consuming free energy in least time to attain thermodynamic balance within the nervous system itself and with its surrounding systems. Specifically, consciousness can be ascribed to a natural process that integrates various neural networks for coherent consumption of free energy, i.e., for meaningful deeds. The whole hierarchy of integrated systems can be formally summed up to thermodynamic entropy. The holistic tenet also provides insight into the character of consciousness by acknowledging awareness in other systems at other levels of nature's hierarchy.
Within theoretical and empirical enquiries, many different meanings associated with consciousness have appeared, leaving the term itself quite vague. This makes formulating an abstract and unifying version of the concept of consciousness – the main aim of this article – into an urgent theoretical imperative. It is argued that consciousness, characterized as dually accessible (cognized from the inside and the outside), hierarchically referential (semantically ordered), bodily determined (embedded in the working structures of an organism or conscious system), and useful in action (pragmatically functional), is a graded rather than an all-or-none phenomenon. A gradational approach, however, despite its explanatory advantages, can lead to some counterintuitive consequences and theoretical problems. In most such conceptions consciousness is extended globally (attached to primitive organisms or artificial systems), but also locally (connected to certain lower-level neuronal and bodily processes). For example, according to information integration theory (as introduced recently by Tononi and Koch, 2014), even such simple artificial systems as photodiodes possess minuscule amounts of consciousness. The major challenge for this article, then, is to establish reasonable, empirically justified constraints on how extended the range of a graded consciousness could be. It is argued that conscious systems are limited globally by the ability to individuate information (where individuated information is understood as evolutionarily embedded, socially altered, and private), whereas local limitations should be determined on the basis of a hypothesis about the action-oriented nature of the processes that select states of consciousness. Using these constraints, an abstract concept of consciousness is arrived at, hopefully contributing to a more unified state of play within consciousness studies itself.
Progress in brain research, 2005
Science, 1998
Preprint, 2024
This paper delves into one of the most fundamental questions in cognitive science and philosophy of mind: How does language, the very tool we use to understand and describe our consciousness, distort our perception of reality? Drawing on insights from neuroscience, artificial intelligence, and philosophy, this work explores how symbolic representation—particularly through language—creates cognitive artifacts like selfhood, emotions, and qualia. Building on the predictive coding framework, the paper argues that what we commonly experience as stable realities—such as the self and emotional experiences—are not metaphysical truths, but functional constructs developed by the brain to minimize uncertainty and optimize survival. By compressing the complexity of individual emotions and experiences into simplified symbols, language creates an illusion of shared reality, inflating the ego and misguiding our understanding of consciousness. The paper appeals to scholars across disciplines—from neuroscience and AI to psychology and philosophy—by challenging the widely accepted view of selfhood as an intrinsic part of human identity. Instead, it presents a compelling argument that emotions and consciousness are cognitive tools, shaped pragmatically to serve adaptive purposes. The text invites a deeper exploration into how these cognitive artifacts function both in biological systems and in artificial intelligence, encouraging interdisciplinary collaboration to further investigate the illusory nature of subjective experiences. Ultimately, this paper is a thought-provoking contribution to ongoing debates about the nature of consciousness and invites readers from diverse fields to rethink long-held assumptions about the role of language and symbols in shaping our cognitive realities.
2000
This thesis explores the foundations of cognition. I present, analyze, and defend the claim of Computationalism, which states that cognition reduces to computations. I show how various objections, in particular Searle’s Chinese Room argument, are flawed due to a deep confusion regarding the nature of computation. In my defense of Computationalism, I appeal to the notion of emergence, which is a notion that I analyze in detail, since it has been suggested that emergence can solve the problem of consciousness. I argue against this latter suggestion, proposing an alternative view on consciousness instead.
In this essay, we attempt to demonstrate several key things. 1. We are guided by the following reasoning methods: Occam's razor (simplicity), Holmes Shroud (falsifiability), and Alan Newell's Unified Theories of Cognition (teleology: goal-orientation guides all real computation; there is no way to design a computer that doesn't solve some problem).
Proceedings of the National Academy of Sciences
Significance: This paper provides evidence that a theoretical computer science (TCS) perspective can add to our understanding of consciousness by providing a simple framework for employing tools from computational complexity theory and machine learning. Just as the Turing machine is a simple model to define and explore computation, the Conscious Turing Machine (CTM) is a simple model to define and explore consciousness (and related concepts). The CTM is not a model of the brain or cognition, nor is it intended to be, but rather a simple substrate-independent computational model of (the admittedly complex concept of) consciousness. This paper is intended to introduce this approach, show its possibilities, and stimulate research in consciousness from a TCS perspective.
2018
If we are to truly understand consciousness, we have to account for the subjectivity of Self, the sense that we all have of being separate from others, from our own bodies and from the physical environment that surrounds us. How is it that I 'know' that I am not you, my body, the doorjamb, or the trees outside my window?
