
Connectionist Networks

10 papers
0 followers
About this topic
Connectionist networks, also known as neural networks, are computational models inspired by the human brain's architecture. They consist of interconnected nodes (neurons) that process information in parallel, enabling the modeling of complex patterns and functions through learning algorithms, particularly in the fields of artificial intelligence and machine learning.

Key research themes

1. How do connectionist networks learn abstract and hierarchical representations through layered architectures?

This research area investigates the mechanisms by which connectionist networks, especially deep learning models, learn multi-level, abstract representations of data by composing simple nonlinear transformations in multiple layers. Understanding these hierarchical representations is crucial as it underpins the recent breakthroughs in perception, cognition modeling, and many AI applications by enabling systems to capture complex structures in raw data.

Key finding: Demonstrates that deep learning achieves hierarchical feature learning by composing multiple layers of nonlinear transformations, where initial layers detect simple features (e.g., edges), intermediate layers detect motifs...
Key finding: Provides a theoretical framework showing that deep convolutional networks (DCNs) can be interpreted as hierarchies of kernel machines, with each layer implementing kernel computations via rectifying nonlinearities and...
Key finding: Analyzes how connectionist models simultaneously learn representations and solve cognitive tasks through distributed hidden-layer representations. The study highlights that learning modifies shared connection weights and that...
Key finding: Introduces a method for connectionist networks to discover abstract low-dimensional constraint manifolds in high-dimensional data by learning internal coordinate systems (representations) to compress information. This...
Key finding: Presents a feedback-based recurrent connectionist architecture where representations are iteratively refined through feedback from output to input layers. This approach enables early predictions and hierarchical...
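The layered-composition idea running through these findings can be sketched as a stack of linear maps with rectifying nonlinearities, where each layer computes a function of the previous layer's output. The weights below are random placeholders standing in for learned parameters, and the layer sizes are arbitrary assumptions, not taken from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectifying nonlinearity applied after each linear map.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Compose the layers: h_{k+1} = relu(W_k h_k + b_k)."""
    h = x
    activations = [h]
    for W, b in layers:
        h = relu(W @ h + b)
        activations.append(h)
    return activations

# Three layers mapping an 8-dim "raw" input through a wider intermediate
# stage down to a 4-dim abstract code.
sizes = [8, 16, 8, 4]
layers = [(rng.standard_normal((m, n)) * 0.5, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

acts = forward(rng.standard_normal(8), layers)
print([a.shape[0] for a in acts])  # dimensionality at each level
```

Each entry of `acts` is one level of the hierarchy; in a trained network, shallow entries would respond to simple features and deeper entries to compositions of them.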

2. What mechanisms enable transfer learning and shared representation across tasks in connectionist networks?

This theme explores how connectionist networks leverage shared internal representations or weights to facilitate transfer of knowledge between related tasks. Investigating these mechanisms is pivotal because transfer learning allows models to generalize better and reduce training time when adapting to new but structurally similar problems, thereby enhancing learning efficiency in multi-task and continual learning scenarios.

Key finding: Demonstrates that connectionist networks which share internal weights between hidden layers, when jointly trained on structurally analogous tasks, develop shared internal representations (identical elements). These shared weights...
Key finding: Introduces Locality Guided Neural Networks (LGNN) that impose local topological structures on neurons within layers, clustering neurons by correlated activations. This structured organization enhances interpretability and...
Key finding: Shows that when Hopfield networks are trained on noisy examples rather than archetypes, a supervised Hebbian learning protocol can infer the underlying archetypes effectively. This approach allows the network to extract...
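The Hopfield finding above can be sketched with a toy supervised Hebbian protocol: the coupling matrix is built by averaging outer products of noisy examples of a single archetype, and a noisy cue is then relaxed under the network dynamics. The network size, noise level, and synchronous update rule below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 64, 200                      # neurons, noisy examples
archetype = rng.choice([-1, 1], size=N)

# Each example flips every bit of the archetype with probability p.
p = 0.2
flips = rng.random((M, N)) < p
examples = np.where(flips, -archetype, archetype)

# Supervised Hebbian rule: average the outer products of the examples.
J = examples.T @ examples / M
np.fill_diagonal(J, 0.0)            # no self-couplings

# Relax a fresh noisy cue with synchronous sign updates.
s = np.where(rng.random(N) < p, -archetype, archetype)
for _ in range(10):
    s = np.sign(J @ s)
    s[s == 0] = 1                   # break ties deterministically

# Overlap with the archetype: 1.0 means perfect recovery.
overlap = float(s @ archetype) / N
print(overlap)
```

Because the noisy examples average out to a scaled outer product of the archetype with itself, the relaxed state recovers the archetype even though the network never saw it directly.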

3. How do connectionist models address generalization, causality, and limitations in modeling cognition and vision?

This research direction focuses on the capacity of connectionist networks to generalize beyond training data, including out-of-distribution generalization, causal feature extraction, and the current limitations of deep networks in modeling human cognition and vision. Understanding these aspects is essential for improving network robustness and developing cognitively plausible AI that can reason and operate in varied environments.

Key finding: Develops a causal framework for explaining poor out-of-distribution performance of image classifiers due to reliance on spurious, non-robust features. The study proposes an estimand for the causal effect in image...
Key finding: Critically evaluates claims that deep neural networks are accurate models of human vision, showing that despite strong performance on certain benchmarks, these models fail to capture key psychological findings about human vision...
Key finding: Argues from a physicalist and epistemological perspective that autonomous self-learning robots, which require coherent categorization of infinite, continuously varying sensory inputs, face fundamental indistinguishability...
Key finding: Presents a statistical framework interpreting many connectionist models as performing Maximum A Posteriori (MAP) estimation under subjective probability distributions induced by their architecture. This formalism helps...
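The MAP reading in the last finding can be made concrete in the simplest case: for a linear model with Gaussian noise and a zero-mean Gaussian prior on the weights, minimising squared error plus an L2 penalty is exactly MAP estimation, and the solution is ridge regression in closed form. The data and penalty strength `lam` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: y = X w + Gaussian noise.
X = rng.standard_normal((50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.standard_normal(50)

# lam plays the role of (noise variance / prior variance).
lam = 1.0

# MAP / ridge solution: w = (X^T X + lam I)^{-1} X^T y
w_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(np.round(w_map, 2))
```

The same correspondence is what lets weight decay in larger connectionist models be read as a Gaussian prior over the weights.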

4. How can connectionist network architectures be improved to overcome depth-related issues and enhance interpretability?

This theme addresses architectural innovations that enable connectionist networks to better manage increasing depth, avoid degradation in training performance, and improve interpretability. These improvements are critical for scaling up network depth without loss of performance and enhancing explainability, which are key challenges in contemporary deep learning.

Key finding: Proposes GloNet architecture that integrates a globally connected layer summing features from all network depths to the output head, allowing self-regulation of information flow and mitigating depth-related training...
Key finding: Develops LGNNs that enforce topological locality within each layer by learning correlated neighboring neuron clusters. This facilitates explainability without modifying model architecture or post-processing, allowing both...
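The globally connected layer described for GloNet can be sketched as a running sum of every block's output that feeds the head, so the head (and its gradients) sees every depth directly rather than only the last block. This is an illustrative stand-in, not the authors' implementation; the sizes, depth, and initialisation are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(x):
    return np.maximum(0.0, x)

def glonet_forward(x, blocks, head_w):
    h, total = x, np.zeros_like(x)
    for W in blocks:
        h = relu(W @ h)
        total = total + h          # global layer accumulates every depth
    return head_w @ total          # head reads the sum, not just the last block

d, depth = 16, 12
blocks = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(depth)]
head_w = rng.standard_normal((4, d)) / np.sqrt(d)

out = glonet_forward(rng.standard_normal(d), blocks, head_w)
print(out.shape)
```

Because `total` receives a direct contribution from each block, shallow blocks keep a short gradient path to the output regardless of how many layers are stacked.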

All papers in Connectionist Networks

Bohm's holistic reframing of physics in Wholeness and the Implicate Order (1980) has direct relevance for consciousness studies. Bohm's interest in meaning reflects his theory of wholeness, with its structure of implicate and the...
We present a general discussion concerning the wholeness of what has been called infinite awareness, but here is called Omni-local consciousness. This model of consciousness has an interconnecting structure that has both local and...
Defined by enigmatic phenomena such as superposition, uncertainty, and entanglement, the quantum world represents the foundational reality beneath the perceivably deterministic classical environment. Despite our experiential interaction...
The task allocation problem in a distributed environment is one of the most challenging problems in a multiagent system. We propose a new task allocation process using deep reinforcement learning that allows cooperating agents to act...
In this paper, we argue that Bohm's unbroken and undivided totality he called the holomovement, the title he gave to the concept of the self-organizing universe, is more coherently understood when viewed as universal consciousness. Bohm's...
Existing traffic light controls are ineffective and cause problems such as congestion and pollution. The purpose of this study is to investigate the application of deep reinforcement learning to traffic control systems to...
In this essay I will attempt to define the dream and the experience of free will in terms of models of holographic processing in the brain. There are two basic models with similar results. Accordingly, in the first, Schrodinger (but not...
Recent research in task transfer and task clustering has necessitated the need for task similarity measures in reinforcement learning. Determining task similarity is necessary for selective transfer where only information from relevant...
The spontaneous self-organization of complex physical structures is a central issue in artificial intelligence. If self-organization were indeed a natural process, then 'cognition', i.e. all the mental processes that involve perception,...
Historically, models of dreams and dreaming have been situated either in the physiology or the psychology of the dreamer, that is, the physical or psychological state of the dreamer is generally thought to be the ground from which dream...
These are the references for my thesis, Dreaming in the Holo-Net
The current dominant thinking about the nature of living things and the cognition with which they are endowed is that their functionalities must all be reduced to "algorithms", that is, sets of operating rules, instructions,...
Self-learning robots are physically infeasible. Are the computational theories of cognition well-founded? Michel Troublé – director of research, robotics and artificial intelligence. Robot autonomy is a major issue. The aim is to create...
Robust autonomous agents should be able to cooperate with new teammates effectively by employing ad hoc teamwork. Reasoning about ad hoc teamwork allows agents to perform joint tasks while cooperating with a variety of teammates...
Recently researchers have introduced methods to develop reusable knowledge in reinforcement learning (RL). In this paper, we define simple principles to combine skills in reinforcement learning. We present a skill combination method that...