Key research themes
1. How do neural population models characterize effective connectivity and dynamics in brain activity?
This research theme investigates mathematical and computational models (especially neural mass, neural field, and conductance-based models) that represent the aggregate activity of neuronal populations and their synaptic interactions. These models aim to capture the mesoscopic scale of brain dynamics underlying electrophysiological signals (EEG/MEG) and to infer effective connectivity in distributed brain networks. Their relevance lies in enabling mechanistic interpretations of neural activity and in bridging neurobiology with observed data, which is critical for understanding both normal and pathological brain function.
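The kind of population dynamics such models describe can be made concrete with the classic Wilson-Cowan equations, a simple neural mass model of coupled excitatory and inhibitory mean firing rates. This is an illustrative sketch only; the coupling weights, sigmoid parameters, and external drive below are hypothetical values chosen for demonstration, not taken from any particular study.

```python
import math

def sigmoid(x, a=1.3, theta=4.0):
    """Sigmoidal population response function (hypothetical gain/threshold)."""
    return 1.0 / (1.0 + math.exp(-a * (x - theta)))

def simulate_wilson_cowan(steps=2000, dt=0.01, P=1.25):
    """Euler-integrate the excitatory (E) and inhibitory (I) population rates.

    P is an external input to the excitatory population; the w** terms are
    hypothetical synaptic coupling strengths between the two populations.
    """
    wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0
    E, I = 0.1, 0.1
    trajectory = []
    for _ in range(steps):
        dE = -E + (1.0 - E) * sigmoid(wEE * E - wEI * I + P)
        dI = -I + (1.0 - I) * sigmoid(wIE * E - wII * I)
        E += dt * dE
        I += dt * dI
        trajectory.append((E, I))
    return trajectory
```

Depending on the coupling parameters, such a two-population model settles to a fixed point or oscillates, which is the level of description at which effective connectivity between regions is typically modeled.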
2. How do connectionist architectures facilitate transfer learning, analogical reasoning, and structural mapping in cognitive tasks?
This theme explores connectionist (neural network) models that learn structural relationships across different but related tasks, enabling transfer of knowledge and analogical inference. The focus is on how weight sharing, shared hidden representations, and multitask learning allow networks to encode correspondences between identical or analogous task elements, accelerating learning and generalization in novel tasks. These models provide computational insights into human cognitive functions involving analogy, transfer learning, and schema formation, essential for understanding complex learning and reasoning.
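The core mechanism (a shared hidden representation shaped by gradients from multiple tasks) can be sketched minimally. In this hypothetical example, two task-specific output heads (logical AND and OR, stand-ins for any pair of related tasks) sit on one shared hidden layer, so error signals from both tasks update the same representation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_multitask(steps=2000, lr=0.5, seed=0):
    """One shared hidden layer, two task-specific heads, joint training.

    Tasks A (AND) and B (OR) are illustrative; the point is that the
    shared weights accumulate gradient contributions from both tasks.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    yA = np.array([[0], [0], [0], [1]], dtype=float)  # task A: AND
    yB = np.array([[0], [1], [1], [1]], dtype=float)  # task B: OR

    W_shared = rng.normal(scale=0.5, size=(2, 8))  # shared representation
    W_A = rng.normal(scale=0.5, size=(8, 1))       # head for task A
    W_B = rng.normal(scale=0.5, size=(8, 1))       # head for task B

    def loss():
        H = sigmoid(X @ W_shared)
        return float(((sigmoid(H @ W_A) - yA) ** 2).sum()
                     + ((sigmoid(H @ W_B) - yB) ** 2).sum())

    initial = loss()
    for _ in range(steps):
        H = sigmoid(X @ W_shared)
        pA, pB = sigmoid(H @ W_A), sigmoid(H @ W_B)
        dA = (pA - yA) * pA * (1 - pA)  # squared-error deltas per head
        dB = (pB - yB) * pB * (1 - pB)
        # The shared layer receives backpropagated error from BOTH heads:
        dH = (dA @ W_A.T + dB @ W_B.T) * H * (1 - H)
        W_A -= lr * (H.T @ dA)
        W_B -= lr * (H.T @ dB)
        W_shared -= lr * (X.T @ dH)
    return initial, loss()
```

Because the hidden layer must serve both tasks, it tends to encode features useful to each, which is the mechanism the multitask and transfer-learning literature appeals to.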
3. What roles do connectionist models play in linguistic processing, concept representation, and language acquisition?
This theme focuses on connectionist approaches to modeling language phenomena, including morphological acquisition, rule learning, lexical representation, linguistic relativity, and cognitive semantics. Such models challenge traditional symbolic frameworks by demonstrating emergent rule-like behavior, graded representations, and integrated learning of lexical and grammatical forms. The research investigates how distributed representations and learning dynamics underlie concept formation, word acquisition, and the interplay between linguistic and non-linguistic cognition, contributing to theories of the mental lexicon and language processing architectures.
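The idea of graded, distributed lexical representations can be illustrated with a minimal co-occurrence model: each word is represented by counts of its neighboring words, and similarity between words is a matter of degree rather than all-or-none category membership. The toy corpus below is invented purely for illustration:

```python
import math
from collections import Counter

# Hypothetical toy corpus; "." marks sentence boundaries
corpus = ("the cat chased a mouse . the dog chased a bird . "
          "a red car passed the house . a blue car passed the shop").split()

def context_vectors(tokens, window=1):
    """Distributed representation: each word maps to a Counter of neighbors."""
    vecs = {}
    for i, word in enumerate(tokens):
        if word == ".":
            continue  # boundary marker, not a word
        ctx = vecs.setdefault(word, Counter())
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i and tokens[j] != ".":
                ctx[tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Graded similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

vecs = context_vectors(corpus)
```

On this corpus, "cat" and "dog" occur in overlapping contexts and so come out more similar than "cat" and "car", a toy version of the graded lexical structure these models are argued to capture.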
4. How do deep learning models compare to biological neural systems in vision and cognition, and what are their limitations?
This theme critically examines the relationship and differences between deep artificial neural networks and biological neural systems, especially in vision. It explores the extent to which neural network architectures and operational principles correspond to brain mechanisms, challenges the assumption that deep models fully capture human cognition, and discusses the implications for explainability and neuroscientific relevance. It highlights fundamental mismatches, such as differing physical substrates, representational abstractions, and algorithmic processes, and the ensuing challenges in interpreting deep models as cognitive or neural analogs.