Neural network (biology)

From Wikipedia, the free encyclopedia

Animated confocal micrograph, showing interconnections of medium spiny neurons in mouse striatum

A neural network, also called a neuronal network, is an interconnected population of neurons (typically containing multiple neural circuits).[1] Biological neural networks are studied to understand the organization and functioning of nervous systems.

Closely related are artificial neural networks, machine learning models inspired by biological neural networks. They consist of artificial neurons, which are mathematical functions that are designed to be analogous to the mechanisms used by neural circuits.

The remainder of this article describes the biological aspects of neural networks, detailing their systemic structure from the single neuron up to the neuronal network. It also bridges these biological concepts with artificial neural networks and covers how research and advances in artificial neural networks have influenced biological studies and theories in neuroscience.

Key biology

A biological neural network is composed of a group of chemically connected or functionally associated neurons.[2] A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic synapses[3] and other connections are possible. Apart from electrical signaling, there are other forms of signaling that arise from neurotransmitter diffusion.

A neural network begins with a single neuron. Nerve cells, or neurons, are unique in their ability to translate electrical signals into chemical signals and, through that mechanism, to connect pathways throughout the body. Because they serve this unique purpose, they also have a unique morphology.

A single neuron consists of three main parts: the cell body, the dendrites, and the axon. The cell body acts as the control center of the neuron and contains the cell's nucleus and other organelles. Dendrites are branch-like extensions off one end of the cell body whose main purpose is to receive signals from other neurons; these are known as afferent signals, meaning they move toward the central nervous system. The axon is a long, tail-like structure that exits the other end of the cell body and is responsible for carrying action potentials away from the dendrites and cell body to other neurons (efferent signals). The axon terminal is the end of the axon, where the action potential (electrical signal) triggers the release of neurotransmitters, neuromodulators, or neurohormones (chemical signals) that cross the synapse to communicate with neighboring neurons.[4] When such synaptic connections form among a large number of neurons, a neural network results. At each synaptic connection, transmission proceeds from the presynaptic neuron to the postsynaptic neuron.

While the transmission of a signal between neurons is carried out through chemicals, the transmission of a signal within a neuron, from the dendrites to the axon terminal, occurs through changes in membrane potential. This action potential is possible because each neuron has a charged cellular membrane (an imbalance of voltage between the outside and the inside of the cell), created through the presence of voltage-gated ion channels. The charge of a neuron is influenced by neurotransmitters and other external stimuli, which allows the chemical signal to be converted back into an electrical one. In essence, the membrane has a resting potential when the neuron is not transmitting a signal, maintained by sodium-potassium pumps and potassium leak channels. An action potential, when it occurs, is regulated by voltage-gated sodium and potassium channels together with the sodium-potassium pumps. When the action potential reaches the axon terminal, the neuron releases neurotransmitters or other chemical messengers, which send a message to the adjacent neuron in the network and make another action potential there more or less likely, thereby continuing or stopping the message being transmitted.[5] This is the "language" that neural networks use to communicate, and it is the basis of the entire nervous system's function.
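The resting-potential/threshold/action-potential cycle described above is often abstracted in computational neuroscience as a leaky integrate-and-fire model. The sketch below is a minimal illustration of that abstraction only; the parameter values and function name are illustrative choices, not taken from this article:

```python
def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0,
                 v_reset=-75.0, tau=10.0, dt=1.0):
    """Leaky integrate-and-fire sketch (values in mV, ms are illustrative).

    The membrane potential leaks back toward the resting potential;
    sufficient input drives it past threshold, which counts as an
    "action potential", after which the potential resets.
    """
    v = v_rest
    spike_times = []
    for t, i in enumerate(input_current):
        # Leak toward rest plus input drive, integrated over one time step.
        v += dt * (-(v - v_rest) + i) / tau
        if v >= v_thresh:
            spike_times.append(t)  # the neuron "fires"
            v = v_reset            # and resets below rest
    return spike_times

# No input: the membrane sits at rest and never fires.
# Sustained input: the membrane repeatedly crosses threshold and spikes.
quiet = simulate_lif([0.0] * 50)
driven = simulate_lif([30.0] * 100)
```

This captures only the threshold-and-reset logic of the description above; real action potentials involve the full voltage-gated channel dynamics the article describes.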

Connection to artificial neural networks

Artificial neural networks are popular tools in computational studies, biological studies, and artificial intelligence. Artificial neural networks are modelled after biological neural networks, and can provide significant contributions to the study of biological neural networks.

A biological neural network is not a simple system. The human brain forms millions of neural networks from roughly 10¹¹ neurons joined by about 10¹⁵ synaptic connections. Each of these networks is specialized for a particular function, such as basic survival, intensive thought processing, or memory formation.[6]

The biological neural network can be modeled mathematically by first considering the function of a single neuron. The mathematical interpretation of a single neuron, known as a node within the network, is based on the input signals it receives from surrounding neurons: its total activation is the sum of those inputs, weighted by its synaptic connectivity with the neurons around it.[6] The output signal of the neuron is then a function of its activation, and when these functions are layered together in more complex arrangements, an artificial neural network can be created.
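The weighted-sum-and-activation description above can be sketched in a few lines of Python. The sigmoid activation and the specific weights below are illustrative choices, not details from the article:

```python
import math

def neuron_output(inputs, weights, bias):
    """One node of a network: total activation is the weighted sum of
    inputs plus a bias; the output is a function (here, the logistic
    sigmoid) of that activation."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# A node receiving signals from three surrounding neurons:
out = neuron_output([1.0, 0.5, -0.25], [0.4, 0.6, 0.2], bias=0.1)
```

Layering many such functions, with the output of one layer feeding the inputs of the next, yields an artificial neural network in the sense described above.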

Artificial intelligence, cognitive modelling, and artificial neural networks are information processing paradigms inspired by how biological neural systems process data. Artificial intelligence and cognitive modelling try to simulate some properties of biological neural networks, since artificial neural networks act as a subset of machine learning that allows artificial intelligence to slowly adapt and process complex data.[7] In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots.

Neural network theory, the study of computational models (artificial neural networks) inspired by the biology of the brain and nervous system (biological neural networks), has helped identify how neurons in the brain function and has provided the basis for efforts to create artificial intelligence and more complex systems based on non-linear transformation and optimization.[8]

History

The preliminary theoretical base for contemporary neural networks was independently proposed by Alexander Bain[9] (1873) and William James[10] (1890). In their work, both argued that thoughts and body activity resulted from interactions among neurons within the brain.

Computer simulation of the branching architecture of the dendrites of pyramidal neurons[11]

For Bain,[9] every activity led to the firing of a certain set of neurons. When activities were repeated, the connections between those neurons strengthened. According to his theory, this repetition was what led to the formation of memory. The general scientific community at the time was skeptical of Bain's[9] theory because it required what appeared to be an inordinate number of neural connections within the brain. It is now apparent that the brain is exceedingly complex and that the same brain "wiring" can handle multiple problems and inputs.[12]

James'[10] theory was similar to Bain's;[9] however, he suggested that memories and actions resulted from electrical currents flowing among the neurons in the brain. His model, by focusing on the flow of electrical currents, did not require individual neural connections for each memory or action.[13]

C. S. Sherrington[14] (1898) conducted experiments to test James' theory. He ran electrical currents down the spinal cords of rats. However, instead of demonstrating an increase in electrical current as projected by James, Sherrington found that the electrical current strength decreased as the testing continued over time. Importantly, this work led to the discovery of the concept of habituation.

McCulloch and Pitts[15] (1943) also created a computational model for neural networks based on mathematics and algorithms. They called this model threshold logic. These early models paved the way for neural network research to split into two distinct approaches. One approach focused on biological processes in the brain and the other focused on the application of neural networks to artificial intelligence.
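McCulloch and Pitts' threshold logic can be illustrated with a minimal sketch: a unit fires if and only if the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are conventional textbook choices for Boolean gates, shown only as an illustration:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts threshold unit: outputs 1 (fires) iff the
    weighted sum of binary inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Threshold logic realizes Boolean gates by choice of threshold:
def AND(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=1)
```

Networks of such units were the starting point for both of the research directions the paragraph above mentions.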

In 1956, Svaetichin discovered some of the neural processes underlying neural networks in vivo. Studying the functioning of second-order retinal cells (horizontal cells), he found that in this first processing layer they operate by an opponency mechanism, which helped explain the first stage of processing in the visual system.

The parallel distributed processing approach of the mid-1980s became popular under the name connectionism. The text by Rumelhart and McClelland[16] (1986) provided a full exposition on the use of connectionism in computers to simulate neural processes.

Artificial neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated, as it is not clear to what degree artificial neural networks mirror brain function.[17]

Use to advance neuroscience

While artificial neural networks have a large role in neural network theory and other advances in machine learning, artificial intelligence, and modern digital services, they can also be used to advance biological studies and neuroscience. Theoretical and computational neuroscience is the field concerned with the analysis and computational modeling of biological neural systems. Since neural systems are closely related to cognitive processes and behavior, the field is closely related to cognitive and behavioral modeling.

The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (neural network models) and theory (statistical learning theory and information theory).

Types of models

Many models are used, defined at different levels of abstraction and modeling different aspects of neural systems. They range from models of the short-term behavior of individual neurons, through models of the dynamics of neural circuitry arising from interactions between individual neurons, to models of behavior arising from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.

Connectivity

In August 2020, scientists reported that adding appropriate feedback (bi-directional) connections can accelerate and improve communication within and between modular neural networks of the brain's cerebral cortex, and can lower the threshold for their successful communication. They showed that adding feedback connections between a resonance pair can support successful propagation of a single pulse packet throughout the entire network.[18][19] The connectivity of a neural network stems from its biological structure and is usually challenging to map experimentally, so scientists use a variety of statistical tools to infer the connectivity of a network from observed neuronal activity, i.e., spike trains. Recent research has shown that statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances, providing deeper insight into the structure of neural circuits and their computational properties.[20]
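As a toy illustration of the covariance-based idea mentioned above, the sketch below computes pairwise sample covariances of binned spike trains and flags high-covariance pairs as putatively connected. The spike trains, threshold, and function names are invented for illustration; real inference methods are far more sophisticated:

```python
def covariance(x, y):
    """Sample covariance of two equal-length binned spike trains
    (lists of spike counts per time bin)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

def infer_connections(spike_trains, threshold=0.05):
    """Toy connectivity inference: flag neuron pairs whose spike-count
    covariance exceeds a threshold as putatively connected."""
    n = len(spike_trains)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if covariance(spike_trains[i], spike_trains[j]) > threshold]

# Neuron 1 echoes neuron 0; neuron 2 fires independently of both:
t0 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
t1 = t0[:]                       # perfectly correlated with t0
t2 = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
pairs = infer_connections([t0, t1, t2])
```

Here only the correlated pair is flagged, mirroring the idea that spike-train covariances carry information about underlying connections.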

Future of neuroscience research

Using different models and artificial intelligence tools creates opportunities for new ways of understanding neural function. In doing so, these technologies have the potential to offer treatments and advanced technology for neuroscience-related pathologies such as Alzheimer's disease and post-traumatic stress disorder.[21] As mentioned above, artificial neural networks have already given neuroscience a foundation for studying "complex behaviors, heterogenous neural activity, and circuit connectivity" in ways that were not possible before.[22] They provide scientists with data-analysis tools, help with modeling complex behaviors and complex activity, and offer an optimization perspective.[22] A recent study involving brain-computer interfaces (BCIs) shows how this technology can rework the nervous-system programming of patients with paralysis: by decoding an individual's motor intention from intracortical neural recordings into control commands, patients are able to interact with external devices or regain sensory and motor function.[23] Future applications of artificial neural networks will continue to build on abilities already gaining interest in neuroscience, such as analyzing large-scale data, building predictive models of the visual cortex, accelerating the discovery of drugs and therapies, and simulating neural development and plasticity.[24][25][26][27]

Recent improvements

Initial research in biological neuroscience was concerned mostly with the electrical characteristics of neurons, but in recent years an important part of the investigation has been the exploration of the role of neuromodulators such as dopamine, acetylcholine, and serotonin in behavior and learning.[28][29]

Biophysical models, such as BCM theory, have been important in understanding mechanisms for synaptic plasticity, and have had applications in both computer science and neuroscience.[30]

References

  1. ^ Hopfield JJ (April 1982). "Neural networks and physical systems with emergent collective computational abilities". Proceedings of the National Academy of Sciences of the United States of America. 79 (8): 2554–2558. Bibcode:1982PNAS...79.2554H. doi:10.1073/pnas.79.8.2554. PMC 346238. PMID 6953413.
  2. ^ Sterratt D, Graham B, Gillies A, Willshaw D (2011). "Chapter 9". Principles of Computational Modelling in Neuroscience. Cambridge, U.K.: Cambridge University Press.
  3. ^ Arbib, p.666
  4. ^ Ludwig PE, Reddy V, Varacallo MA (2026), "Neuroanatomy, Neurons", StatPearls, Treasure Island (FL): StatPearls Publishing, PMID 28723006, retrieved April 5, 2026
  5. ^ "Neurons | Organismal Biology". organismalbio.biosci.gatech.edu. Retrieved April 6, 2026.
  6. ^ a b "Biological and Artificial Neural Networks". pages.hmc.edu. Retrieved April 6, 2026.
  7. ^ "Origins of AI: From neurons to neural networks - Diplo". September 17, 2025. Retrieved April 6, 2026.
  8. ^ Peterson P (April 18, 2022). "Neural Network Theory" (PDF). University of Vienna.
  9. ^ a b c d Bain A (1873). Mind and Body: The Theories of Their Relation. New York: D. Appleton and Company.
  10. ^ a b James W (1890). The Principles of Psychology. New York: H. Holt and Company.
  11. ^ Cuntz H (2010). "PLoS Computational Biology Issue Image | Vol. 6(8) August 2010". PLOS Computational Biology. 6 (8) ev06.i08. doi:10.1371/image.pcbi.v06.i08.
  12. ^ Wilkes AL, Wade NJ (April 1997). "Bain on neural networks". Brain and Cognition. 33 (3): 295–305. doi:10.1006/brcg.1997.0869. PMID 9126397.
  13. ^ Queenan BN, Ryan TJ, Gazzaniga MS, Gallistel CR (May 2017). "On the research of time past: the hunt for the substrate of memory". Annals of the New York Academy of Sciences. 1396 (1): 108–125. Bibcode:2017NYASA1396..108Q. doi:10.1111/nyas.13348. PMC 5448307. PMID 28548457.
  14. ^ Sherrington CS (1898). "Experiments in Examination of the Peripheral Distribution of the Fibers of the Posterior Roots of Some Spinal Nerves". Proceedings of the Royal Society of London. 190: 45–186. doi:10.1098/rstb.1898.0002.
  15. ^ McCulloch W, Pitts W (1943). "A Logical Calculus of Ideas Immanent in Nervous Activity". Bulletin of Mathematical Biophysics. 5 (4): 115–133. Bibcode:1943BMaB....5..115M. doi:10.1007/BF02478259.
  16. ^ Rumelhart DE, McClelland J (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge: MIT Press.
  17. ^ Russell I. "Neural Networks Module". Archived from the original on May 29, 2014.
  18. ^ "Neuroscientists demonstrate how to improve communication between different regions of the brain". medicalxpress.com. Retrieved September 6, 2020.
  19. ^ Rezaei H, Aertsen A, Kumar A, Valizadeh A (August 2020). "Facilitating the propagation of spiking activity in feedforward networks by including feedback". PLOS Computational Biology. 16 (8) e1008033. Bibcode:2020PLSCB..16E8033R. doi:10.1371/journal.pcbi.1008033. PMC 7444537. PMID 32776924. S2CID 221100528. Text and images are available under a Creative Commons Attribution 4.0 International License.
  20. ^ Liang T, Brinkman BA (April 2024). "Statistically inferred neuronal connections in subsampled neural networks strongly correlate with spike train covariances". Physical Review E. 109 (4–1) 044404. Bibcode:2024PhRvE.109d4404L. doi:10.1103/PhysRevE.109.044404. PMID 38755896.
  21. ^ "How Artificial Neural Networks Help Us Understand Neural Networks in the Human Brain | Stanford HAI". hai.stanford.edu. Retrieved April 6, 2026.
  22. ^ a b Yang GR, Wang XJ (September 23, 2020). "Artificial Neural Networks for Neuroscientists: A Primer". Neuron. 107 (6): 1048–1070. doi:10.1016/j.neuron.2020.09.005. ISSN 1097-4199. PMC 11576090. PMID 32970997.
  23. ^ Dong Y, Wang S, Huang Q, Berg RW, Li G, He J (2023). "Science". Cyborg and Bionic Systems (Washington, D.c.). 4: 0044. doi:10.34133/cbsystems.0044. PMC 10380541. PMID 37519930. Retrieved April 6, 2026.
  24. ^ Zhu F, Grier HA, Tandon R, Cai C, Agarwal A, Giovannucci A, et al. (December 2022). "A deep learning framework for inference of single-trial neural population dynamics from calcium imaging with subframe temporal resolution". Nature Neuroscience. 25 (12): 1724–1734. doi:10.1038/s41593-022-01189-0. ISSN 1546-1726. PMC 9825112. PMID 36424431.
  25. ^ Lotter W, Kreiman G, Cox D (April 2020). "A neural network trained for prediction mimics diverse features of biological neurons and perception". Nature Machine Intelligence. 2 (4): 210–219. doi:10.1038/s42256-020-0170-9. ISSN 2522-5839. PMC 8291226. PMID 34291193.
  26. ^ Aborode AT, Emmanuel OA, Onifade IA, Olotu E, Otorkpa OJ, Mehmood Q, et al. (March 1, 2025). "The role of machine learning in discovering biomarkers and predicting treatment strategies for neurodegenerative diseases: A narrative review". NeuroMarkers. 2 (1) 100034. doi:10.1016/j.neumar.2024.100034. ISSN 2950-5887.
  27. ^ Yamazaki K, Vo-Ho VK, Bulsara D, Le N (June 30, 2022). "Spiking Neural Networks and Their Applications: A Review". Brain Sciences. 12 (7): 863. doi:10.3390/brainsci12070863. ISSN 2076-3425. PMC 9313413. PMID 35884670.
  28. ^ Slater C, Liu Y, Weiss E, Yu K, Wang Q (July 2022). "The Neuromodulatory Role of the Noradrenergic and Cholinergic Systems and Their Interplay in Cognitive Functions: A Focused Review". Brain Sciences. 12 (7): 890. doi:10.3390/brainsci12070890. PMC 9320657. PMID 35884697.
  29. ^ Peters KZ, Cheer JF, Tonini R (June 2021). "Modulating the Neuromodulators: Dopamine, Serotonin, and the Endocannabinoid System". Trends in Neurosciences. 44 (6): 464–477. doi:10.1016/j.tins.2021.02.001. PMC 8159866. PMID 33674134.
  30. ^ Squadrani L, Curti N, Giampieri E, Remondini D, Blais B, Castellani G (May 2022). "Effectiveness of Biologically Inspired Neural Network Models in Learning and Patterns Memorization". Entropy. 24 (5): 682. Bibcode:2022Entrp..24..682S. doi:10.3390/e24050682. PMC 9141587. PMID 35626566.