Neuromorphic Computing For Handwriting Pattern Recognition
2024, International Conference on Scientific and Academic Research
Abstract
Neuromorphic systems mimic the behaviour of neurons and synapses using digital or analogue circuits that are optimised for parallel processing. Because it exploits the distributed processing and parallelism found in biological brains, neuromorphic computing has the potential to outperform conventional computing architectures in areas such as pattern recognition, image processing, and artificial intelligence. The goal of this third generation of AI computing is to mimic the intricate neuronal network of the human brain: to compute and analyse unstructured data at a rate, and with an energy efficiency, that can compete with the biological brain. Spiking neural networks (SNNs) are the artificial intelligence counterpart of our biological network of neurons and synapses. They are layered structures of individual spiking neurons that can fire and interact with one another, initiating a cascade of changes in response to external inputs.
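As an illustration of the spiking behaviour described in the abstract, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron in Python with NumPy. The time constant, threshold, and input current values are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates input current, leaks toward rest, and emits a spike (then
# resets) whenever it crosses a threshold. All constants are illustrative.
def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, resistance=1.0):
    v = v_rest
    spikes, trace = [], []
    for i_t in input_current:
        # Euler step of dv/dt = (-(v - v_rest) + R * I) / tau
        v += dt / tau * (-(v - v_rest) + resistance * i_t)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset          # reset after the spike
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(trace), np.array(spikes)

# Constant supra-threshold input for 200 ms produces a regular spike train.
current = np.full(200, 1.5)
trace, spikes = simulate_lif(current)
print("spike count:", spikes.sum())
```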
Related papers
arXiv (Cornell University), 2017
Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.
2017 IEEE International Symposium on Circuits and Systems (ISCAS), 2017
Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks. In this paper, we review several possibilities to reverse-map these architectures to biologically more realistic spiking networks with the aim of emulating them on fast, low-power neuromorphic hardware. Since many of these devices employ analog components, which cannot be perfectly controlled, finding ways to compensate for the resulting effects represents a key challenge. Here, we discuss three different strategies to address this problem: the addition of auxiliary network components for stabilizing activity, the utilization of inherently robust architectures, and a training method for hardware-emulated networks that functions without perfect knowledge of the system's dynamics and parameters. For all three scenarios, we corroborate our theoretical considerations with experimental results on accelerated analog neuromorphic platforms.
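One widely used way of reverse-mapping a conventional network onto spiking hardware (not necessarily one of the strategies studied in this paper) is rate-based conversion, where ReLU activations are approximated by the firing rates of integrate-and-fire units. The sketch below illustrates the idea with randomly chosen weights; all values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny single-layer ReLU "ANN" with random weights (illustrative only).
W = rng.normal(0.0, 0.5, size=(4, 8))   # 8 inputs -> 4 units
x = rng.uniform(0.0, 1.0, size=8)
ann_out = np.maximum(W @ x, 0.0)        # ReLU activations to approximate

# Rate-based spiking approximation: non-leaky integrate-and-fire units are
# driven by the same weighted input; their firing rate over a long window
# approximates the ReLU output ("reset by subtraction" conversion rule).
dt, T, v_thresh = 1e-3, 10_000, 1.0
v = np.zeros(4)
spike_counts = np.zeros(4)
drive = W @ x                           # constant input current
for _ in range(T):
    v += dt * drive
    fired = v >= v_thresh
    spike_counts += fired
    v[fired] -= v_thresh
snn_estimate = spike_counts / (T * dt) * v_thresh

print("ANN activations :", np.round(ann_out, 3))
print("SNN rate estimate:", np.round(snn_estimate, 3))
```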
Spiking Neuron Networks (SNNs) are often referred to as the third generation of neural networks. Strongly inspired by natural computing in the brain and recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions between neurons, taking into account the time of spike firing. SNNs exceed the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic event-driven processing, they open up new horizons for developing models with an exponential capacity for memorization and a strong ability for fast adaptation. Today, the main challenge is to discover efficient learning rules that might take advantage of the specific features of SNNs while keeping the nice properties (general-purpose, easy-to-use, available simulators, etc.) of traditional connectionist models. This paper presents the history of the "spiking neuron", summarizes the most widely used models of neurons and synaptic plasticity, addresses the computational power of SNNs, and tackles the problem of learning in networks of spiking neurons.
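Among the synaptic-plasticity models such surveys cover is spike-timing-dependent plasticity (STDP). The sketch below shows the standard exponential pair-based STDP window applied to two spike trains; the time constants and learning rates are illustrative assumptions.

```python
import numpy as np

def stdp_weight_change(pre_spikes, post_spikes,
                       a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate when a presynaptic spike precedes a
    postsynaptic spike, depress when it follows. Times are in ms."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            delta = t_post - t_pre
            if delta > 0:                              # pre before post -> LTP
                dw += a_plus * np.exp(-delta / tau_plus)
            elif delta < 0:                            # post before pre -> LTD
                dw -= a_minus * np.exp(delta / tau_minus)
    return dw

# Presynaptic spikes arriving just before postsynaptic spikes strengthen
# the synapse; reversing the order weakens it.
pre = np.array([10.0, 30.0, 50.0])
post = np.array([15.0, 35.0, 55.0])
print("causal order, dw =", round(stdp_weight_change(pre, post), 4))
print("anti-causal order, dw =", round(stdp_weight_change(post, pre), 4))
```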
Molecules, 2019
Materials exhibiting memory or those capable of implementing certain learning schemes are the basic building blocks used in hardware realizations of neuromorphic computing. One of the common goals within this paradigm assumes the integration of hardware and software solutions, leading to a substantial efficiency enhancement in complex classification tasks. At the same time, the use of unconventional approaches towards signal processing based on information carriers other than electrical carriers seems to be an interesting trend in the design of modern electronics. In this context, the implementation of light-sensitive elements appears particularly attractive. In this work, we combine the abovementioned ideas by using a simple optoelectronic device exhibiting short-term memory for a rudimentary classification performed on a set of handwritten digits extracted from the Modified National Institute of Standards and Technology Database (MNIST), one of the standard datasets used for benchmarking.
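The general scheme described here, a physical element with short-term (fading) memory feeding a trained readout, can be sketched in software as follows. The device is abstracted as an exponentially decaying state driven by the input sequence, and synthetic two-class data stands in for MNIST, so everything below is an illustrative assumption rather than the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def fading_memory_response(sequence, decay=0.8):
    """Abstract 'device' with short-term memory: its state decays
    exponentially and is driven by the incoming signal samples."""
    state, trace = 0.0, []
    for s in sequence:
        state = decay * state + s
        trace.append(state)
    return np.array(trace)

# Synthetic stand-in for flattened digit images: two classes with
# different temporal structure in their pixel sequences.
n_per_class, length = 100, 64
class0 = rng.uniform(0, 1, (n_per_class, length)) * np.linspace(1, 0, length)
class1 = rng.uniform(0, 1, (n_per_class, length)) * np.linspace(0, 1, length)
X_seq = np.vstack([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Feature = final device state after the whole sequence has been fed in,
# plus a bias term; readout = least-squares linear classifier.
features = np.array([[fading_memory_response(s)[-1], 1.0] for s in X_seq])
w, *_ = np.linalg.lstsq(features, y.astype(float), rcond=None)
pred = (features @ w > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```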
2013
Abstract: In this paper we focus on how spiking neural networks can be used to identify the class of an input image using the LIF (leaky integrate-and-fire) model. We address the problem of identifying English alphabet character images with Artificial Neural Networks, which are mathematical computational models inspired by biological neurons. English alphabet character images ("A to z") are first analysed with a Feed Forward Neural Network, which belongs to the family of Artificial Neural Networks (ANNs). Classes K1 and K2 are considered for capital letters and small letters, respectively. The whole recognition process is divided into four steps: preprocessing, classification, post-processing, and finally the comparison of results.
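A minimal version of the pipeline described above, an image encoded into spikes and classified by a feedforward layer, might look like the Python sketch below. The synthetic "character" image, the Poisson rate encoding, and the untrained random weights are all illustrative assumptions rather than the paper's trained system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Preprocessing: a synthetic 8x8 binary "character" image, flattened.
image = (rng.uniform(0, 1, (8, 8)) > 0.6).astype(float).ravel()

# Spike encoding: each pixel drives a Poisson spike train whose rate is
# proportional to the pixel intensity (LIF-driven encoding would be an
# alternative, closer to the paper).
T, max_rate, dt = 100, 100.0, 1e-3          # 100 steps of 1 ms
spike_prob = image * max_rate * dt
spikes = rng.uniform(size=(T, image.size)) < spike_prob   # (time, pixels)

# Classification: spike counts fed through a small feedforward layer with
# random (untrained) weights, one output per class, mirroring the paper's
# two-class setup (K1 = capital letters, K2 = small letters).
counts = spikes.sum(axis=0)
W = rng.normal(0, 0.1, size=(2, image.size))
scores = W @ counts
print("class scores:", np.round(scores, 2),
      "-> predicted class K%d" % (scores.argmax() + 1))
```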
Frontiers in Neuroscience
Learning and development in real brains typically happen over long timescales, making long-term exploration of these features a significant research challenge. One way to address this problem is to use computational models to explore the brain, with Spiking Neural Networks a popular choice to capture neuron and synapse dynamics. However, researchers require simulation tools and platforms to execute simulations in real- or sub-real-time, to enable exploration of features such as long-term learning and neural pathologies over meaningful periods. This article presents novel multicore processing strategies on the SpiNNaker neuromorphic hardware, addressing parallelization of Spiking Neural Network operations through allocation of dedicated computational units to specific tasks (such as neural and synaptic processing) to optimize performance. The work advances previous real-time simulations of a cortical microcircuit model, parameterizing load balancing between computational units.
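The parallelization strategy described, dedicating separate computational units to synaptic and to neural processing, can be sketched as a two-stage update loop. The sketch below separates the stages into plain Python functions rather than SpiNNaker cores, and the network size, weights, and update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 100
weights = rng.normal(0.0, 0.3, size=(n_neurons, n_neurons))

def synaptic_stage(spiked, weights):
    """Work that would run on a dedicated 'synapse core': turn last
    step's spikes into input currents for every neuron."""
    return weights @ spiked

def neural_stage(v, currents, leak=0.9, v_thresh=1.0):
    """Work that would run on a dedicated 'neuron core': leaky membrane
    update, threshold check, and reset."""
    v = leak * v + currents
    spiked = (v >= v_thresh).astype(float)
    v[spiked > 0] = 0.0
    return v, spiked

# Two-stage simulation loop; on hardware, the two stages can be pipelined
# on different cores so synapse and neuron processing overlap in time.
v = np.zeros(n_neurons)
spiked = (rng.uniform(size=n_neurons) < 0.2).astype(float)  # seed activity
for _ in range(50):
    currents = synaptic_stage(spiked, weights)
    v, spiked = neural_stage(v, currents)
print("active neurons at final step:", int(spiked.sum()))
```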
2021
The capabilities of natural neural systems have inspired new generations of machine learning algorithms as well as neuromorphic very large-scale integrated (VLSI) circuits capable of fast, low-power information processing. However, it has been argued that most modern machine learning algorithms are not neurophysiologically plausible. In particular, the workhorse of modern deep learning, the backpropagation algorithm, has proven difficult to translate to neuromorphic hardware. In this study, we present a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing, implemented on Intel's Loihi neuromorphic research processor. We demonstrate a proof-of-principle three-layer circuit that learns to classify digits from the MNIST dataset. To our knowledge, this is the first work to show a Spiking Neural Network (SNN) implementation of the backpropagation algorithm that is fully on-chip, without a computer in the loop.
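For reference, the conventional (non-spiking) backpropagation that this work translates to spiking hardware is sketched below for a three-layer network on synthetic data. This is standard backprop in NumPy, not the synfire-gated spiking variant running on Loihi, and the synthetic inputs stand in for MNIST.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for MNIST: 256-dimensional inputs, 10 classes whose
# labels come from a fixed random "teacher" projection so the task is learnable.
n, d_in, d_hid, d_out = 512, 256, 64, 10
X = rng.normal(size=(n, d_in))
teacher = rng.normal(size=(d_in, d_out))
y = (X @ teacher).argmax(axis=1)
Y = np.eye(d_out)[y]                       # one-hot targets

# Three layers: input -> hidden (sigmoid) -> output (softmax).
W1 = rng.normal(0, 0.1, (d_in, d_hid))
W2 = rng.normal(0, 0.1, (d_hid, d_out))
lr = 0.5

for epoch in range(300):
    # Forward pass.
    H = 1.0 / (1.0 + np.exp(-X @ W1))
    logits = H @ W2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)

    # Backward pass: cross-entropy gradient propagated layer by layer.
    dlogits = (P - Y) / n
    dW2 = H.T @ dlogits
    dH = dlogits @ W2.T
    dW1 = X.T @ (dH * H * (1.0 - H))

    W2 -= lr * dW2
    W1 -= lr * dW1

print("final training accuracy:", round((P.argmax(axis=1) == y).mean(), 3))
```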
2021
All systolic or distributed neuromorphic architectures require power-efficient processing nodes. In this paper, a unifying tutorial is presented which implements multiple neuromorphic processing elements using a systematic analog approach including synapse, neuron and astrocyte models. It is shown that the proposed approach can successfully synthesize multidimensional dynamical systems into analog circuitry with minimum effort.
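As an example of the kind of multidimensional neuron dynamics such analog processing elements realize, the sketch below integrates the two-variable FitzHugh-Nagumo model in software. The choice of this particular model and its parameter values are illustrative assumptions, not the circuits proposed in the paper.

```python
import numpy as np

def fitzhugh_nagumo(i_ext, dt=0.01, steps=5000, a=0.7, b=0.8, tau=12.5):
    """Euler integration of the two-variable FitzHugh-Nagumo neuron model:
    v is the fast membrane-like variable, w the slow recovery variable."""
    v, w = -1.0, -0.5
    trace = np.empty(steps)
    for k in range(steps):
        dv = v - v**3 / 3.0 - w + i_ext
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        trace[k] = v
    return trace

# A constant external drive above the excitation threshold produces
# repetitive spiking in the fast variable.
trace = fitzhugh_nagumo(i_ext=0.5)
print("peak membrane variable:", round(trace.max(), 3))
```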
International Journal of Machine Learning and Computing
Spiking Neural Networks (SNNs) are known as a branch of neuromorphic computing and are currently used in neuroscience applications to understand and model the biological brain. SNNs could also potentially be used in many other application domains such as classification, pattern recognition, and autonomous control. This work presents a highly scalable hardware platform called POETS, and uses it to implement SNNs on a very large number of parallel and reconfigurable FPGA-based processors. The current system consists of 48 FPGAs, providing 3072 processing cores and 49152 threads. We use this hardware to implement up to four million neurons with one thousand synapses. Comparison to other similar platforms shows that the current POETS system is twenty times faster than the Brian simulator, and at least two times faster than SpiNNaker.
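A back-of-the-envelope view of how a network of this size maps onto the stated hardware resources is sketched below; the even partitioning is a simplifying assumption, not the actual POETS placement strategy.

```python
# Rough mapping of the network described above onto the stated hardware
# resources (even partitioning is an assumption, not the POETS algorithm).
n_neurons = 4_000_000
synapses_per_neuron = 1_000
n_fpgas = 48
n_cores = 3072                    # as stated: 3072 processing cores
n_threads = 49152                 # as stated: 49152 threads

cores_per_fpga = n_cores // n_fpgas
threads_per_core = n_threads // n_cores
neurons_per_thread = n_neurons / n_threads
synapses_per_thread = neurons_per_thread * synapses_per_neuron

print(f"{cores_per_fpga} cores per FPGA, {threads_per_core} threads per core")
print(f"~{neurons_per_thread:.0f} neurons and ~{synapses_per_thread:.0f} synapses per thread")
```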
This is a brief overview of current cognitive computing research in neuromorphic computation. In recent times, cognitive computing has been actively explored in artificial intelligence and in different fields of neuroscience. Researchers have been studying ways to reproduce the design of the biological brain in computer simulation for the past twenty years. Since 2008, a team of researchers at the IBM Almaden Center, in association with scientists from five universities, has been working on an ambitious project named SyNAPSE to create a brain-like architecture with the ultimate goal of constructing a unified computational theory of human cognition. The choice of computer simulation as a method is based on the "understanding by building" approach (Todd, 2012, p. ix), which assumes that the human brain can be better studied through large-scale modeling of its neural networks.
