Neuromorphic systems, in contrast to the von Neumann computer architecture, offer novel solutions to problems in artificial intelligence.
This revolutionary, biologically inspired approach models the human brain by connecting artificial neurons and synapses, unveiling new neuroscience concepts.
Many researchers have invested heavily in neuro-inspired models, algorithms, learning techniques, and operating systems to explore neuromorphic systems, and many relevant applications have been realised.
This study provides a thorough overview of Neuromorphic Computing and its potential for advancing new research applications.
Finally, we conclude with a comprehensive discussion and a plausible roadmap for the most recent application prospects, to help developers gain a better grasp of Neuromorphic Computing and build their own artificial intelligence projects.
Neuromorphic computing is increasingly favoured over the von Neumann architecture for applications such as cognitive processing.
Based on highly connected synthetic neurons and synapses, biologically inspired methodologies are being developed to realise theoretical neuroscientific models and challenging machine learning techniques. The von Neumann architecture is the prevailing computing standard for machines.
However, it differs significantly from the human brain's working model in terms of organisational structure, power requirements, and processing capabilities.
As a result, neuromorphic computation has evolved in recent years as a complementary architecture to the von Neumann system. Neuromorphic computations form the basis of a programming framework:
the system can learn from these computations and construct applications that simulate neuromorphic functionalities. Examples include neuro-inspired models, algorithms and learning methods, hardware and devices, support systems, and applications.
Neuromorphic architectures must meet a number of major and unique requirements, including increased connectivity and parallelism, reduced power consumption, and the collocation of memory and processing.
They offer a superior capacity to execute complex computational tasks compared with standard von Neumann architectures, resulting in power savings and a smaller footprint.
Because these characteristics constitute the bottleneck of the von Neumann architecture, the neuromorphic architecture is viewed as a suitable solution for implementing machine learning algorithms.
Real-time performance, parallelism, the von Neumann bottleneck, scalability, low power, footprint, fault tolerance, speed, online learning, and neuroscience are the top ten reasons for adopting neuromorphic architectures. The major driving factor of the neuromorphic system is real-time performance.
These devices can often run neural network computing applications faster than von Neumann systems thanks to parallelism and hardware-accelerated processing.
Low power consumption has become a central concern of neuromorphic system research in recent years. Biological neural networks are intrinsically asynchronous [8, 9], and the brain's efficient data-driven processing can be based on event-based computing models [10, 11].
In the von Neumann architecture, however, handling the communication of asynchronous, event-based processes in large systems is a challenge.
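The event-driven style of computation described above can be illustrated with a minimal sketch: neurons are updated only when a spike event arrives, ordered by timestamp, rather than on every tick of a global clock. All names, weights, and thresholds below are illustrative assumptions, not taken from any specific neuromorphic platform.

```python
import heapq

def run_events(events, weights, threshold=1.0):
    """Process (time, neuron, current) events in time order; return emitted spikes.

    Instead of stepping a global clock, each neuron's state is touched only
    when an event targets it -- the essence of event-based computing.
    """
    queue = list(events)
    heapq.heapify(queue)                  # events processed in timestamp order
    potential = {}                        # per-neuron accumulated input
    spikes = []
    while queue:
        t, neuron, current = heapq.heappop(queue)
        potential[neuron] = potential.get(neuron, 0.0) + weights.get(neuron, 1.0) * current
        if potential[neuron] >= threshold:
            spikes.append((t, neuron))    # neuron fires only on sufficient input
            potential[neuron] = 0.0       # reset after firing
    return spikes
```

A von Neumann realisation would instead poll every neuron each cycle; the event queue sketch touches only the neurons that actually receive input.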
Because it co-locates memory and computation in the neuron nodes and achieves ultra-low power consumption in data processing, the hardware implementation of neuromorphic computing is well suited to large-scale parallel computing architectures.
Furthermore, its scalability makes it simple to obtain large neural networks. Because of these benefits, neuromorphic architecture is preferable to von Neumann architecture for hardware implementation.
The fundamental issue in neuromorphic computation is how to organise the neural network model. Biological neurons typically consist of cell bodies, axons, and dendrites.
The neuron models implementing each of these components can be classified into five types according to whether they are biologically or computationally driven.
ARTIFICIAL NEURAL NETWORKS
An Artificial Neural Network (ANN) is a network of nodes inspired by the biological human brain. The goal of an ANN is to perform cognitive activities such as problem solving and machine learning.
Mathematical models of ANNs originated in the 1940s, but the field remained dormant for a long period (Maass, 1997). With the success of ImageNet in 2009, ANNs have grown in popularity (Hongming et al., 2018).
This is due to advancements in ANN models and in hardware systems capable of handling and implementing them. Based on their processing units and performance, ANNs can be classified into three generations (Sugiarto & Pasila, 2018) (Figure 1).
Figure 1: Generations of Artificial Neural Networks in Neuromorphic Computing
McCulloch and Pitts' work on the first generation of ANNs began in 1943 (Sugiarto & Pasila, 2018). Their approach is based on a computational model of neural networks in which each neuron acts as a simple threshold unit, referred to as a “perceptron.”
In the 1960s, Widrow and his students extended the model with additional hidden layers (the Multi-Layer Perceptron) for increased accuracy, naming it MADALINE (Widrow & Lehr, 1990). However, first-generation ANNs were far from biological models, producing only digital outputs.
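A first-generation neuron of this kind can be sketched as a simple threshold unit: weighted inputs are summed and compared against a threshold, producing a purely digital (0/1) output. The weights and threshold below are illustrative values, not taken from the original work.

```python
def perceptron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Example: a two-input AND gate realised as a single threshold unit.
def and_gate(a, b):
    return perceptron([a, b], weights=[1.0, 1.0], threshold=1.5)
```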
They were essentially decision trees with if-else criteria. The second generation of ANNs built on the prior generation by incorporating continuous activation functions into the first-generation models' decision trees. These functions interact with each visible and hidden layer of perceptrons to form the “deep neural network” structure.
As a result, second-generation models are more similar to biological neural networks (Patterson, 2012; Camuñas-Mesa et al., 2019). The functions of second-generation models are still being researched, while existing models are in high demand from industry and science.
The majority of recent breakthroughs in artificial intelligence (AI) are based on these second-generation models, which have demonstrated their accuracy in cognitive tasks (Zheng & Mazumder, 2020).
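The shift from digital outputs to continuous activation functions can be sketched as follows; the sigmoid activation, layer sizes, and weights below are illustrative assumptions, not a specific published model.

```python
import math

def sigmoid(z):
    """Continuous activation function characteristic of second-generation ANNs."""
    return 1.0 / (1.0 + math.exp(-z))

def dense_layer(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through the activation."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Stacking such layers yields a "deep" network; weights here are arbitrary.
hidden = dense_layer([0.5, -1.0], weights=[[0.8, -0.2], [0.3, 0.9]], biases=[0.1, -0.1])
output = dense_layer(hidden, weights=[[1.2, -0.7]], biases=[0.0])
```

Unlike the first generation's hard 0/1 outputs, every value here is a graded activation in (0, 1), which is what makes gradient-based training of deep networks possible.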
Spiking Neural Networks (SNNs) are the third generation of ANNs. They are biologically inspired structures that encode information as binary events (spikes).
Their learning method differs from that of previous generations and is based on brain principles (Kasabov, 2019). SNNs are not bound to a clock-cycle-based firing mechanism:
if a neuron receives enough input to exceed its internal threshold, it produces an output (a spike). Furthermore, neuron architectures can operate concurrently (Sugiarto & Pasila, 2018).
In principle, these two characteristics let SNNs consume less energy and run faster than second-generation ANNs (Maass, 1997).
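The threshold-and-fire behaviour described above can be sketched with a simple leaky integrate-and-fire neuron, a common SNN neuron model; the leak, threshold, and reset values below are illustrative assumptions.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over a sequence of inputs.

    The membrane potential leaks toward rest each step, integrates the
    incoming current, and emits a spike (1) whenever it crosses the
    threshold, after which it resets. Parameter values are illustrative.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak, then integrate the input current
        if v >= threshold:        # internal threshold exceeded -> spike
            spikes.append(1)
            v = v_reset           # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes
```

Note that the neuron stays silent until accumulated input crosses its threshold, so no output (and, on neuromorphic hardware, little energy) is produced for weak or absent input.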
SNNs have the following advantages over ANNs (Kasabov, 2019):
Efficient modelling of temporal (spatio-temporal or spectro-temporal) data
Efficient modelling of systems involving many time scales
Bridging the gap between higher-level functions and “lower” level genetics
Integration of many modalities, such as sound and vision, into a single system
Event prediction and predictive modelling
Rapid and massively parallel data processing
Information processing that is compact
Structures that can be scaled (from tens to billions of spiking neurons)
Lower energy consumption if implemented on neuromorphic platforms
Brain-inspired (BI) SNN deep learning and deep knowledge representation
BI-AI development is made possible by the use of brain-inspired SNN.
Although SNNs appear to have many advantages over ANNs (Table 2), the advances in microchip technology that gradually allow scientists to implement such complex structures and discover new learning algorithms (Lee et al., 2016; Furber, 2016) are still relatively recent (after the 2010s).
Spiking neural network technology, in use for only about ten years, is thus relatively young compared with the second generation.
As a result, it requires further research and intensive implementation to realise its benefits more efficiently and effectively.