
Spiking Neural Network Architecture Design And Performance Exploration Towards The Design Of A Scalable Neuro-Inspired System For Complex Cognition Applications


Abstract

Research into artificial neural networks (ANNs) is inspired by how biological neurons process information dynamically and massively in parallel. Conventional ANN research has found a wide range of applications, including automation, but problems of timing, power consumption, and massive parallelism remain.

Spiking neural networks (SNNs), the third generation of neural networks, have drawn attention from a growing number of researchers because of their timing concept, which brings their behaviour close to that of biological spiking neural networks (bio-SNNs).

Spike timing plays an important role in every spiking neuron and makes SNNs computationally more plausible than conventional ANNs. The distinct timing and spike-firing behaviour of real biological neurons can be modelled artificially using neural dynamics and spiking neuron models.

The spike-timing-dependent plasticity (STDP) learning rule also incorporates timing: it is a general plasticity rule in which synaptic changes depend on the actual timing of pre- and postsynaptic spikes, making it well suited to training SNNs.

This work presents a software implementation of an SNN based on the Leaky Integrate-and-Fire (LIF) neuron model and the STDP learning algorithm. We also present a novel hardware design and architecture for a lightweight neuro-processing core (NPC) to be implemented in a packet-switched neuro-inspired system named NASH.

The NASH architecture uses the LIF neuron model and a reduced flit format size, which addresses the problems of timing and high power consumption. Software evaluation shows that our network achieves 94% accuracy on the MNIST dataset of handwritten digits.

Chapter One

INTRODUCTION

1.1 Neuro-Inspired Systems and Spiking Neurons

Artificial Neural Networks are an attractive, competitive, and vast research area in artificial intelligence, inspired by the incredible and powerful performance of the interconnected biological brain.

According to Robert Hecht-Nielsen, one of the first inventors of neurocomputing, a neural network is defined as ‘a computing system made up of some simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs’.

The biological neural networks of the brain gave birth to Artificial Neural Networks (ANNs), with scientists investigating how best to mimic brain functionality using silicon chips.

This is based on the idea that the brain's connection architecture can be mimicked with silicon and wires in place of living neurons and dendrites. The human brain is a structure made of approximately 86 billion cells called neurons, each connecting to thousands of other cells through axons (von Bartheld, Bahney & Herculano-Houzel, 2016).

Dendrites receive inputs from sensory organs and the external environment and generate electric impulses that travel rapidly through the neural network, passing messages from neuron to neuron.

Deep learning, an active research area in machine learning, is concerned with developing algorithms inspired by the structure and function of the brain. Artificial neural networks have been developed to solve various computational problems, but earlier research never considered the timing issues that are the hallmark of spiking neural networks (SNNs).

Currently, most ANN models are built on extremely simplified brain dynamics (Ghosh-Dastidar & Adeli, 2009). They have been used as popular computational tools to solve complex classification, function estimation and pattern recognition problems.

Spiking Neural Networks integrate spike-timing-dependent plasticity (STDP), a mechanism that adjusts the strength of a connection (synapse) between neurons in the brain based on the relative timing of a particular neuron's output and input action potentials.
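As a concrete illustration of this rule, the following is a minimal sketch of a pair-based STDP weight update in Python. The amplitudes and time constants (A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS) are illustrative assumptions, not values taken from this work.

```python
import numpy as np

# Pair-based STDP: the weight change depends only on the time difference
# between one pre-synaptic and one post-synaptic spike.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms (assumed)

def stdp_delta_w(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> strengthen the synapse (LTP)
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fired before pre -> weaken the synapse (LTD)
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# A pre-synaptic spike arriving 5 ms before the post-synaptic spike
print(stdp_delta_w(t_pre=10.0, t_post=15.0))  # small positive weight change
```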

Over the past decade, SNNs comprising spiking neurons have been developed. Information transmission in these neurons mimics that of natural neurons because of their inherent dynamic representation.

The massive parallelism of the brain has drawn researchers to many areas of neuro-inspired systems, and numerous algorithms have been developed to enable machines, systems, and even IoT devices to leverage brain-like performance. Neurocomputing has a variety of robust applications in science and technology.

The applications of neuro-inspired systems have proved particularly inspiring in biomedicine and neuroscience, where the collaboration between biological and electronic circuits has led to ultra-low-power, noise-robust chips that could serve the deaf, blind, and paralyzed, and that could also lead to advanced ear-inspired radio receivers (Sarpeshkar, 2012).

1.2 Research Background and Motivation

The background of this research centres on Spiking Neural Networks (SNNs) modelled with the Leaky Integrate-and-Fire (LIF) spiking neuron model and trained with the STDP learning algorithm. We are motivated by the efficient and parallel processing of the biological neuron.
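Since the SNNs in this work are modelled with the LIF neuron, a minimal sketch of the LIF membrane dynamics is shown below, using simple Euler integration. All parameter values are illustrative assumptions rather than the settings used in our implementation.

```python
# Minimal LIF neuron driven by a constant input current, integrated with the
# Euler method. All parameters below are assumed, illustrative values.
TAU_M   = 10.0    # membrane time constant (ms)
V_REST  = -65.0   # resting potential (mV)
V_RESET = -65.0   # reset potential after a spike (mV)
V_TH    = -50.0   # firing threshold (mV)
R_M     = 10.0    # membrane resistance (MOhm)
DT      = 0.1     # simulation time step (ms)

def simulate_lif(input_current, steps):
    """Return the spike times (ms) produced by a constant input current (nA)."""
    v = V_REST
    spike_times = []
    for step in range(steps):
        dv = (-(v - V_REST) + R_M * input_current) / TAU_M
        v += DT * dv                       # leaky integration of the input
        if v >= V_TH:                      # threshold crossing -> emit a spike
            spike_times.append(step * DT)
            v = V_RESET                    # reset the membrane potential
    return spike_times

print(simulate_lif(input_current=2.0, steps=1000))  # simulate 100 ms
```

With these assumed values, a constant 2 nA input drives the membrane past the threshold repeatedly and the neuron emits a regular spike train, while a sub-threshold input (for example 1 nA, giving only a 10 mV depolarisation) produces no spikes at all.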

The biological brain implements massively parallel computations using a complex architecture that is different from the current von Neumann machine. Our brain is a low-power, fault-tolerant, high-performance machine.

It consumes only about 20 W, and brain circuits continue to operate as the organism needs even when they are perturbed. The interconnection of brain neurons motivates our vision of future on-chip systems.

1.3 Statement of Problem

Timing is a significant issue in implementing neuro-inspired systems and is not considered in conventional neural networks. Conventional neural networks encode information with static input coding (for example, encoding patterns as the binary bits 0011 and 0010). In an SNN, by contrast, time-related parameters can be used alongside the pattern coding to represent information, which increases the information-processing capacity of the network.
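To make the contrast concrete, the sketch below converts a static input value (such as a normalised pixel intensity) into a spike train using rate-based Poisson coding, so that spike timing rather than a fixed binary pattern carries the information. The maximum rate and window length are illustrative assumptions.

```python
import numpy as np

# Rate (Poisson) coding: a stronger input produces a denser spike train.
def poisson_spike_train(intensity, duration_ms=100.0, dt_ms=1.0, max_rate_hz=100.0):
    """intensity in [0, 1] -> boolean spike train sampled every dt_ms."""
    rate_hz = intensity * max_rate_hz              # target firing rate
    p_spike = rate_hz * dt_ms / 1000.0             # spike probability per time step
    steps = int(duration_ms / dt_ms)
    return np.random.rand(steps) < p_spike

# A bright pixel (0.9) fires far more often than a dim one (0.1)
print(poisson_spike_train(0.9).sum(), poisson_spike_train(0.1).sum())
```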

Implementing SNNs with spiking neuron models addresses this timing problem, hence the need for this research.

1.4 Research Aim and Objectives

This research aims to explore the theoretical framework behind spiking neuron models and to investigate the architecture of the OASIS Network-on-Chip (OASIS-NoC). In pursuit of this aim, we are guided by the following objectives:

• To implement a software-based SNN using the Leaky Integrate-and-Fire neuron model;

• To train the network with the STDP learning algorithm;

• To test the SNN for digit recognition on the MNIST dataset of handwritten digits;

• To propose a novel, scalable, high-level Neuro-Inspired Architectures in Hardware (NASH) design for the future OASIS Network-on-Chip (OASIS-NoC).

1.5 Research Methodology

The methodology of this research is based on studying the related literature to establish the general concepts. We propose a design for a novel high-level architecture, NASH, for future on-chip systems, and we describe its main components.

We implemented a software-based Spiking Neural Network (SNN) using a Leaky Integrate-and-Fire (LIF) neuron model. The SNN is trained with the STDP learning algorithm for digit recognition using the MNIST dataset of handwritten digits, as sketched below.
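A high-level sketch of that procedure follows: each image is encoded as spike trains, integrated by a layer of LIF-like neurons, and the weights are updated with a simplified STDP-style rule. The network dimensions, threshold, decay factor, and learning constants are illustrative assumptions, and random arrays stand in for the real MNIST images.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_outputs, t_steps = 28 * 28, 10, 100
weights = rng.uniform(0.0, 0.5, size=(n_inputs, n_outputs))

def encode(image, max_p=0.1):
    """Poisson-style encoding: pixel intensity -> spike probability per step."""
    return rng.random((t_steps, n_inputs)) < image * max_p

for _ in range(5):                               # a few placeholder "images"
    image = rng.random(n_inputs)                 # real code would load an MNIST image here
    spikes_in = encode(image)
    v = np.zeros(n_outputs)                      # membrane potentials of the output layer
    last_pre = np.full(n_inputs, -np.inf)        # most recent pre-synaptic spike times
    for t in range(t_steps):
        last_pre[spikes_in[t]] = t
        v = 0.9 * v + spikes_in[t].astype(float) @ weights   # leaky integration
        fired = v > 5.0                                       # threshold crossing
        v[fired] = 0.0                                        # reset the neurons that fired
        # STDP-style update: potentiate synapses whose pre-spike preceded this post-spike
        dt = t - last_pre
        weights[:, fired] += 0.01 * np.exp(-dt / 20.0)[:, None]
    weights = np.clip(weights, 0.0, 1.0)                      # keep weights bounded
```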

The rest of the research work is organized in chapters as follows: Chapter 2 is the literature review of related work, discussing SNNs, learning mechanisms, spiking neuron models, and software simulators; Chapter 3 covers methodology and implementation, discussing STDP, the STDP algorithm, and methods;

Chapter 4 presents the design of a scalable SNN based on the NoC architecture, discussing NoC and OASIS-NoC and proposing the novel NASH as the main contribution of this research; Chapter 5 contains results, analysis, and evaluation;

and finally, Chapter 6 discusses the research conclusion, challenges, and future work.

1.6 Research Contributions

Interconnecting many cores on a single chip has remained a bottleneck in system design, since high power consumption, scalability, and high throughput must all be considered appropriately.

Network-on-Chip (NoC) is a promising solution for the efficient interconnection of many cores on a single chip (Ahmed & Abdallah, 2012). This research leverages the NoC architecture to propose novel, high-level, and scalable Neuro-Inspired Architectures in Hardware for complex cognition applications; a brief sketch of the deterministic routing typical of such mesh-based NoCs follows the contribution list below.

Hence, we make the following sub-contributions:

1. Study and implementation of a software-based Spiking Neural Network (SNN) using a Leaky Integrate-and-Fire (LIF) neuron model with the spike-timing-dependent plasticity (STDP) learning rule.

2. A proposed design for a novel architecture and circuit development towards the implementation of a spiking neuro-inspired architecture, together with the hardware design and evaluation of a LIF core for the neuro-inspired spiking architecture.
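As a brief illustration of the interconnect side, the sketch below shows dimension-ordered (XY) routing, a simple deterministic scheme commonly used in 2D-mesh NoCs. The port names and mesh layout are illustrative assumptions and do not describe the actual NASH or OASIS-NoC router.

```python
# Dimension-ordered (XY) routing in a 2D mesh: resolve the X offset first,
# then the Y offset, then eject the flit to the locally attached core.
def xy_route(src, dst):
    """Return the sequence of output ports a flit takes from src to dst.

    src, dst: (x, y) coordinates of routers in a 2D mesh.
    """
    x, y = src
    dst_x, dst_y = dst
    hops = []
    while x != dst_x:
        hops.append("EAST" if dst_x > x else "WEST")
        x += 1 if dst_x > x else -1
    while y != dst_y:
        hops.append("NORTH" if dst_y > y else "SOUTH")
        y += 1 if dst_y > y else -1
    hops.append("LOCAL")
    return hops

print(xy_route((0, 0), (2, 1)))  # ['EAST', 'EAST', 'NORTH', 'LOCAL']
```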

 

 
