
Hardware Emulation Study Of Neuronal Processing In Cortex For Pattern Recognition







Abstract

An artificial neural network (ANN) is a computing model inspired by the neural network of the biological brain, and over the last few decades it has been applied with great success in areas such as business, medicine, industry, automotive, astronomy, and finance.

Since neural networks are inherently parallel architectures, there have been several earlier research efforts to build custom ASIC-based systems that include multiple parallel processing units. However, these ASIC-based systems suffered from several limitations, such as being able to run only specific algorithms and restricting the size of the network.

Recently, much work has focused on implementing artificial neural networks on reconfigurable computing platforms. Reconfigurable computing allows the processing density to be increased beyond that provided by general-purpose computing systems. Field Programmable Gate Arrays (FPGAs) can be used for reconfigurable computing and offer flexibility in design, with performance approaching that of Application Specific Integrated Circuits (ASICs).

This thesis presents a study of an FPGA-based acceleration solution and a performance exploration of a feedforward artificial neural network (FFANN). The architecture is described using the Very-High-Speed Integrated Circuits Hardware Description Language (VHDL) and is implemented and demonstrated on an FPGA board.

Synthesis and simulation were performed with the Quartus II tool and ModelSim respectively. The system was efficiently trained and evaluated in hardware on a digit recognition application.

Chapter One

1.1 Introduction

Moore’s law predicted that the number of transistors on a dense integrated circuit doubles approximately every two years (Moore, 1975). So far this has held true, but it is only a matter of time before this scaling reaches its limit, because further increasing the number of transistors on a chip makes it consume more power, overheat, and become impossible to cool.

Moreover, it is difficult to get a conventional computer with the von Neumann architecture to perform tasks such as understanding human language, recognizing objects, or learning to dance, activities the human brain carries out very easily.

The human brain is not good at arithmetic operations, but it excels at operations that involve processing continuous streams of data from the environment, and it can do so very quickly. So, to build a computer able to carry out these activities, a computing paradigm called the artificial neural network, which mimics the biological brain, was adopted (Abdallah, 2017).

An artificial neural network is a computing paradigm modeled after the neural network of the biological brain. The biological brain is made up of billions of neurons which are interconnected to form a network. It is fault tolerant, consumes an extremely low amount of power, and can carry out significant parallel computation (Indiveri, Linares-Barranco, Legenstein, Deligeorgis, & Prodromakis, 2013).

This computing paradigm started as early as 1943 (Macukow, 2016) and has continued to improve, finding application in pattern recognition, a discipline aimed at classifying objects (text, images, speech, etc.), as well as in image recognition, object classification, and much more.

1.2 Statement of Problem

There has been massive success in implementing neural networks in software, and one of the reasons is that software allows for flexibility. However, real-time applications like autonomous vehicles, real-time surveillance cameras, air traffic control systems, etc. require much higher speed of operation (Abdallah, 2016).

This speed can better be offered by neural networks implemented in hardware rather than in software. Although it lacks flexibility, hardware implementation gives the neural network more speed, the advantage of greater parallelism, and cost effectiveness by reducing the number of components (Misra & Saha, 2010).

1.3 Biological Neuron

The biological neuron has four features which the artificial neuron models: the dendrites, which receive input signals from other neurons into the cell body; the neuron cell body (soma), which processes the input signals; the axon, which transfers the result of the processed signal out of the cell body; and the synapse, which serves as the point of connection between two neurons and also plays a part in the transfer of output from one neuron to another (Hagan & Beale, n.d.).

Figure 1.1: Structure of a Biological Neuron

1.4 Artificial Neuron

An artificial neuron takes in one or more inputs as numbers, and each input is multiplied by the synaptic weight (which represents the strength of the connection between two neurons) of its connection.

The neuron then sums the products of each input and its synaptic weight. A bias is added to the value of the sum, and finally an activation function operates on it to determine the final output (Hagan & Beale, n.d.).

Several activation functions can be employed when designing a neural network, but the choice depends on the specification of the problem the designer wants the neuron to solve. These activation functions can be either linear (linear, saturating linear, symmetric saturating linear, etc.) or non-linear (log-sigmoid, hyperbolic tangent sigmoid, etc.) (Hagan & Beale, n.d.).
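As an illustration (in Python rather than the VHDL used for the hardware work), a minimal sketch of these activation functions follows; the function names are ours, and the definitions are the standard ones from the literature.

import numpy as np

def linear(n):
    # Linear (identity) activation: output equals the net input.
    return n

def saturating_linear(n):
    # Saturating linear: clips the output to the range [0, 1].
    return np.clip(n, 0.0, 1.0)

def symmetric_saturating_linear(n):
    # Symmetric saturating linear: clips the output to the range [-1, 1].
    return np.clip(n, -1.0, 1.0)

def log_sigmoid(n):
    # Log-sigmoid: squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-n))

def tan_sigmoid(n):
    # Hyperbolic tangent sigmoid: squashes any real input into (-1, 1).
    return np.tanh(n)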

Figure 1: Mathematical Model of an Artificial Neuron

The neuron computes its output as a = f(wx + b), where:
• x, a column vector, is the input;
• w, a matrix with one row, is the synaptic weight;
• b is the bias;
• Σ is the summation function;
• f is the activation function.
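To make the model concrete, here is a minimal Python/NumPy sketch of a single artificial neuron computing a = f(wx + b); the input, weight, and bias values are illustrative assumptions, and the log-sigmoid is just one possible choice of f.

import numpy as np

def log_sigmoid(n):
    # Log-sigmoid activation function f.
    return 1.0 / (1.0 + np.exp(-n))

def neuron_output(w, x, b, f=log_sigmoid):
    # Single artificial neuron: multiply inputs by weights, sum, add bias, apply f.
    n = np.dot(w, x) + b      # net input: weighted sum of inputs plus the bias
    return f(n)               # activation function produces the final output

# Illustrative example: a neuron with three inputs.
x = np.array([0.5, -1.0, 2.0])   # input column vector
w = np.array([0.2, 0.4, -0.1])   # one row of synaptic weights
b = 0.3                          # bias
print(neuron_output(w, x, b))    # output lies in (0, 1) because of the log-sigmoid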

1.5 Artificial Neural Network (ANN)

A neuron alone can solve only trivial problems; to solve bigger problems, an interconnection of multiple neurons arranged in layers and working together is required. This interconnection forms a network which is called a neural network. The way these neurons are arranged in relation to each other in a network is called the architecture.

The arrangement is organized mainly by controlling the direction of the synaptic connections between neurons. Artificial neural networks are arranged in three layers: input, hidden and output.

The input layer is tasked with receiving input from the environment, the hidden layer with processing the input to identify patterns, and the output layer with presenting the result of the processing done in the hidden layer (da Silva, Hernane Spatti, Andrade Flauzino, Liboni, & dos Reis Alves, 2017).

Figure 2: Artificial Neural Network (ANN)

1.6 ANN Architectures

The neurons in a neural network can be connected in different ways, and these patterns of connection are what is referred to as architectures. The most common among them is the feedforward neural network. This architecture has an input layer, one or more hidden layers, and an output layer.

The data comes in through the input layer and flows in one direction through the hidden layer(s) until it reaches the output layer. Other neural network architectures include the recurrent neural network, which allows data to flow around in cycles and can therefore remember information for a long time, and symmetrically connected neural networks (Hinton, Srivastava, & Swersky, 2012).
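A minimal Python/NumPy sketch of such a feedforward pass is given below, assuming one hidden layer, log-sigmoid activations, and randomly initialised weights; the layer sizes and values are illustrative assumptions, not taken from the thesis.

import numpy as np

def log_sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

def feedforward(x, weights, biases):
    # Propagate an input vector through each layer in turn, in one direction only.
    a = x
    for W, b in zip(weights, biases):
        a = log_sigmoid(W @ a + b)   # each layer: weighted sum, bias, activation
    return a

# Illustrative 3-4-2 network: 3 inputs, 4 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases  = [rng.standard_normal(4), rng.standard_normal(2)]
print(feedforward(np.array([0.1, 0.5, -0.2]), weights, biases))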

1.7 Learning Algorithms

For a neural network to solve problems and produce accurate results, it needs to be trained; that is, it must learn how to do so. Neural network learning is classified into three groups of algorithms.

1.7.1 Supervised Learning

In supervised learning, the network is given data in pairs: an input and a target result. The aim is for the network to extract information from the labeled dataset given to it so that it can label new data sets. It can also be called function approximation.
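As a minimal illustration (not the thesis’ digit-recognition setup), the Python sketch below trains a single log-sigmoid neuron on a small labelled dataset, the logical AND function, by gradient descent; the learning rate and number of epochs are illustrative assumptions.

import numpy as np

# Labelled dataset: inputs X paired with target outputs t (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(1)
w = rng.standard_normal(2)   # synaptic weights
b = 0.0                      # bias
lr = 0.5                     # learning rate

for epoch in range(2000):
    for x, target in zip(X, t):
        a = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # neuron output
        error = target - a                        # supervised signal: target minus output
        # Gradient-descent update for squared error with a log-sigmoid neuron.
        grad = error * a * (1.0 - a)
        w += lr * grad * x
        b += lr * grad

# After training the outputs approach the target labels 0, 0, 0, 1.
print(np.round([1.0 / (1.0 + np.exp(-(w @ x + b))) for x in X], 2))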

Figure 3: Supervised Learning

1.7.2 Unsupervised Learning

Here, the network is given only input data set(s), and from them it is expected to derive some structure from the relationships that exist within the input data. This kind of learning deals more with description.

Figure 4: Unsupervised Learning
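One minimal illustration of unsupervised learning is competitive (winner-take-all) learning, sketched below in Python/NumPy: two units organise themselves around structure in unlabelled data. The two-cluster data, number of units, and learning rate are illustrative assumptions rather than anything specified in the thesis.

import numpy as np

# Unlabelled input data drawn from two illustrative clusters.
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                  rng.normal(0.8, 0.05, (50, 2))])

# Two competing units, each with its own weight vector.
W = rng.random((2, 2))
lr = 0.1

for epoch in range(20):
    for x in rng.permutation(data):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))  # the closest unit wins
        W[winner] += lr * (x - W[winner])                  # winner moves toward the input

print(np.round(W, 2))   # each row should settle near one of the cluster centres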

1.7.3 Reinforcement Learning

This type of learning is based on the concept of reward: the network makes decisions that will enable it to obtain the maximum reward.

Figure 5: Reinforcement Learning

1.8 Research Objectives

The goal of this thesis is to acquire a deep understanding of neuro-inspired computing and its applications, and to emulate in hardware the neuronal processing in the cortex for pattern recognition. This will be carried out on a Field Programmable Gate Array (FPGA), a configurable integrated circuit that can be programmed using a Hardware Description Language (HDL).

 


 

1.9 Organization of Work

This work has been organized as follows: Chapter 2 starts with a brief history of the neural network and goes on to give an insight into related works. Chapter 3 presents the general system architecture, a description of the individual components that make up the entire system and the process of implementation.

Chapter 4 discusses the analysis of the implementation and the results achieved, evaluating power consumption and complexity. Chapter 5 concludes the research, giving insight into possible future work.
