
HARDWARE EMULATION STUDY OF NEURONAL PROCESSING IN CORTEX FOR PATTERN RECOGNITION


Chapter One
1.1 Introduction
Moore’s law states that the number of transistors on a dense integrated circuit doubles approximately every two years (Moore, 1975). So far this has held true, but it is only a matter of time before the limit is reached, as packing in more transistors causes a chip to consume more power, overheat, and become difficult to cool.

Moreover, it is difficult to get a conventional computer with a Von Neumann architecture to perform tasks such as interpreting human language, recognising objects, or learning to dance, which the human brain accomplishes effortlessly.

The human brain is not adept at arithmetic operations, but it excels at processing continuous streams of input from the environment, which it can accomplish quickly.

To create a computer capable of performing these tasks, a computing paradigm known as the artificial neural network, which resembles the biological brain, was developed (Abdallah, 2017).

Artificial Neural Networks are computer paradigms modelled after the neural networks seen in the biological brain. The biological brain is made up of billions of neurons that link to form a network.

It is fault resilient, requires very little power, and can perform considerable parallel calculations (Indiveri, Linares-Barranco, Legenstein, Deligeorgis, & Prodromakis, 2013).

This computer paradigm began in 1943 (Macukow, 2016) and has continued to evolve, with applications in pattern recognition, picture identification, object classification, and much more.

1.2 Statement of Problem

The neural network has been widely adopted in software, and one of the reasons for this is its versatility. Real-time applications, such as driverless vehicles, security cameras, and air traffic control systems, require high speeds for functioning (Abdallah, 2016).

Neural networks deployed in hardware rather than software can provide faster performance. Although lacking in flexibility, hardware implementation provides the neural network with increased speed, parallelism, and cost effectiveness by lowering the number of components (Misra & Saha, 2010).

1.3 Biological Neuron

The biological neuron has four features that the artificial neuron models: the dendrites, which receive input signals from other neurons into the cell body; the cell body (containing the nucleus), which processes the input signal; the axon, which transfers the result of the processed signal out of the cell body; and the synapse, which serves as the point of connection between two neurons and also plays a role in passing signals from one neuron to the next.

Figure 1.1 shows the structure of a biological neuron.

1.4 Artificial Neuron

An artificial neuron receives inputs in the form of numbers or sets of numbers, each of which is multiplied by the synaptic weight of its connection. The neuron then sums the products of each input and its synaptic weight. A bias is added to this sum, and an activation function is applied to determine the final output.
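The computation described above (weighted sum, plus bias, through an activation function) can be sketched in a few lines of Python. The weights, bias, and input values here are illustrative assumptions, not taken from the text:

```python
import math

def neuron(inputs, weights, bias, activation):
    """Compute the output of a single artificial neuron:
    sum each input times its synaptic weight, add the bias,
    then apply the activation function."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(s)

def log_sigmoid(s):
    """Log-sigmoid activation: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical example: two inputs, two weights, one bias.
out = neuron([0.5, -1.0], [0.8, 0.2], 0.1, log_sigmoid)
```

Passing the activation function as an argument keeps the neuron generic, so any of the functions discussed below can be swapped in.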

When creating a neural network, a variety of activation functions can be used; the choice depends on the problem the designer wants the network to solve.

These activation functions can be linear (Linear, Saturating Linear, Symmetric Saturating Linear, etc.) or nonlinear (Log-Sigmoid, Hyperbolic Tangent Sigmoid, etc.) (Hagan & Beale, n.d.).
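As a minimal sketch, the activation functions named above can be written as follows (the clamping ranges [0, 1] and [-1, 1] are the conventional definitions of the saturating linear functions and are assumed here):

```python
import math

def linear(s):
    """Linear: output equals the input."""
    return s

def saturating_linear(s):
    """Saturating linear: clamps the output to the range [0, 1]."""
    return min(1.0, max(0.0, s))

def symmetric_saturating_linear(s):
    """Symmetric saturating linear: clamps the output to [-1, 1]."""
    return min(1.0, max(-1.0, s))

def log_sigmoid(s):
    """Log-sigmoid: nonlinear squashing into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

def tan_sigmoid(s):
    """Hyperbolic tangent sigmoid: nonlinear squashing into (-1, 1)."""
    return math.tanh(s)
```

The linear family is cheap to compute, while the sigmoids are differentiable everywhere, which matters for gradient-based training.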

Figure 1 shows a mathematical model of an artificial neuron.

Where:

• x: the input, a column vector.

• W: the synaptic weights, a one-row matrix.

• b: the bias.

• Σ: the summation function.

• f: the activation function, applied to the weighted sum plus bias to produce the output.


1.5 Artificial Neural Network (ANN)

A single neuron can solve only trivial problems on its own; larger problems require many neurons, stacked in layers, operating together. These connections form a network known as a neural network.

The architecture of a network refers to the arrangement of these neurons in relation to one another. This arrangement is determined primarily by the direction of the synaptic connections between neurons.

Artificial neural networks include three layers: input, hidden, and output. The input layer receives data from the environment, the hidden layer processes it to find patterns, and the output layer displays the results of the hidden layer (da Silva, Hernane Spatti, Andrade Flauzino, Liboni, & dos Reis Alves, 2017).
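The three-layer structure described above can be sketched as a forward pass through two fully connected layers. The 2-3-1 shape and all weight and bias values below are hypothetical, chosen only to illustrate the data flow:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weights, biases, activation):
    """One fully connected layer: each row of `weights` holds the
    synaptic weights of one neuron in the layer."""
    return [activation(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical 2-3-1 network: 2 inputs, 3 hidden neurons, 1 output.
W_hidden = [[0.1, 0.4], [-0.3, 0.2], [0.5, -0.1]]
b_hidden = [0.0, 0.1, -0.2]
W_out = [[0.6, -0.4, 0.3]]
b_out = [0.05]

x = [1.0, 0.5]                                  # input layer: raw data
hidden = layer(x, W_hidden, b_hidden, sigmoid)  # hidden layer: finds patterns
output = layer(hidden, W_out, b_out, sigmoid)   # output layer: final result
```

Each call to `layer` moves the data one step forward, mirroring the input → hidden → output flow of the feedforward architecture discussed next.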

Figure 2. Artificial Neural Network (ANN)

1.6 ANN architectures

Neurons in a neural network can be connected in various ways, and these connection patterns are known as architectures. The feedforward neural network is the most prevalent type.

This design consists of an input layer, one or more hidden layers, and an output layer. The data enters the input layer and flows in a single path through the hidden layer(s) before reaching the output layer.

Other neural network topologies include recurrent neural networks, which allow data to flow in cycles and can remember information for a long period, and symmetrically linked neural networks (Hinton, Srivastava, & Swersky, 2012).

