Have you ever thought of a brain built into a machine?
Don't you get amazed by the technology Sophia possesses?
You will be startled to know that Sophia runs on artificially intelligent software that is continually being trained in the lab, so that her accuracy and speed keep improving day by day.
Yes, she can walk, talk, see, identify and understand like any human. Yes, she is fascinating. Yes, she is on the verge of outperforming humans at some tasks.
An Artificial Neural Network (ANN), as the name suggests, is an artificially built neural network, i.e. a neural network built in a machine, similar to the one present in the human brain.
A neural network comprises a large number of neurons. It sends and processes signals in the form of chemical and electrical impulses and derives meaningful output from them.
Before discussing neural networks in detail, let's look at the neuron first.
The neuron is the basic building block of an Artificial Neural Network.
But the question arises: how can we recreate these neurons in a machine? The whole purpose of deep learning is to mimic how the human brain works. We hope that by doing this we can have a machine that works like a human, because the human brain is one of the most powerful learning tools on the planet.
By itself, a neuron is not very powerful; this is its main limitation. But when many neurons work together, they create magic and can perform remarkably complex tasks.
Now the question arises: how do they work together?
This task is done using dendrites and the axon. Dendrites act as the receivers of signals for the neuron, and the axon acts as the transmitter of signals.
The axon of one neuron is connected to the dendrites of the next neuron, and this is how the signal is passed along.
In the picture above, we can see that the axon doesn't actually touch the dendrites of the other neuron. The signal is transmitted across this gap through a structure called the synapse.
Now, let’s move on to how we will represent or create neurons in a machine.
A basic neuron is shown below:
Now, let’s have a look at the procedure followed in the neuron.
Deep learning is another name for "stacked neural networks", i.e. networks composed of several layers. These layers are composed of nodes, and a node is a place where computation happens. Perceptrons in neural networks are inspired by real neurons in the human brain. Note that this is only an inspiration; they are not exactly like the human brain.
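The "stacked layers of nodes" idea can be sketched in a few lines of plain Python. Everything here (the helper name `layer_output`, the layer sizes, the weight and bias values) is made up for illustration, and the activation function is omitted to keep the focus on stacking:

```python
# A tiny "stacked" network: each layer is a list of nodes, and each node
# computes a weighted sum of the previous layer's outputs plus a bias.
# All numbers below are illustrative, not from any trained model.

def layer_output(inputs, weights, biases):
    """Compute one layer: each row of `weights` belongs to one node."""
    return [sum(w * x for w, x in zip(node_w, inputs)) + b
            for node_w, b in zip(weights, biases)]

# Two stacked layers: 3 inputs -> 2 hidden nodes -> 1 output node
x = [0.5, -1.0, 2.0]
hidden = layer_output(x, [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], [0.0, 0.1])
out = layer_output(hidden, [[1.0, -1.0]], [0.5])
print(out)
```

The output of one layer simply becomes the input of the next; that chaining is all "stacking" means here.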
The procedure a perceptron follows to process data is as follows:
1. On the left side, you have input neurons (the small circles), labeled x with subscripts 1, 2, …, m, each carrying a data input.
2. Each input is multiplied by a weight w, also labeled with subscripts 1, 2, …, m, along its arrow (also called a synapse) to the big circle in the middle: w1 * x1, w2 * x2, w3 * x3, and so on.
3. Once all the inputs are multiplied by their weights, we sum them up and add another pre-determined number called the bias.
4. We then push the result to the right, through the step function in the rectangle: if the result from step 3 is greater than or equal to 0, we get 1 as output; otherwise, we get 0.
5. The output is therefore either 1 or 0.
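The steps above can be written out as a minimal sketch in Python. The function names `step_function` and `perceptron` are illustrative, not from any library, and the weights and bias are hand-picked rather than learned:

```python
# Step 4: the step function — 1 if the value is >= 0, otherwise 0.
def step_function(z):
    return 1 if z >= 0 else 0

# Steps 1-5: multiply each input by its weight, sum, add the bias,
# then pass the result through the step function.
def perceptron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step_function(z)

# Example with three inputs x1..x3 and hand-picked weights and bias:
# 0.4*1.0 + 0.6*0.5 + 0.1*(-2.0) + 0.2 = 0.7, so the output is 1.
output = perceptron([1.0, 0.5, -2.0], [0.4, 0.6, 0.1], bias=0.2)
print(output)  # 1
```

In a real network, the weights and bias would be adjusted during training rather than fixed by hand.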
Now, let's look at the same graph in color:
Neural networks have a lot to offer the computing world. They learn from examples, and this is the key feature that makes them powerful. There is also no need to hand-craft an algorithm for each particular task: given the input, the network produces the output itself and also learns which inputs have the most impact on that output. They are also very helpful for real-time systems, since their parallel architecture keeps response and computation times fast.
Stay tuned for the next blog! We will talk about activation functions in detail in later posts, and also look at how neural networks learn and work.