“Is artificial intelligence less than our intelligence?” —Spike Jonze
It is natural to have such questions in mind when you see a computer do the same calculation faster than the brain, or when you play chess against a computer. You might wonder why we don’t have such amazingly fast powers.
But hold on! The brain still has stunning capabilities that no machine can match, although a number of scientists are working on making machines more similar to the brain. How are they doing it, then? Don’t such questions come to mind when comparing the human brain and the computer?
It is done using Neural Networks. In a neural network, hundreds or thousands of artificial neurons (modeled loosely on brain cells) are connected together so that, collectively, they behave a little like the human brain.
Now, the question arises, how?
Let’s look at an example of Property evaluation.
While looking for a property, a person considers measures like those shown in the image above. Sometimes, a person considers only the locality and the condition.
Sometimes, a person looks for a nearby market along with an amazing locality. Sometimes a person may consider the age of the property, and it matters in two cases: if the property is a heritage one, i.e. 100 or 1,000 years old, or if it is modern. Age is also a good example of the Rectifier function: the age factor stays at 0 until the property is old enough to count as heritage (say, 100 years old), and then takes on high values as demand for such properties increases.
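The rectifier behaviour described above can be sketched in a few lines of Python. The 100-year threshold and the `heritage_premium` helper are illustrative assumptions, not part of any real pricing model:

```python
def relu(x):
    """Rectifier (ReLU): 0 for negative inputs, the input itself otherwise."""
    return max(0.0, x)

def heritage_premium(age_years, threshold=100):
    """Hypothetical 'heritage' factor: zero until the property passes the
    assumed age threshold, then growing with age beyond it."""
    return relu(age_years - threshold)

print(heritage_premium(50))   # 0.0 -- a 50-year-old property adds nothing
print(heritage_premium(150))  # 50  -- the factor grows past the threshold
```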
Some people may look at all the above-mentioned factors before choosing a property.
Now a neural network is formed in which the measures are fed in as inputs, the combinations of factors that different people consider form the hidden layers, and the price is the output. All the hidden layers feed forward to predict the price, which is the output of the property evaluation.
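A minimal sketch of such a network in NumPy may make this concrete. The feature names, layer sizes, and randomly initialised weights are all made-up assumptions for illustration; an untrained network like this produces a meaningless price until it learns:

```python
import numpy as np

def relu(x):
    """Rectifier activation applied element-wise."""
    return np.maximum(0.0, x)

# Illustrative inputs: [area, distance_to_market, locality_score, age].
x = np.array([120.0, 2.0, 8.0, 30.0])

# Randomly initialised weights: 4 inputs -> 3 hidden units -> 1 output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4)) * 0.1
b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3)) * 0.1
b2 = np.zeros(1)

hidden = relu(W1 @ x + b1)       # each hidden unit combines some of the inputs
price = (W2 @ hidden + b2)[0]    # the output layer predicts the price
print(price)
```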
How do Neural Networks learn?
While the machine is learning, i.e. while it is being trained, and afterwards when it is used, information is fed into the network via the input units, which trigger the layers of hidden units, which in turn produce the output units. This common design is called a feedforward network.
In the picture above, the whole procedure is shown. Say we have some input values that are sent to the perceptron. The activation function is applied, we get an output, and we plot it on a chart as ŷ. In order to learn, we need to compare this output value (the predicted value, ŷ) to the actual value (y) that we want our neural network to produce; it is plotted on the chart as well. We observe a small difference between y and ŷ. To measure this difference, we compute the Cost function: one-half of the squared difference between the predicted and actual values. Basically, the Cost function tells us what the error in our prediction is, and our goal is to minimize it.
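The cost described above, one-half the squared difference between prediction and target, is short enough to write directly:

```python
def cost(y_hat, y):
    """Squared-error cost: C = 0.5 * (y_hat - y)**2."""
    return 0.5 * (y_hat - y) ** 2

print(cost(3.0, 5.0))  # 2.0 -- half of (3 - 5) squared
print(cost(4.0, 4.0))  # 0.0 -- a perfect prediction costs nothing
```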
After the Cost function is computed, its value is fed back into the neural network.
This kind of learning in a neural network is closely related to how we learn in our regular lives through the activities we perform. In those activities, our actions are either accepted or corrected by a trainer or coach, who guides us on how to get better at a certain task. Similarly, a neural network requires a trainer, in the sense of labelled examples describing what the output for a given input should have been. Based on the difference between the actual value and the value the network produced, an error value is computed and sent back through the system; this error is the Cost function. For every layer in the network, the error value is analyzed and used to adjust the thresholds and weights before the next input. In this way, with each run, the error becomes marginally smaller as the network learns how to analyze values. This way of learning is called backpropagation.
It is like bowling: we throw a ball, realize it went wrong, and throw again without repeating the same mistake. We used feedback to compare the outcome we wanted with what actually happened, figured out the difference between the two, and used that to change our next attempt.
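The feedback loop described above can be sketched for a single neuron. The toy data, starting weight, and learning rate are all assumptions chosen for illustration; the key line is the one where the error is sent back to adjust the weight:

```python
# Train a single linear neuron y_hat = w * x by repeatedly feeding the
# error back into the weight. The data follows the relation y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # starting weight (assumed)
lr = 0.05  # learning rate (assumed)

for epoch in range(100):
    for x, y in data:
        y_hat = w * x
        error = y_hat - y       # difference between prediction and target
        w -= lr * error * x     # gradient of the cost: dC/dw = (y_hat - y) * x

print(round(w, 3))  # close to 2.0 -- the network has learned the relation
```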
How do Neural Networks work?
After being trained on numerous labeled images, the network can be given an unlabeled image and asked: what is in this image?
The machine should still be able to understand that a dog wearing a hat is still a dog. Changing the example slightly should not make the machine change its mind completely. This property is called “generalization”: the network can generalize what it learned from specific examples and identify what is in the next unlabeled image.
Thanks for spending time reading this blog. Here, I described how neural networks work and learn. We discussed concepts like feedforward networks, backpropagation, and the Cost function. I have tried to keep the article simple and crisp.
You will read about Gradient Descent in my upcoming blog. If you have any query or feedback, feel free to comment.