The Brain of Artificial Intelligence – Neural Networks

We’re at the beginning of the Artificial Intelligence era, and we don’t yet know AI’s true potential.

The AI buzz started when Google DeepMind’s AlphaGo (a Go-playing program powered by DeepMind’s AI algorithms) became the first AI in history to defeat a professional Go player, beating European champion Fan Hui five games to nil.

Later, OpenAI, a startup backed by Elon Musk, defeated some of the world’s best Dota 2 players. You might think these AIs are simply programmed to perform instructions, but that’s not true. Through complex algorithms, they learn to work, play, and reason much like humans.

These AIs can learn from previous examples. Consider chess: the game allows an enormous number of possible moves, and by learning from past games an AI becomes able to play as well as humans, and eventually far better.

Humans can predict an opponent’s moves only a few steps ahead. For an AI this is not a difficult task; through its learning process, it can easily predict your future moves as well.

To teach these moves, scientists (or developers and researchers) use a method called Machine Learning (ML).

(Wikipedia: Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.

Evolved from the study of pattern recognition and computational learning theory in artificial intelligence, machine learning explores the study and construction of algorithms that can learn from and make predictions on data – such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions, through building a model from sample inputs.)

ML offers various approaches for developing algorithms. Here we’re going to look at the most popular one, ‘Neural Networks’, which was used to develop the AlphaGo and OpenAI systems mentioned above.

Neural Networks

To begin with, where does the idea for Neural Networks come from? From your brain, obviously. Not kidding: it mimics the brain’s biological neural networks. This is termed Biomimicry.

Biomimicry is an approach to innovation that seeks sustainable solutions to human challenges by emulating nature’s time-tested patterns and strategies, or simply put, an idea inspired by something that already exists in nature.

Neural Networks are inspired by the human brain’s network of interconnected neurons. The first artificial neural network, the ‘Perceptron’ (an algorithm for pattern recognition using neurons with weighted inputs, and one of the most influential models of all time), was invented by psychologist Frank Rosenblatt in 1958.

What is a Neural Network?

A Neural Network, or Artificial Neural Network (ANN), is an information processing paradigm (model) that processes information in a way similar to a brain.

The key element of this model is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working together to solve specific problems.

These neurones are analogous to a biological neural network in the brain: a collection of “neurons” with “synapses” connecting them.

ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.

Learning in biological systems involves adjusting the synaptic connections between the neurones. The same is true of ANNs.

Working Structure (Architecture of neural networks)

An ANN learns by creating connections between many different processing elements (neurons or neurones). These neurones may be physically constructed or simulated on a digital computer.

The neurones are tightly interconnected and organized into three kinds of layers: the input layer, the hidden layers, and the output layer.

The input layer receives the input and the output layer produces the final output. The hidden layers sit in between these two.

There can be any number of hidden layers, which makes it hard to trace the exact flow of data through the network. This is where ‘Deep Learning’ comes in: it refers to networks with many hidden layers.

In a typical diagram of such a network, circles represent neurons and lines represent synapses. Synapses take the input and multiply it by a “weight” (the “strength” of the input in determining the output). Neurons add the outputs from all synapses and apply an activation function.
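This computation can be sketched in a few lines of Python. The input values, weights, and the choice of a sigmoid activation here are illustrative assumptions, not taken from any particular network:

```python
import math

def sigmoid(x):
    # A common activation function that squashes any value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Each synapse multiplies its input by a weight; the neuron sums
    # these products (plus a bias) and applies the activation function.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Example: two inputs feeding one neuron (all numbers are illustrative).
output = neuron([0.5, 0.8], [0.4, -0.2], 0.1)
```

Changing a weight changes how strongly the corresponding input influences the output; learning is nothing more than finding good values for these weights.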

How to train them or How they learn

Training a neural network basically means calibrating all of the “weights” by repeating two steps: forward propagation and back propagation.

In forward propagation, a set of weights is applied to the input data to calculate an output. For the first forward pass, the weights are selected randomly.

In back propagation, the margin of error of the output is measured, and the weights are adjusted accordingly to decrease the error.

Neural networks repeat both forward and back propagation until the weights are calibrated to accurately predict an output.
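The loop above can be sketched with a toy example: a single sigmoid neuron learning the logical OR function by gradient descent. The learning rate, epoch count, and choice of OR as the task are all illustrative assumptions:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# Start with randomly selected weights, as in the first forward pass.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

# Training examples for the OR function: inputs and the expected output.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for epoch in range(5000):
    for inputs, target in data:
        # Forward propagation: apply the current weights to the input.
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        out = sigmoid(total)
        # Back propagation: measure the error and nudge each weight
        # in the direction that reduces it (gradient descent).
        grad = (out - target) * out * (1 - out)
        weights = [w - 0.5 * grad * i for w, i in zip(weights, inputs)]
        bias -= 0.5 * grad

def predict(inputs):
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)
```

After enough repetitions, `predict` returns values close to 0 for `[0, 0]` and close to 1 for the other inputs, i.e. the weights have been calibrated to reproduce the OR truth table.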

There are three major learning paradigms, each corresponding to a particular learning task: supervised learning, unsupervised learning, and reinforcement learning.

Beyond these learning paradigms, ANNs have several notable capabilities, depending on the problem to be solved:

  1. Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
  2. Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning (as in Kohonen’s self-organising maps).
  3. Real-Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
  4. Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

Difference between Neural networks and Conventional computers

Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps the computer needs to follow are known, the computer cannot solve the problem.

Neural networks process information in a similar way the human brain does. The network is composed of a large number of highly interconnected processing elements (neurones) working in parallel to solve a specific problem.

ANNs can run on a conventional computer, but because they process very large amounts of data, they are often built on hardware with multiple parallel processors, which provides a large speed advantage at relatively little development cost.

The parallel architecture also allows ANNs to process very large amounts of data very efficiently. When dealing with large, continuous streams of information, such as speech recognition, image recognition, or machine sensor data, ANNs can operate considerably faster than their sequential counterparts.

Applications of ANNs

ANNs have a wide variety of real-world applications across many fields:

In Medicine – modelling parts of the human body, recognising diseases from various scans, etc.,

In Business – data analysis, marketing, sales, financial analysis, etc.,

In other fields – weather forecasting, nuclear reactors, space missions, factory management, crime investigation, diagnosing malfunctions, etc.

If you want to learn about ANNs in detail, follow the references. References: Neural Networks by Christos Stergiou and Dimitrios Siganos, Artificial Neural Networks, and How to Build a Neural Network.
