What happens inside the brain of an ‘AI’ ?

We’re always fascinated by new things. Recently, Graphcore, a startup that designs AI chips, used its AI processing units and software to create maps of what happens inside an AI ‘brain’ during the machine learning process. So what does machine learning actually look like?

‘AI brain scans’ reveal what happens inside machine learning. A graph is simply the best way to describe the models you create in a machine learning system. These computational graphs are made up of vertices (think neurons) for the compute elements, connected by edges (think synapses), which describe the communication paths between vertices.
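To make the vertices-and-edges idea concrete, here is a toy sketch (not Graphcore’s code, and far smaller than any real model) of a computational graph: each vertex is a compute element, each directed edge a communication path, and evaluation proceeds once a vertex has received values from all of its incoming edges.

```python
from collections import defaultdict

# A toy computational graph: each vertex performs one compute step,
# each directed edge carries a value between vertices.
vertices = {
    "x":    lambda inputs: 3.0,                    # input vertex
    "w":    lambda inputs: 2.0,                    # weight vertex
    "mul":  lambda inputs: inputs[0] * inputs[1],  # multiply vertex
    "relu": lambda inputs: max(0.0, inputs[0]),    # activation vertex
}
edges = [("x", "mul"), ("w", "mul"), ("mul", "relu")]

def evaluate(vertices, edges):
    """Evaluate vertices in dependency order, passing values along edges."""
    incoming = defaultdict(list)
    for src, dst in edges:
        incoming[dst].append(src)
    results = {}
    # Repeated sweeps until every vertex has a value (fine for a small DAG).
    while len(results) < len(vertices):
        for name, fn in vertices.items():
            if name in results:
                continue
            if all(src in results for src in incoming[name]):
                results[name] = fn([results[s] for s in incoming[name]])
    return results

print(evaluate(vertices, edges)["relu"])  # 3.0 * 2.0 = 6.0, relu -> 6.0
```

The images in this article are exactly this structure scaled up to millions of vertices and edges, then laid out spatially across the processor.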

Unlike a scalar CPU or a vector GPU, the Graphcore Intelligent Processing Unit (IPU) is a graph processor. A computer that is designed to manipulate graphs is the ideal target for the computational graph models that are created by machine learning frameworks.

We’ve found one of the easiest ways to describe this is to visualize it. Our software team has developed an amazing set of images of the computational graphs mapped to our IPU. These images are striking because they look so much like a human brain scan once the complexity of the connections is revealed – and they are incredibly beautiful too.

Before explaining what we are looking at in these images, it’s useful to understand more about the software framework, Poplar™ which visualizes graph computing in this way.

Poplar is a graph programming framework targeting IPU systems, designed to meet the growing needs of both advanced research teams and commercial deployment in the enterprise. It’s not a new language, it’s a C++ framework which abstracts the graph-based machine learning development process from the underlying graph processing IPU hardware.
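The pattern Poplar follows, building a graph first and executing it on a target afterwards, is common to graph frameworks in general. The sketch below illustrates that “define-then-run” separation in plain Python; none of these class or method names are Poplar’s actual C++ API, they are purely illustrative.

```python
# Illustrative only: a minimal "define-then-run" pattern, mimicking how
# a graph framework separates constructing a graph from executing it on
# a device. These names are hypothetical, not Poplar's API.

class GraphBuilder:
    def __init__(self):
        self.ops = []  # (op_name, input_vertex_ids) in insertion order

    def add(self, op, *inputs):
        self.ops.append((op, inputs))
        return len(self.ops) - 1  # vertex id acts as a tensor handle

    def compile(self):
        # A real framework would lower the graph to device code here;
        # we just return a small interpreter over the recorded ops.
        def run(feeds):
            values = []
            for op, inputs in self.ops:
                if op == "input":
                    values.append(feeds.pop(0))
                elif op == "mul":
                    values.append(values[inputs[0]] * values[inputs[1]])
                elif op == "add":
                    values.append(values[inputs[0]] + values[inputs[1]])
            return values[-1]
        return run

g = GraphBuilder()
a = g.add("input")
b = g.add("input")
c = g.add("mul", a, b)
d = g.add("add", c, b)
run = g.compile()
print(run([4.0, 5.0]))  # (4 * 5) + 5 = 25.0
```

The key point is that the graph description is independent of the hardware that runs it, which is what lets the same model map onto an IPU.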

Many of the images created by Graphcore, which are technically graphs, are based on Microsoft’s ResNet – a neural network that won the ImageNet classification competition in 2015. Since then, other ResNets have been developed. This image shows the full training graph for the Microsoft Research ResNet-34 architecture running on Graphcore’s IPU, from December 2016. The image is coloured to highlight the density of computation, resulting in the glowing centre in the convolutional layers of the graph.

The ResNet architecture is used for building deep neural networks for computer vision and image recognition. The image shown here is the forward (inference) pass of the 50-layer ResNet network, used to classify images after being trained using Graphcore’s neural network graph library.

This is the full forward and backward pass of the image recognition architecture AlexNet, using the ImageNet dataset for training. Graphcore’s Poplar software turns the model described in a machine learning framework, such as TensorFlow or MXNet, into a computational graph of 18.7 million vertices (compute nodes) and 115.8 million edges.

The forward pass of the ResNet-34 computer vision architecture running on Graphcore’s IPU. The layers of the neural network are visible, with the connections between them shown in the centre of the image.

An image of a full training graph from ResNet-34 from September 2016. Graphcore says this looks like an MRI scan and it is one of the first times it had imaged the complete graph for this network. The image shows computationally intensive vertices, with their connections highlighted in blue.

The AlexNet image classification training architecture from November 2016. The vertices in the final three layers of AlexNet are coloured while the rest of the graph is in black and white.

The AlexNet image classification training architecture from December 2016 running on Graphcore’s IPU. The different colours relate to the type of vertex used in the computation graph. The three fully connected layers in the graph (coloured green) are shown.

An image of the ResNet-34 forward pass used for image recognition. The graph visually shows where multiple images are sent through the network in parallel. This is known as batching.
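Batching, as described above, can be sketched in a few lines: several samples pass through the same layer together, so the layer’s weights are loaded once per batch rather than once per image. The layer and its values below are made up for illustration.

```python
# A minimal sketch of batching: multiple samples move through one layer
# together. Weights and inputs here are arbitrary illustration values.

weights = [[0.5, -0.2],
           [0.1,  0.8]]  # 2x2 dense-layer weight matrix

def layer(batch):
    """Apply one dense layer (no bias) to every sample in the batch."""
    out = []
    for sample in batch:  # each sample is a 2-vector
        row = [sum(w * x for w, x in zip(wrow, sample))
               for wrow in weights]
        out.append(row)
    return out

# Three "images" sent through the network in parallel as one batch.
batch = [[1.0, 2.0], [3.0, 4.0], [0.0, 1.0]]
print(layer(batch))
```

On a graph processor, this parallelism shows up visually: the same subgraph of vertices is replicated side by side, one copy per sample in the batch.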

The University of Illinois is using deep learning to speed up the analysis of astrophysics data generated by the LIGO gravitational wave detector. Executing their model on the Graphcore IPU generates this image.

An image from August 2016 of Microsoft Research ResNet-50. The image shows the inference part of the network, used for image recognition. There are 50 layers in the network, but fewer are required on the IPU because many can be reused with different data.

Sources: Graphcore and Wired. Image Credits: Graphcore / Matt Fyles.
