Hello, cyborg augmented humans!
Ohh, are you saying that you are not?
So consider this scenario: you are in a new city and you want to know where a cool new water station is, because you are thirsty. The first thing you would think of is to ask your secondary augmentation, i.e. your phone: Google Assistant, Siri, and so on. I guess you get the point (keeping aside the fact that it's a silly example), right? But does this secondary augmentation help you to think?
No, of course not. The way your brain works and the way these programs work are entirely different.
We can, however, try to simulate the brain in some way, and that is what neural networks attempt.
Deep neural networks are an approach to machine learning that has brought a revolution in computer vision and speech recognition in the last few years, blowing the previous state-of-the-art results out of the water. They’ve also brought promising results to many other areas. Despite this, it remains challenging to understand what, exactly, these networks are doing.
It isn’t a matter of things being too complicated. Almost everything these networks do is fundamentally very simple, as we will see further on. Unfortunately, an innate human handicap interferes with our understanding of these simple things.
Humans evolved to reason fluidly about two and three dimensions. With some effort, we may think in four dimensions. Machine learning often demands we work with thousands of dimensions — or tens of thousands or millions! Even very simple things become hard to understand when you do them in very high numbers of dimensions.
A simple explanation of neural networks
When we come across terms like Artificial Neural Networks, Machine Learning, or Deep Learning, there are two typical responses:
- The enthusiasts get super-excited and read the blog with utmost passion.
- The not-so-familiar folks get intimidated by the heft of these words and stop reading further right away.
Here we have tried to establish a middle ground so that we can indulge enthusiasts and newbies alike.
Let’s go for a simple explanation of what Neural Networks really are. By simple, I mean really simple!
A neural network (also known as an ANN, or artificial neural network) is a piece of computer software inspired by biological neurons!
Our human brains are, indeed, capable of solving gargantuan problems, but at the fundamental level each neuron only solves a little part of the problem. In the same way, a neural network is made up of cells that cooperate to produce the desired result, although each individual cell is responsible for solving only a teeny-tiny bit of the problem.
Neural networks are an example of machine learning, where a program improves as it learns to solve a problem. ANNs can be trained to improve with each example, but for larger networks the number of examples required runs into the millions, or sometimes even billions. Training such large, many-layered networks is known as Deep Learning.
Thinking of the neural network as a human brain, a network starts with an input, just like a sensory organ. Then information flows from one layer of neurons to another. An ANN has an input layer of data, one or more hidden layers, and an output layer. The nodes are connected to each other, and each node sends some amount of the data it receives on to the next node. That amount is decided by a mathematical function.
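The flow described above can be sketched in a few lines of Python. This is a hypothetical toy network, not any particular architecture: the weights and inputs are made-up numbers, and NumPy is assumed to be available.

```python
import numpy as np

def layer(inputs, weights, biases):
    # Each node passes on a weighted sum of what it receives,
    # squashed by a simple mathematical function (here: sigmoid).
    return 1 / (1 + np.exp(-(weights @ inputs + biases)))

# A toy network: 3 inputs -> 2 hidden nodes -> 1 output.
x = np.array([0.5, 0.1, 0.9])          # input layer (the "sensory organ")
h = layer(x, np.array([[0.2, -0.4, 0.7],
                       [0.5,  0.3, -0.1]]), np.array([0.0, 0.1]))
y = layer(h, np.array([[1.0, -1.0]]), np.array([0.0]))
print(y)  # a single value between 0 and 1
```

Each call to `layer` is one hop of information from one layer of neurons to the next; stacking more calls gives a deeper network.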
Being obviously tired of killing so many people manually, Kira (the anti-hero of the anime Death Note) decides to build an AI that will ease his work. He makes a very simple neuron that determines whether a person deserves to be killed. The neuron will take it upon itself to register names in the mighty DeathNote.
The values of Sus, Bad, and Kill will be either 0 or 1, which stand for false and true. If any of the input values are 1 (i.e., the person is Bad or Sus or both), then the Neuron will give the output for Kill to be true, and the person’s death will be fixed.
Representing this in an easier-to-read format:

function Kill(sus, bad)
    if (sus || bad)
        return 1
    else
        return 0
If the input is (0,1) or (1,0) or (1,1), the neuron gives 1 as output.
And if the input is (0,0), the output is 0. This is done by simple mathematical operations on the given inputs.
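To make those "simple mathematical operations" concrete, here is one way Kira's neuron could be written in Python, treating it as a weighted sum compared against a threshold. The weights and threshold below are illustrative choices that reproduce the OR logic above, not the only ones that would work.

```python
def kill(sus, bad):
    # One artificial neuron: a weighted sum of inputs checked
    # against a threshold. Weights of 1 each and a threshold of 1
    # make the neuron fire if either input is 1.
    weights = (1, 1)
    threshold = 1
    total = weights[0] * sus + weights[1] * bad
    return 1 if total >= threshold else 0

for sus in (0, 1):
    for bad in (0, 1):
        print((sus, bad), "->", kill(sus, bad))
# (0, 0) gives 0; every other combination gives 1
```

Changing the threshold to 2 would turn the same neuron into an AND gate: the "logic" lives entirely in the numbers, not in any if-else rules about people.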
In larger neural networks, we pass the output of one such neuron to another by applying a function that limits the output between 0 and 1.
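One common choice for such a limiting (or "squashing") function is the sigmoid. A minimal sketch in Python:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1),
    # so one neuron's output can safely feed the next.
    return 1 / (1 + math.exp(-z))

print(sigmoid(-5))  # close to 0
print(sigmoid(0))   # exactly 0.5
print(sigmoid(5))   # close to 1
```

However large or negative the weighted sum gets, the output stays between 0 and 1, which keeps the signals flowing between layers well-behaved.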
This was one neuron. A multilayered network of thousands of such neurons forms an artificial neural network!
To sum up, neural networks can also learn from their inputs and outputs, and there are three broad ways in which they may learn: supervised learning, unsupervised learning, and reinforcement learning.
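To give a taste of supervised learning, here is a small sketch in Python: a single neuron learns Kira's OR logic purely from labelled examples, using the classic perceptron update rule. The learning rate and epoch count are arbitrary illustrative choices.

```python
def train_or_neuron(examples, epochs=10, lr=0.1):
    # Supervised learning in miniature: the perceptron rule nudges
    # the weights whenever the neuron's guess disagrees with the
    # known correct answer.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            guess = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
            error = target - guess
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w, b = train_or_neuron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # learns [0, 1, 1, 1]
```

Nobody told the neuron what OR means; it found weights that fit the examples. That, scaled up by many orders of magnitude, is what training a deep network amounts to.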
Are we going to have a new revolution that enhances our cognitive abilities, just as the Industrial Revolution enhanced our mechanical ones? That is yet to be seen, but it is certain that humans want to devise a way to manufacture a thinking machine that perceives reality much as we do.
Take our own example: we learn in an unsupervised way (well, at least the intuition part of it).
So imagine it’s 1952, and we have built a machine the size of a room that can play tic-tac-toe against a human opponent. Fast forward to 1996: Deep Blue, a “supercomputer” by IBM, played against the world chess champion Garry Kasparov. Kasparov won that match, but in the 1997 rematch he lost. A few years later, many chess engines could easily take on Deep Blue, and by the 2010s even a program on your home computer was good enough to defeat the best chess player in the world. But the fundamental way these engines evaluate a position is essentially brute force.

Fast forward to 2017, when AlphaZero, a program from Google’s DeepMind division, learned chess with its neural network algorithms by playing over a million games against itself, then played a 100-game match against Stockfish, arguably the best chess engine at the time. AlphaZero thrashed Stockfish with a score of +28 without losing a single game, and mind you, Stockfish is invincible against any known human chess player by miles. And how long did AlphaZero train itself, you may ask? A mere 4 hours; yes, it grasped the nitty-gritty of the game from scratch in just 4 hours. Before that, in 2016, AlphaGo, a similar program driven by DeepMind’s neural nets, played a match against the 18-time world Go champion Lee Sedol and defeated him.
Artificial Intelligence these days can beat any human in any area where enough data, or the rules of the game or problem, are available to be turned into numeric values or weights forming the neural links, and where a long-term goal and desired outputs can be defined. In chess, for example, the long-term goal is to checkmate the opponent.
But do you know where the catch is?
We still don't completely know how our brain works: how it forms connections, or how it stores information and recalls it. Neural nets, by contrast, have a well-defined, predefined model; you can't add or remove neurons, and only the weights of the connections between them can be tweaked.
Let’s say I call you clever, smart, or intelligent. That automatically implies you are good at handling the different tasks thrown at you, or that you are diligent, curious about the world, socially perceptive, or very creative. In contrast, when we anthropomorphize a machine and call it smart, it just means it can find an optimized solution when some numbers are fed into it.
Fortunately, you’re already smarter than you think. If you think otherwise, remember that your brain needs barely enough energy to dimly light a bulb to imagine something beyond anyone's imagination, whereas an Nvidia GeForce RTX 3090 draws up to 350 watts.
Click here to register for Technothlon ’21