I just stumbled upon an interesting overview of the progress made, and still to be made, in emulating the brain's capability to process data and create a meaningful understanding of how to behave in the world. Notice that I did not say an “understanding of the world”.
IBM used this simulation of long-range neural pathways in a macaque monkey to guide the design of neuromorphic chips. Credit: IBM
The article, Thinking in Silicon, is worth reading, and I guess you will read it, so there is no point in a detailed summary here.
I would just like to point out some aspects that may shape the evolution of computing in the next decade.
There is a need to dramatically reduce the power consumption of processing if we really want to create pervasive awareness in the ambient. A single fly has a processing capacity that is a trifle compared to the one you are holding in your hand in your cell phone. And yet your cell phone dissipates quite a lot of heat, a quantity that would fry the fly…
And still, your sophisticated cell phone, with its huge computation capabilities, cannot react to changes in the ambient, nor “understand” how to behave in it, as a fly is obviously capable of doing.
Some different sort of computation must be going on. A fly, scientists have discovered, uses about five thousand neurones to analyse its position in space as it flies and to determine what to do (how to control its wings) to move where it wants, avoiding obstacles and escaping dangers. IBM has created a chip, SyNAPSE, which I reported on, that can mimic the working of neurones using about 6,000 transistors per neurone. That seems a lot, but if you do the numbers, it turns out that mimicking 5,000 neurones requires just 30 million transistors, nothing if you consider that a chip today can have over a billion of them. And yet, although we have the processing power, we do not have the computation capability to make sense of the ambient.
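Doing the multiplication on the figures cited above (a back-of-the-envelope check, using only the numbers from the article):

```python
# Figures cited in the article
transistors_per_neurone = 6_000       # IBM's neuromorphic design
neurones_for_fly_navigation = 5_000   # neurones a fly uses for flight control
transistors_on_a_modern_chip = 1_000_000_000  # "over a billion"

total = transistors_per_neurone * neurones_for_fly_navigation
print(total)                          # 30 million transistors
print(total / transistors_on_a_modern_chip)  # a small fraction of one chip
```

So a fly's flight-control circuitry would claim only a few percent of a single modern chip's transistor budget.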
We have been able to make incredible progress in computation thanks to the amazing increase in processing power, but at the cost of huge power consumption. Google has been able to spot a cat or a person's face in an image, but to do that they used tens of thousands of processors… and MW of power. Our brain can do the same with just 50W.
There is growing agreement that in order to make significant progress we can no longer just improve processing capability; we need to change our computation paradigm. And the hope is that by understanding the computation paradigm of the brain (any brain…) we can do that.
For the time being, some progress has already been made in this direction. HRL, Hughes Research Labs, has been able to create a chip that can learn to play Pong just by being told “you did well” or “you did badly”. You don't have to program it to catch the ball by moving the bar; it works that out by itself.
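This “good/bad” feedback is the essence of reinforcement learning. Here is a minimal sketch (not HRL's actual chip or algorithm, and the state/action names are my own simplification): an agent that is never told *how* to catch the ball, only whether it did well or badly, and still ends up moving the bar toward the ball.

```python
import random

class RewardLearner:
    """Learns to move the Pong bar from reward feedback alone.

    States: where the ball is relative to the bar ('above' or 'below').
    Actions: move the bar 'up' or 'down'.
    The correct mapping is never programmed in; it emerges from
    "+1 you did well" / "-1 you did badly" signals.
    """

    def __init__(self, lr=0.5):
        # One value per (state, action) pair, all starting at zero
        self.q = {(s, a): 0.0 for s in ('above', 'below')
                               for a in ('up', 'down')}
        self.lr = lr

    def act(self, state, explore=0.1):
        # Occasionally try a random move; otherwise pick the
        # action that has earned the best reward so far
        if random.random() < explore:
            return random.choice(('up', 'down'))
        return max(('up', 'down'), key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward):
        # Nudge the stored value toward the reward just received
        self.q[(state, action)] += self.lr * (reward - self.q[(state, action)])

random.seed(0)
agent = RewardLearner()
for _ in range(200):
    state = random.choice(('above', 'below'))
    action = agent.act(state)
    # The "teacher" only says well/badly, never *why*
    reward = 1.0 if (state == 'above') == (action == 'up') else -1.0
    agent.learn(state, action, reward)

# After training, the greedy policy chases the ball
print(agent.act('above', explore=0))
print(agent.act('below', explore=0))
```

After a couple of hundred rounds of nothing but praise and blame, the agent reliably moves up when the ball is above the bar and down when it is below, which is the spirit of what the HRL demonstration shows.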
Of course, not everyone agrees on the direction to follow. Some, like HRL, claim that it would be enough to mimic certain aspects of brain processing to create a new computation paradigm; others say that you would need to simulate all the interplay of molecules inside each neurone, dendrite and synapse to get the same computation. Some say that in the brain there is no difference between processing (the physical support of computation) and computation (the manipulation of data).