Posts Tagged ‘AI’

Good job! Bad job. And “IT” learns…

Tuesday, December 17th, 2013 by Roberto Saracco

I just stumbled upon an interesting overview of the progress made, and still to be made, in emulating the brain's capability to process data and create a meaningful understanding of how to behave in the world. Notice that I did not say an "understanding of the world".

IBM used this simulation of long-range neural pathways in a macaque monkey to guide the design of neuromorphic chips. Credits: IBM

The article, Thinking in Silicon, is worth reading, and I guess you will read it, so there is no point in a detailed summary here.

I would just like to point out some aspects that may shape the evolution of computing in the next decade.

There is a need to dramatically reduce the power consumption of processing if we really want to create pervasive awareness in the ambient. A single fly has a processing capacity that is a trifle compared to the one you are holding in your hand with your cell phone. And it shows, since your cell phone dissipates quite a lot of heat, a quantity that would fry the fly…

And yet your sophisticated cell phone, with its huge computation capabilities, cannot react to ambient changes nor "understand" how to behave in the ambient, as a fly is obviously capable of doing.

Some different sort of computation must be going on. A fly, scientists have discovered, uses about 5,000 neurons to analyse its position in space as it flies and to determine what to do (how to control the wings) to move where it wants, avoiding obstacles and escaping dangers. IBM has created a chip, SyNAPSE, which I reported on, that can mimic the working of neurons using about 6,000 transistors per neuron. That seems a lot, but if you do the numbers, it turns out that mimicking 5,000 neurons requires just 30 million transistors, nothing if you think that a chip today can have over a billion of them. And yet, although we have the processing power, we do not have the computation power to make sense of the ambient.
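Doing the numbers takes only a couple of lines, using the figures quoted in the post:

```python
# Figures quoted in the post: ~5,000 neurons for a fly's flight control,
# ~6,000 transistors per emulated neuron in IBM's chip.
NEURONS = 5_000
TRANSISTORS_PER_NEURON = 6_000
CHIP_BUDGET = 1_000_000_000  # a modern chip easily exceeds a billion transistors

needed = NEURONS * TRANSISTORS_PER_NEURON
print(f"transistors needed: {needed:,}")                    # 30,000,000
print(f"fraction of one chip: {needed / CHIP_BUDGET:.1%}")  # 3.0%
```

Thirty million transistors is a rounding error on a billion-transistor chip, which is exactly the post's point.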

We have been able to make incredible progress in computation thanks to the amazing increase in processing power, but at the cost of huge power consumption. Google has been able to spot a cat or a person's face in an image, but to do that they are using tens of thousands of processors… and megawatts of power. Our brain can do the same with just 50 W.

There is growing agreement that in order to make significant progress we can no longer just improve processing capability: we need to change our computation paradigm. And the hope is that by understanding the computation paradigm of the brain (any brain…) we can do that.

For the time being, some progress has already been made in this direction. HRL, Hughes Research Labs, has been able to create a chip that can learn to play Pong just by being told "you did good" or "you did bad". You don't have to program it to catch the ball by moving the bar; it works it out by itself.
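What reward-only learning looks like can be sketched in a few lines; this is my own toy example in software, not HRL's chip or algorithm. A tiny value learner discovers how to track the ball from nothing but "good"/"bad" feedback:

```python
import random

random.seed(0)
ACTIONS = ["up", "down"]

# Learned value of each (ball position, paddle move) pair; no rule about
# the game is programmed in, only these numbers change.
q = {(ball, move): 0.0 for ball in ACTIONS for move in ACTIONS}

for _ in range(500):
    ball = random.choice(ACTIONS)            # where the ball is heading
    if random.random() < 0.1:                # explore occasionally...
        move = random.choice(ACTIONS)
    else:                                    # ...otherwise use the best guess
        move = max(ACTIONS, key=lambda a: q[(ball, a)])
    reward = 1.0 if move == ball else -1.0   # "you did good" / "you did bad"
    q[(ball, move)] += 0.1 * (reward - q[(ball, move)])  # nudge the estimate

# After training, the greedy policy tracks the ball.
for ball in ACTIONS:
    print(ball, "->", max(ACTIONS, key=lambda a: q[(ball, a)]))
```

The learner never sees a rule like "move the paddle toward the ball"; it only accumulates which moves got praised in which situations.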

Of course, not everyone agrees on the direction to follow. Some, like HRL, claim that it would be enough to mimic some aspects of brain processing to create a new computation paradigm; others say that you would need to simulate all the interplay of molecules inside each neuron, its dendrites and synapses, to get the same computation. Some say that in the brain there is no difference between processing (the physical support of computation) and computation (the manipulation of data).

An artificial brain to decode CAPTCHA

Friday, November 1st, 2013 by Roberto Saracco

Just a few days ago I published a news item on the invention of a new way to replace CAPTCHA as a means to distinguish human from machine access, motivated by the increasing ability of machines to detect what's hidden in a CAPTCHA, and by the alternative way of decoding them using dedicated … human beings.

A screenshot shows software from Vicarious analyzing and solving a Yahoo Captcha.

Now I stumble into news that a US start-up, Vicarious, is working to develop software that mimics a brain to solve image-detection problems, and it is using CAPTCHAs to test its ability. They claim a 90% correct identification rate: not significantly different from the one achieved by human beings…

On October 28th they announced that their software is indeed rendering CAPTCHA ineffective as a way to distinguish humans from machines (the Turing Test).

There are already a number of applications that try to decode CAPTCHA codes, but they are based on a large set of data, working through statistical analysis. What is new in Vicarious' software is that it mimics the way the brain processes images to extract elements. It does that both through experience (which in a way can be compared to statistical analysis) and through recognition processes that are "wired" into the brain's neuronal structures. The latter is what is being mimicked by Vicarious. According to their report, this approach is less processing-intensive and does not need to rely on a vast set of data.

Vicarious is not developing this software to solve CAPTCHAs; as a matter of fact, they have already declared that they will not release it in a way that allows it to be used for this purpose. Their aim, according to one of their founders, D. Scott Phoenix, is:

“Anything people do with their eyes right now is something we aim to be able to automate”

Possible fields of application are capturing text and numbers in Google Street View images, helping diagnose medical conditions by looking at photos of a patient, and so on.

Getting smarter, but still a long way to go!

Tuesday, August 13th, 2013 by Roberto Saracco

Since the advent of the industrial revolution, scientists have chased the goal of creating a machine that can mimic human behaviour to the point of being indistinguishable from a human being. And some have set the goal of exceeding human capability.

In terms of strength we have already exceeded that, but in terms of "brains" we are still chasing what seems to be the end of the rainbow. The advent of electronic processing in the second half of the last century seemed to bring us close to the point where a machine could not be distinguished, in terms of answers, from a human being, but so far the Turing Test has not been passed.

We have computers that can calculate way better than us, computers that can solve mathematical problems and that can pilot an aircraft with a surer hand. But we haven't got a computer with common sense!

Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year. Note the logarithmic scale and exponential trendline, which assumes the computational capacity doubles every 1.1 years. Kurzweil believes that mind uploading will be possible at neural simulation, while the Sandberg, Bostrom report is less certain about where consciousness arises. Credits: Wikipedia

One of the pre-requisites for creating an artificial brain is to have a machine with a processing power comparable to that of the brain. The graphic above illustrates the expected growth of processing power and the thresholds where it meets the demand for simulating a certain domain.
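The trendline's assumption, a doubling every 1.1 years, compounds quickly; a quick check of what it implies over a decade:

```python
# The figure's trendline assumes computational capacity doubles every
# 1.1 years; here is what that implies over ten years.
DOUBLING_TIME_YEARS = 1.1

def growth(years):
    """Growth factor after the given number of years of steady doubling."""
    return 2 ** (years / DOUBLING_TIME_YEARS)

print(f"growth over a decade: ~{growth(10):.0f}x")   # roughly 545x
```

Three orders of magnitude per decade is why thresholds that look hopelessly far on the chart keep getting crossed.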

It should be noted, however, that being able to simulate may not be enough to replicate consciousness and the spark of intelligence (this is a matter of debate among scientists, with some saying that "if it walks like a duck and it quacks like a duck, it is a duck…").

Without taking any stance in the debate: so far we can only measure the intelligence of a machine using the same tests we use to measure the intelligence of a human being.

A test made at the University of Illinois has graded ConceptNet 4, an artificial intelligence system developed at M.I.T., on a par with a 4-year-old kid. So far it is the smartest computer in existence in a general sense (we have smarter computers, but within a very specific domain, like the one that won the contest with the world chess champion…).

It is interesting to notice, however, that although ConceptNet 4 got an average score like a 4-year-old kid, looking at the grades it got in the various sets of tests a psychologist would say that such a kid has problems, with some big pathological alteration of growth, since it performed unevenly in the tests: extremely well in some areas and poorly in others.

In a nutshell, it is still missing "common sense". Will the increase in computation power be able to achieve this elusive "common sense"? Probably yes, but as we get closer we realise that common sense has just taken a step further away, like the end of the rainbow.

Find me Bob, please!

Monday, July 29th, 2013 by Roberto Saracco

You can say this, and you actually have, to any of your friends at a party, and that will be enough to get them searching for Bob. But you cannot ask this simple favour of a robot: it doesn't know who Bob is, it would not be able to recognise him in a crowd, and so on.

Yet, this is what a Cambridge, US, start-up is aiming at: creating software that allows robots to be more autonomous, able to learn and interact with their "changing" environment.

Neurala is a brand new company but has already developed software to make Mars rovers smarter, under funding from NASA. But what they want to do is make "brains" for robots that can be used by people in their everyday life. And people have neither the skill nor the time to program a robot. They can, however, teach it how to do something. They can show a robot a portfolio of photos of their friends so that the robot can memorise their faces and names (the idea is to show your robot Bob's picture and tell it: Hey, this is my friend Bob! Then later on, if you say "find me Bob", the robot can roam the place looking for Bob).

On its own the robot should become aware of the space it is living in, as we do. We can be dropped into a new city and we will feel confused, but in a short while we learn from visual hints the lay of the land and we start to be more confident.
This is what Neurala wants to achieve: bestow on robots the capacity to learn. And they want to do that for any kind of robot, sometimes embedding the brain (the software) in the robot itself, some other times in a robot controller that communicates with the robot in a more operational way.

Neurala is partnering with several companies that produce robots, including iRobot, the company probably most widely known for its robotic vacuum cleaners.

In Europe robots (with the exception of the aforementioned iRobot) are not part of our perceptive landscape as they are in South Korea and Japan. They are relegated to manufacturing processes (or transportation, like the robotic metro systems in Turin and Lyon). But in the coming years they will become more common in everyday life, and being able to talk with them, and talk them into doing something, will become more and more important. According to some, this decade is the decade of the brain, the next will be the decade of the aware ambient, and the following one, beginning in 2030, will be the decade of autonomous robots. Get ready for it.

Smarter and cheaper?

Wednesday, October 31st, 2012 by Roberto Saracco

The progress in computers' capability to analyse images and discover patterns was recognised a long time ago, and already in the nineties software was being developed to help radiologists diagnose cancers and other ailments in radiographies. But so far computers have lagged behind an expert eye in detecting cancers.

In the USA an expert radiologist earns around $300,000 a year, and to decrease costs some hospitals have already outsourced the analysis of radiographies to India. The progress in computer capabilities, however, is now making computer diagnosis a good alternative to expert radiologists.

There are of course several legal issues tied to substituting a doctor with a computer, but I feel that the path is already visible. As a matter of fact, blood analysis is today completely automated: a computer takes care of everything and no one questions the outcome. Radiology is just a step further along the same path towards the automation of diagnosis.

Computer-assisted radiology is already a common practice

At a conference in Zurich in January 2013, doctors and scientists will take stock of progress in the area of computer-aided radiology. I am no expert in this area, but it seems to me that computers are gaining the upper hand, as they did in managing many complex activities (including piloting a plane, which today is mostly a computer's job…).

The availability of big data (millions of digitised radiographies) is simply beyond our human capability to grasp and learn from, but it is the basis for computers to get better and better.

An example is the eDiaMoND project in the UK for mammography analysis

Today scientists have the goal of making computers better and better, rather than bettering our own capabilities. I am not sure whether we have already passed the threshold and computers are already better than we could ever possibly be, but we are surely getting near that point.

Mirror mirror on the wall, who is …

Tuesday, September 4th, 2012 by Roberto Saracco

Mirror mirror on the wall, who is the fairest of them all? You certainly remember the Queen in Snow White asking that. I was reminded of the days of my youth when I saw this news about a robot that can recognise itself (or should I say himself?) in the mirror.

The Robot Nico is looking at itself in the mirror...

The "mirror test" is used by scientists to study the level of self-awareness in animals. What they do is paint a dot on the forehead of the animal while it is asleep and then put the animal in front of a mirror. If the animal reacts at "seeing" itself with the new dot on its forehead, they conclude that the animal recognises itself in the mirror.

The robot, Nico, created by researchers at the Social Robotics Lab at Yale University, has been programmed to recognise itself and to understand the reflections created by a mirror. When looking at the mirror, the robot does not try to grab an object "in the mirror" but rather turns around to look for the real object, showing the understanding that what is in the mirror is just a reflection.

Now, I feel a bit uneasy buying that Nico is "self-aware" just because it understands the difference between the image of a real object and the one reflected by a mirror, or because it understands that the movements it sees in the mirror are "its" movements. Am I self-aware just because I recognise myself in a mirror? My answer would be "No".

On the other hand, how can we judge other "people's" awareness if not by looking at their behaviour? If they behave as if they were aware, we deduce that they "are" aware! Why should we not apply the same to a robot?

This, in a way, is what the Turing test is about: if you cannot distinguish a man's reaction from that of a computer, then I have to assume that for all practical purposes the computer's intelligence "is" equivalent to that of the man. A disturbing thought…

Did you back up your brain?

Sunday, July 1st, 2012 by Roberto Saracco

Over 30 years ago, back in 1981, a nice book challenged our ideas of the brain, consciousness, and what might happen if we ever succeeded in moving them to a machine. It was The Mind's I, a collection of articles prepared by Douglas R. Hofstadter and Daniel C. Dennett. If you haven't read it, do: it is worth your time. And it is a most timely read, since now, in June 2012, the International Journal of Machine Consciousness has published a special issue on Mind Uploading.

The very ideas that were voiced in The Mind's I as pure speculation are now becoming serious research topics for many researchers around the world.

There are a number of forces that have led to this point: a growing understanding (although very far from complete) of how our brain works, better tools to probe inside a living brain to see it "at work", tools that can intercept thoughts as they are formed and translate them into signals upon which a machine can act (brain-machine interfaces, some already on the mass market as game interfaces), and more sophisticated software for signal processing.

What used to be a science fiction domain is now becoming the focus of research aiming at practical application.

A non-profit organization, Carboncopies, has been established to study SIMs (Substrate-Independent Minds; see the linked article).

The Mind Uploading issue contains papers like "Fundamentals of whole brain emulation, state, transition and update representation" and "A framework for approaches to transfer of a mind's substrate", plus several dealing with the tools available and required, like a paper with an eye-catching title: "Non-destructive whole brain monitoring using nano robots".

So, when do you think

- we will start seeing ads from Amazon or Network Operators urging you to back up your brain (mind) in their cloud?

- our brains, once in the cloud, will be accessible through open interfaces (APIs) to leverage their knowledge and provide services to third parties?

- we will be able to relax on our couch and make money because our virtual brain in the cloud is actually providing services based on the knowledge we have harvested?

- we will risk getting sued because our virtual brain has made a blunder?

And most interesting:

- when do you think the above questions will be just part of life, and not some crazy speculation?


An electronic brain to heat your home

Tuesday, March 13th, 2012 by Roberto Saracco

Your house heat signature

As fuel prices go up and people pay more attention to the environment, new ways of optimizing energy use are popping up. Hence this interesting news of an electronic brain, developed by a start-up of the École Polytechnique de Lausanne, able to learn from your house and … from your behavior, to regulate the heating system.

The device is able to measure a variety of parameters, including the way your house dissipates heat (as shown in the figure), the level of sunlight (to take into account the irradiation and the feeling of warmth it induces…), your presence and your habits (it is not enough to know that you are home to increase the temperature a bit; it has to know when you will likely be home, so that it can start raising the temperature before you arrive!).

This way of managing the heating system can reduce fuel consumption by an amazing 50% and more.

The computation is made through a "neuronal intelligence". This is what lets the system learn and adapt over time, so that as you change your habits the system reacts and adjusts.
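How such habit learning might work can be sketched in a few lines; this is purely my own illustrative assumption (an exponential moving average of arrival times), not the start-up's actual "neuronal" algorithm:

```python
class HabitThermostat:
    """Toy sketch of habit learning, not the real product's algorithm."""

    def __init__(self, initial_guess=18 * 60, alpha=0.2, preheat_minutes=45):
        self.arrival = initial_guess     # estimated arrival, minutes from midnight
        self.alpha = alpha               # how fast new habits override old ones
        self.preheat = preheat_minutes   # assumed warm-up time for the house

    def observe_arrival(self, minutes):
        # Exponential moving average: recent behaviour counts more than old.
        self.arrival += self.alpha * (minutes - self.arrival)

    def heating_start(self):
        # Switch the boiler on early enough to be warm on arrival.
        return self.arrival - self.preheat

t = HabitThermostat()
for day in range(30):                    # occupant now comes home around 19:30
    t.observe_arrival(19 * 60 + 30)

print(f"start heating at ~{t.heating_start() / 60:.2f}h")  # roughly 18:45
```

As the observed arrivals drift, so does the estimate, which is the "as you change your habits the system adjusts" behaviour described above.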

Would you like a few Gigs of add on memory in your Brain?

Tuesday, June 28th, 2011 by Roberto Saracco

Well, it is not exactly like that but headlines have to be catchy, don’t they?

Brain Implant Chip (rats only!)

Researchers at Wake Forest University have developed a custom chip that can improve memory in rats. Although it is nothing like extra storage for the brain, it is indeed an interesting development, since it shows that we are really starting to have a clearer view of how the brain works and we are starting to interact with its processes.

The researchers monitored the hippocampus, the area of the brain responsible for converting short-term memory into long-term memory, in several rats, and learned the mechanisms used by their brains to remember certain actions, in this specific case the pushing of one of two levers.

The implanted chip is able to pick up signals from the short-term memory and, through an algorithm, stimulate the hippocampus so that such memories are stored in the long-term area. There is not, at least so far, an understanding of the "language" used to store information in short- and long-term memory but, as the researchers point out, it does not matter, since the chip acts like a translator from Chinese to Russian without understanding what it is translating.
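The "translator that does not understand" idea can be made concrete with a toy sketch (entirely my own illustration, with made-up pattern names, not the actual neural interface): a learned lookup from observed firing patterns to stimulation patterns, applied with no notion of what either pattern "means".

```python
# Hypothetical learned mapping: firing pattern observed in short-term
# memory -> stimulation that consolidates it. The names are invented.
learned_mapping = {
    ("spike", "pause", "spike"): ("pulse_A",),
    ("spike", "spike", "pause"): ("pulse_B", "pulse_A"),
}

def chip(firing_pattern):
    """Translate without understanding: emit the paired stimulation, if known."""
    return learned_mapping.get(tuple(firing_pattern))

print(chip(["spike", "pause", "spike"]))   # ('pulse_A',)
```

The table is all the chip "knows"; nothing in it encodes what the memory is about, just as a phrase-for-phrase translator need not understand either language.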

Although this experiment demonstrates the progress we have been able to make, both in understanding the brain and in creating algorithms that can mimic the brain's internal communications, we are extremely far from a clinical application. We can today monitor a single neuron, but we know that a single memory, like the flavor of the ice cream I ate yesterday, involves millions of neurons (which are also involved in hundreds of thousands of other memories).

Hence, it will be quite some time before we can plug some extra Gigs into our brain. Some scientists argue that it will never be possible to have a machine thinking like a human, because human thinking is different at a fundamental level from what can be created in silicon. This goes back to the critique of Artificial Intelligence, a theme that was hot in the past and is now being discussed again.

However, never say never….

Guess where I am going?

Saturday, June 11th, 2011 by Roberto Saracco

Sometimes I run into a news item that just starts me thinking: its value goes much further than the news itself. Well, this is one of those.

Is this where I am going?

Ford researchers are testing a hybrid car that can guess where I will be driving (no, it is not waiting for me to fill in the info in the car navigator!).

It uses predictive technology developed by Google, and software developed by Ford tweaks the car engine to fit the predicted route (e.g. it decides when to use battery power rather than gas).

You might wonder whether such guessing is really important in terms of energy efficiency, but what made me think is the fact that objects are getting smarter, and this is going to change our perception of the environment and possibly the way we look at them and interact with them.

In the future more and more information (data) will be available, and harvesting/processing these data will provide us with insight into what is happening and what might happen. If you think about it, there is something very close to intelligence, and even consciousness, in this. We do something because we perceive a given situation AND we derive expectations from it. Most of our actions are the result of an evaluation of possibilities that may become real in the future.
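As a toy illustration of the idea (my own assumption, not Ford's or Google's actual method), a destination guess can be nothing more than counting past trips per time slot:

```python
from collections import Counter

# Hypothetical trip log: (time slot, destination) pairs harvested over weeks.
trips = [
    ("08:00", "office"), ("08:00", "office"), ("08:00", "office"),
    ("10:00", "gym"), ("08:00", "office"), ("18:00", "home"),
]

def predict_destination(time_slot):
    """Guess the most frequent past destination for this time slot."""
    history = Counter(dest for slot, dest in trips if slot == time_slot)
    return history.most_common(1)[0][0] if history else None

print(predict_destination("08:00"))   # office: mornings almost always mean work
```

With a guess like this in hand, the car can pre-commit to a battery/gas strategy suited to the expected route, which is exactly the "expectation drives action" point above.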

Of course, it is not "real" consciousness, but it is getting pretty close to that!