### How can we measure processing speed?

Thursday, May 23rd, 2013 by Roberto Saracco

It used to be easy to compare processing capacity some 20 years ago. You took a chip and looked at its clock: the faster the clock, the more crunching capacity available. Then came new flavours of chips, processing at 16 or 64 bits, and of course, even at the same clock speed, a 16-bit chip would perform twice as much work as an 8-bit one, but only a quarter of what a 64-bit chip could do.

Then we had to consider multi-core architectures. Microchips started to have several cores, so once again two chips running at the same clock speed could differ in processing capacity depending on the number of cores.

Chips got specialised too: RISC and ASIC designs could perform much better than a general-purpose chip running at the same clock speed, because they used a reduced or special-purpose instruction set.

Parallelism percolated not just into the chip (multi-core) but also into supercomputers, leading to massively parallel architectures, so that you needed to take into account the speed of the single chip (core), the number of chips, and also the speed of the interconnecting matrix.

More specialised chips came to the fore, like GPUs, Graphics Processing Units, that were even faster, and then bitcoin networks specialised in mining operations (not ore but data…).

Got the picture? No? Well, I am not surprised. It is complex, and when you see comparisons of processing performance today you are likely to be looking at comparisons of apples and oranges.

In the graphic on the left you can see a comparison of different kinds of apples: the evolution of processing, taking as a yardstick the fastest supercomputer according to the Top 500 ranking. You can notice a sort of Moore’s law at work there, since each new fastest computer performed at twice the speed of the previous one when separated by about 18 months.
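That doubling rule is easy to turn into a formula. The sketch below is illustrative, not taken from the article's chart: it projects performance assuming it doubles every 18 months, starting from an arbitrary baseline.

```python
def projected_speed(base_flops: float, months: float) -> float:
    """Project performance assuming it doubles every 18 months."""
    return base_flops * 2 ** (months / 18)

# Starting from 17.59 PFLOPS (Titan's figure cited later in the article),
# 36 months of this trend would mean a machine four times as fast.
titan = 17.59e15
print(projected_speed(titan, 36) / titan)  # -> 4.0
```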

The Genesis Block now reports that the bitcoin network has achieved a speed of 1 EFLOPS, that is, 20 times faster than the combined speed of all the Top 500 computers together (the fastest one today, Titan, runs at 17.59 PFLOPS, over fifty times “slower” than the bitcoin network). Be careful: we are comparing apples with oranges, but this nevertheless points out that there can be “apples” and then there can be “oranges”, that is, different ways of approaching processing that lead to amazing speeds.

Today a top-of-the-line smartphone can do some 200 MFLOPS, that is, almost 90 million times less than the fastest supercomputer (Titan). On the other hand, there are 5 billion cell phones, and all together they have a bigger processing capacity than that supercomputer. Were they all smartphones, they would also roughly match the bitcoin network’s capacity.
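A back-of-the-envelope check of these 2013 figures, using the numbers quoted above:

```python
# Units
PFLOPS = 1e15
MFLOPS = 1e6

titan = 17.59 * PFLOPS       # fastest Top 500 machine (Titan)
smartphone = 200 * MFLOPS    # top-of-the-line handset
phones = 5e9                 # cell phones worldwide

# One phone versus Titan: nearly 90 million times slower.
print(titan / smartphone)          # ~8.8e7

# All phones together: 1e18 FLOPS, i.e. the bitcoin network's 1 EFLOPS.
print(phones * smartphone / 1e18)  # -> 1.0
```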

Clearly the 5 billion cell phones are spread everywhere, but if you consider just the hundreds of thousands within a city’s boundary you can appreciate the kind of processing power potentially available. Notice that of these hundreds of thousands, probably over 90% are sitting idle at any given moment, hence there is processing power just waiting to be harvested!
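To put a number on that, here is a hypothetical city-scale estimate under the article's assumptions; the phone count is my own placeholder, not a figure from the text.

```python
MFLOPS = 1e6

city_phones = 300_000        # assumed count within one city boundary (hypothetical)
idle_fraction = 0.9          # over 90% sitting idle, per the article
per_phone = 200 * MFLOPS     # top-of-the-line smartphone, 2013

harvestable = city_phones * idle_fraction * per_phone
print(harvestable / 1e12)    # ~54 TFLOPS of idle capacity in one city
```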

In the future, this is my bet, they will create a processing fabric that will be used by a variety of applications. Actually, they will be much more than that! They will have a tremendous amount of storage capacity, petabytes of redundant data, and, most important, they will form an amazing sensor network able to harvest a variety of ambient data, with many more data inferred from them. As a matter of fact, they will become an aware fog whose “state” changes as a result of a multitude of stimuli.

And, of course, in ten years’ time cell phones will represent just a fraction of the overall processing power, since IoT devices will outnumber them at least 100 to 1.