Posts Tagged ‘Moore’s law’

How can we measure processing speed?

Thursday, May 23rd, 2013 by Roberto Saracco

The size of the bar is proportional to the processing capacity (Credit: http://www.thegenesisblock.com)

Some 20 years ago it used to be easy to compare processing capacity. You took a chip and you looked at the clock: the faster it was, the more crunching capacity was available. Then came new flavours of chips, processing 16 or 64 bits at a time, and of course, even at the same clock speed, a 16-bit chip would perform twice as much as an 8-bit one but just one fourth of a 64-bit one.

Then we had to consider multi-core architectures. Microchips started to have several cores, and again two chips running at the same clock speed would differ in processing capacity depending on the number of cores.

Chips got specialised too: RISC and ASIC designs could perform much better than a general-purpose chip running at the same clock speed, because they used a reduced or special-purpose instruction set.

Parallelism percolated not just into the chip (multi-core) but also into supercomputers, leading to massively parallel architectures, so that you needed to take into account the speed of the single chip (core), the number of chips and also the speed of the interconnection matrix.
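As a purely illustrative sketch of why clock speed alone is a poor yardstick, a first-order comparison has to multiply at least these factors together; the function and the chip figures below are made up for illustration, not real benchmarks:

```python
# A purely illustrative first-order comparison: clock speed alone says very little.
# The chip specs below are made-up examples, not real products.

def rough_capacity(clock_ghz: float, word_bits: int, cores: int, nodes: int = 1,
                   interconnect_efficiency: float = 1.0) -> float:
    """Very crude 'crunching capacity' score: clock x word width x cores x nodes,
    discounted by how well the interconnect lets the parts work together."""
    return clock_ghz * word_bits * cores * nodes * interconnect_efficiency

old_8bit_chip = rough_capacity(clock_ghz=0.01, word_bits=8, cores=1)
modern_64bit  = rough_capacity(clock_ghz=3.0, word_bits=64, cores=4)
small_cluster = rough_capacity(clock_ghz=3.0, word_bits=64, cores=4,
                               nodes=1000, interconnect_efficiency=0.7)

print(old_8bit_chip, modern_64bit, small_cluster)
```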

More specialised chips came to the fore, like GPUs, Graphics Processing Units, that were even faster, and then bitcoin networks specialised in mining operations (not ore but data…).

Got the picture? No? Well, I am not surprised. It is complex, and when you see comparisons of processing performance today you are likely to be looking at apples and oranges.

In the graphic you see a comparison of different kinds of apples: the evolution of processing, taking as a yardstick the fastest supercomputer in the Top 500 ranking. You can notice a sort of Moore’s law at work there, since each new leader performed at twice the speed of the previous one whenever they were separated by about 18 months.
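Stated as a rule of thumb rather than an exact law, that doubling behaviour is simply:

$$\mathrm{perf}(t) \approx \mathrm{perf}(t_0)\cdot 2^{(t-t_0)/(18\ \mathrm{months})}$$

so two machines 18 months apart differ by a factor of two, and machines three years apart by a factor of four.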

The Genesis Block now reports that the bitcoin network has reached a speed of 1 EFLOPS, that is 20 times the combined speed of all the Top 500 computers put together (the fastest one today, Titan, runs at 17.59 PFLOPS, over fifty times “slower” than the bitcoin network). Be careful: we are comparing apples with oranges, but nevertheless this points out that there can be “apples” and then there can be “oranges”, that is, different ways of approaching processing that lead to amazing speed.

Today a top-of-the-line smartphone can deliver some 200 MFLOPS, roughly 90 million times less than the fastest supercomputer (Titan). On the other hand, there are 5 billion cell phones and, all together, they have a bigger processing capacity than that supercomputer. Were they all smartphones they would also rival the bitcoin network’s capacity.
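A quick back-of-the-envelope check of these ratios, using only the figures quoted above (the smartphone number in particular is a rough order of magnitude):

```python
# Back-of-the-envelope check of the figures quoted above (rough orders of magnitude).
TITAN_FLOPS = 17.59e15      # Titan, ~17.59 PFLOPS
BITCOIN_FLOPS = 1e18        # bitcoin network, ~1 EFLOPS as reported by The Genesis Block
PHONE_FLOPS = 200e6         # a top-of-the-line smartphone, ~200 MFLOPS (rough assumption)
PHONES = 5e9                # cell phones in use worldwide

print(BITCOIN_FLOPS / TITAN_FLOPS)           # ~57: the bitcoin network vs Titan
print(TITAN_FLOPS / PHONE_FLOPS)             # ~88 million: Titan vs a single smartphone
print(PHONES * PHONE_FLOPS / BITCOIN_FLOPS)  # ~1.0: five billion smartphones vs the bitcoin network
```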

Clearly the 5 billion cell phones are spread all over the world, but if you consider the hundreds of thousands within a single city’s boundaries you can appreciate the kind of processing power potentially available. Notice that of these hundreds of thousands probably over 90% are sitting idle at any given moment, hence their processing power is just waiting to be harvested!

In the future, this is my bet, they will create a processing fabric that will be used by a variety of applications. Actually they will be much more than that! They will offer a tremendous amount of storage capacity, petabytes of redundant data, and, most important, they will create an amazing sensor network able to harvest a variety of ambient data, with many more data inferred from them. As a matter of fact they will become an aware fog whose “state” changes as a result of a multitude of stimuli.

And, of course, cell phones in ten years’ time will account for just a fraction of the overall processing power, since IoT devices will outnumber them at least 100 to 1.

Towards the age of carbon

Saturday, November 3rd, 2012 by Roberto Saracco

Our age is often called the age of silicon, since so much in our lives depends on microprocessors, and they are made of silicon. The evolution of microchip production processes has led to tremendous advances in technology and to the Society we are today. The Information Society is enabled by silicon, and the pace of the market by the fast evolution of silicon chips (Moore’s law).

Schematic of a carbon nanotube on a hafnium dioxide substrate

However, we are approaching a point where further evolution based on silicon will become more and more difficult. That is why researchers are looking at other substances, and here carbon is the most natural contender.

Carbon comes in many forms, from glittering diamonds to the soft “lead” in a pencil. To replace the silicon in a chip, scientists are looking at carbon nanotubes: structures of carbon atoms forming a tiny tube with the properties required to manufacture transistors.

Although single transistors have been made using nanotubes, the challenge lies in creating a chip with millions, even billions of them. This requires a manufacturing process that is cheap, fast and accurate; accurate enough, in fact, to allow the placement of a single nanotube with a tolerance of about 2 atoms!

This is what IBM scientists have managed to do, not yet up to a billion but so far for tens of thousands of nanotubes, and it is clearly significant progress from the stage of controlling just a few of them.

The nanotubes are created in an industrial process and are then placed in a solution that is sprayed onto a substrate made of hafnium dioxide. Their correct placement allows the creation of a chip with the desired properties.

The distance from a few tens of thousands to a billion is huge, but … it is about as big as the distance from a few units to tens of thousands!

We can say they are half way to the target!
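In orders of magnitude, which is the rough way to read this claim, the two steps are indeed comparable:

$$\log_{10}\!\left(\frac{10^4}{10^0}\right) = 4 \qquad \log_{10}\!\left(\frac{10^9}{10^4}\right) = 5$$

so going from a handful of nanotubes to tens of thousands covers about four orders of magnitude, and from tens of thousands to a billion about five.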

Cheaper and cheaper

Saturday, September 29th, 2012 by Roberto Saracco

I find this graph particularly intriguing, since it shows the amazing decrease in the cost of DNA sequencing (here, the cost of sequencing a complete human genome) and compares it with the decrease in cost/increase in performance that Moore’s law has been delivering for chips.

As you can see, the decrease in cost almost matched the one predicted by Moore’s law for chips until 2007. From 2008 there is a sharp departure from Moore’s law, with a much more rapid cost decrease. This is due to the adoption of new sequencing techniques (second generation), and we are now on the brink, as reported in another post a few days ago, of a third generation that will further accelerate the evolution.
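To make “matching Moore’s law” concrete, here is a minimal sketch of the projection such a comparison implies; the starting cost and the 18-month halving period are illustrative assumptions, not numbers read off the graph:

```python
# Minimal sketch of a Moore's-law style cost projection.
# The starting cost and the 18-month halving period are illustrative assumptions.

def moore_cost(initial_cost: float, months_elapsed: float, halving_months: float = 18.0) -> float:
    """Cost after `months_elapsed`, assuming it halves every `halving_months`."""
    return initial_cost * 0.5 ** (months_elapsed / halving_months)

# Example: something costing 100 (arbitrary units) today, projected five years out.
for year in range(0, 6):
    print(year, round(moore_cost(100.0, year * 12), 2))
```

Any technology whose cost curve drops below this projection, as sequencing did from 2008, is beating the Moore’s law pace.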

Underneath this faster progress are electronics and information technology, both supporting the new approaches to sequencing.

New memories for your laptop, in 2014…

Saturday, May 12th, 2012 by Roberto Saracco

The new memory generation: the DDR4

Samsung has started delivering the first batch of DDR4 memory, the next generation of memory for computers. As usual, these memories will first find application in servers, and only in 2014 will they find their way into our laptops.

These memories work at 1.2 V, that is 20% less than DDR3, and that means less energy usage (and so less dissipation). Their speed is also better than the present generation’s: the DDR4 specification goes up to 3.2 billion transfers per second per pin, and these first modules run at 2.4 gigabits per second per pin.
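To turn a per-pin rate into module bandwidth, a quick sketch assuming a standard 64-bit wide memory module (the bus width is my assumption; the 2.4 Gbps figure is the one quoted above):

```python
# Rough module bandwidth from a per-pin data rate.
# Assumes a standard 64-bit (8-byte) wide DDR module data bus.
per_pin_gbps = 2.4          # gigabits per second, per data pin (figure quoted above)
bus_width_bits = 64         # typical DDR module data bus width

bandwidth_gbytes = per_pin_gbps * bus_width_bits / 8
print(bandwidth_gbytes, "GB/s per module")   # ~19.2 GB/s
```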

Again, we are on track with Moore’s law. Faster, cheaper, less power hungry. Let’s enjoy it while it lasts!

Looking beyond silicon

Saturday, May 5th, 2012 by Roberto Saracco

Just a few days ago I posted the news of the new Intel chip that keeps Moore’s law in good shape, along with the expectations for the next three years. However, by the end of this decade (somewhat earlier, actually) present silicon technology will run out of steam, and if we want to keep Moore’s law going we need to look for something different. Many are betting on graphene, a carbon-based substrate that can provide speedier chips.

The silicene film

Now a team of researchers at MIT has announced the creation of a thin film made of bismuth and antimony that lets electrons flow a hundred times faster than in today’s silicon.

In the researchers’ own words, “electrons fly like a beam of light”! Obviously this is not exactly true, but it still shows the progress made.

Now, you know that the signal is not carried around a chip by the electrons themselves (which actually move pretty slowly, a few centimetres per second, having to hop from one atom to the next) but by the electromagnetic field (and that really does travel at close to the speed of light). The fact is that the movement of electrons leads to energy dissipation (the chip gets hot), and there is only so much heat that can be dissipated before the chip stops working. So this invention is good because it radically decreases heat generation and can therefore support ever denser transistors, hence the survival of Moore’s law for a few more years.
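The link between moving electrons and heat is usually summarised by the standard approximation for dynamic power in CMOS logic (leakage ignored):

$$P_{\mathrm{dyn}} \approx \alpha\, C\, V^2 f$$

where $\alpha$ is the fraction of gates switching, $C$ the switched capacitance, $V$ the supply voltage and $f$ the clock frequency: packing in more transistors or raising the clock pushes the power up unless voltage or capacitance comes down.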

The first applications of this smart material, however, are likely to be in the area of solar cells, where what matters is the flow of electrons, and therefore speedier electrons make for better panels.

It is also expected to find application in several devices created by stacking layer upon layer of this material, each one with specific properties.

Other scientists, at Aix-Marseille University in France together with colleagues at a technical university in Germany, have found a way to create a layer of silicon just one atom thick. It is done by depositing silicon vapour on a silver plate. This too can lead to cheaper and faster electronics. The material is shown in the figure above (the image was taken with an electron microscope) and has been called “silicene”.

There are many research teams exploring new materials, and among these materials we will probably find the successor to the silicon that has reigned for the last 60 years.

 

Why use 1 million atoms when 12 are enough?

Thursday, January 19th, 2012 by Roberto Saracco

When we store a “bit” on one of our hard disks we use the magnetic properties of atoms to hold that bit’s value. Actually, for every single bit we use about 1 million atoms.

IBM researchers have published the results of work in which, by controlling individual atoms, they have been able to create a magnetic substrate where 12 atoms are enough to store a bit. That is a gain of more than 100 times in density. Your current 1 TB disk could morph, in a few years, into a 150 TB disk, at a cost we can expect to be equivalent to, or lower than, your current one.

The twelve atoms needed to store a bit (with the read/write head above)

The density reached is 150 times greater than what is possible today with solid-state memory. A flash pen with that kind of density could store 20,000 HD movies plus all the music you could listen to in a lifetime. Be content with 19,999 HD movies and you can also store 180,000 pictures.

In the figure, taken from the Kurzweil commentary that you may want to read, you can see a rendering of the 12 atoms used to store the bit. Their arrangement is crucial to ensure reliable storage and fast read/write cycles. At the top left is the read/write head.

Research like this is ensuring that Moore’s law will continue to hold through this decade.

 

Engineering PCM

Monday, July 11th, 2011 by Roberto Saracco

IBM chip based on PCM

Less than a month ago I posted the news of a Phase Change Memory (PCM) prototype from the University of California, and now I read that IBM has come out with the first “industrial” PCM chip prototype.

The chip has a capacity of 200,000 bits (nothing!), but it demonstrates the feasibility of industrially producing chips based on this technology. For the prototype IBM used a 90 nm CMOS process (we are now below 40 nm in the most advanced chips) and was able to achieve a write speed 10 times faster than the fastest flash memory available today.

In a few years we can expect to see PCM chips on the market, driving performance up and prices down, thus keeping Moore’s law valid in this area too.

According to IBM the first “users” should be cell phones and enterprise grade servers.

This kind of chip would be right on target to support the much higher transfer speeds required by future Ultra High Definition (4K) video, something that today seems out of reach for any mass-market use; but just remember that a few years ago it seemed impossible to imagine HD quality on a mass-market video camera.

The magic of Moore’s law!

Entering the Exascale Era

Monday, June 27th, 2011 by Roberto Saracco

We are now living in the petascale era, since the most powerful computer on Earth (a Chinese machine!) reaches a processing capacity of several PFLOPS (millions of billions of floating-point operations per second).

As a matter of fact, the amount of data produced every year has already reached the exascale (close to 200 EB).

Now Intel, at the supercomputing conference, has announced a roadmap leading computers into the exascale era by 2018.

By 2020, according to Intel, the most powerful computer will have reached 4 EFLOPS, something that will be needed for the ever more precise climate change predictions upon which we are basing our economic policy in the energy field.

Take a look at what was presented at the conference for a peek into the future of processing, a future where, as far as this decade is concerned, Moore’s law will hold.

Interestingly, one of the major stumbling blocks in this progress is the energy required to power these new monsters. If we were to take the technology used today by the fastest supercomputer and scale it up to reach the EFLOPS barrier, we would need 1.6 GW of power, enough to serve a city of 2 million people.
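Taking only the figures above at face value, that is

$$\frac{1.6\ \mathrm{GW}}{2\times 10^{6}\ \mathrm{people}} = 800\ \mathrm{W\ per\ person},$$

which is indeed in the range of typical per-capita electricity demand in industrialised countries.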

Intel is working with some European Research Centres to find ways to decrease the amount of power required.

Is today’s deployment of NGNs future proof?

Monday, December 6th, 2010 by Roberto Saracco

Just the other day I mused on new technologies that are bound to provide extremely fast processing and optical connectivity, which can basically transform the whole world into a giant distributed computing centre and a giant distributed database.

We have just started to tap into photons’ capability to transport information


I observed that it would be naive to believe such evolution will be neutral with respect to Next Generation Networks, NGNs. The new types of communications they will enable, and the changing balance between the edges and the core, are bound to have an impact on whatever we are designing and deploying today.

At the same time, it is naive to imagine that the hundreds of billions of dollars that will be invested in NGNs in this decade can simply be written off so that a new round of investment can take place. Of course we know that a Next NGN will be deployed in the future, but that will be in the Thirties. We are likely to spend this decade building the NGN and the next one learning to use it.

How could we take advantage both of the NGN as it is being deployed today, with GPON and LTE (with slight variations here and there, of course), and of the new technology at the edges?

My personal take is that we are going to see a complete flattening of the physical infrastructure, letting the edges take full control of the infrastructure’s capacity. Not a good prospect for Operators, in a way, since this implies a loss of control. On the other hand, we have seen this trend over the last 30 years. When we moved from electromechanical switching to electronics the network became less hierarchical: each switch became empowered to decide how to route traffic. The Internet has further decreased the need for hierarchy.

Web 2.0 has moved the control of services to the edges of the network, and the Operation and Maintenance services developed by Operators have been promoting self-deployment and self-configuration by their main business customers.

Now, imagine we have the NGN: a complete optical infrastructure connecting communication areas, and nothing else. Radio coverage can be seen as a local area network that uses the optical infrastructure wherever needed. There is no longer the concept of a “wireless network”: wireless is used as a fabric for local connectivity. Whether this is enhanced WiFi, a 50 GHz in-home network, LTE or ZigBee does not really matter: it is still about providing cheap, effective, local connectivity. Once data have to be moved from one connectivity area to another, mostly asynchronously (there is so much storage capacity in each local area that most data are already present locally) or synchronously (voice is also data, as is video streaming, with just some specific requirements on latency and jitter, surely not an issue for an NGN), the local area network(s) will establish the desired virtual pipe to the required local area. In the case of asynchronous communications there will usually be plenty of local areas that can satisfy the request, and therefore plenty of alternative virtual connections that can be selected and managed locally. In the case of synchronous communications only one local area will satisfy the request, but the virtual path connecting the two areas may be set up in many different ways, again negotiated at the edges.
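A toy sketch of the kind of edge-side negotiation described above (purely illustrative; the classes, names and selection rules are my own assumptions, not an existing protocol):

```python
# Toy model of edge-negotiated connectivity between local areas (illustrative only;
# the names and selection rules are assumptions made up for this sketch).
from dataclasses import dataclass
from typing import List, Optional
import random

@dataclass
class VirtualPath:
    via: str            # how the optical core is traversed
    latency_ms: float   # only really matters for synchronous traffic

@dataclass
class LocalArea:
    name: str
    has_copy_of_data: bool      # async requests can be served by any area holding a copy
    paths: List[VirtualPath]    # alternative virtual pipes towards this area

def negotiate(areas: List[LocalArea], synchronous: bool, target: Optional[str] = None):
    if synchronous:
        # Only one area will do, but there are many ways to reach it: pick the lowest-latency pipe.
        area = next(a for a in areas if a.name == target)
        return area.name, min(area.paths, key=lambda p: p.latency_ms).via
    # Asynchronous: plenty of areas hold the data, so any of them (and any pipe) is fine.
    area = random.choice([a for a in areas if a.has_copy_of_data])
    return area.name, random.choice(area.paths).via

areas = [
    LocalArea("campus-north", True, [VirtualPath("core-ring-A", 5.0), VirtualPath("core-ring-B", 9.0)]),
    LocalArea("downtown", False, [VirtualPath("core-ring-A", 2.0), VirtualPath("metro-mesh", 3.5)]),
]
print(negotiate(areas, synchronous=False))                    # any area holding a copy of the data
print(negotiate(areas, synchronous=True, target="downtown"))  # the one area that must be reached
```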

The NGN architecture, once you have enough capacity, does not really matter too much. In the past, communications infrastructure architectures were designed for maximum performance given the limited resources available. Now they can be designed for operational cost efficiency, confident that performance will keep outpacing demand. On that physical architecture it will be possible to design, dynamically, whatever logical architecture best fits demand and load distribution.

The NGN being deployed today does not offer sufficient bandwidth to support this scenario, but the increase in technology performance will move the core(s) into the hundreds of Tbps and the edges into the hundreds of Gbps, and that will be enough to have a truly flat network of networks. At the current pace of evolution we are talking about 10 years’ time.

Will the infrastructure owners be willing to let their assets participate in this game, or will they keep control? In the absence of competition the answer is obvious, but so it is in the presence of competition, and that is what we are going to have, to an even higher degree than today.

So what’s in it for telecom Operators? The question sounds similar to the one posed to stagecoach companies 150 years ago by the advent of railways, and then of paved roads, trucks and buses. And similarly, the more vehicles moved around, the better the roads became. What is good about this analogy is that so much more wealth is available today than 150 years ago, and communications was instrumental in creating it. So communications will be in the coming decades.


Lightning fast!

Saturday, December 4th, 2010 by Roberto Saracco

Computers work by moving electrons through their circuits, and electrons are (relatively speaking) slow. Actually, the electromagnetic field they generate is lightning fast, but the circuitry works on the actual movement of electrons (currents). It would be good if one could get rid of electrons and base the computation just on the electromagnetic field; unfortunately, one is tied to the other.

However, light (that is, photons) is an electromagnetic field, and if we can use it as the basis for computation in a chip it would lead to a tremendous leap forward in performance. Just compare the difference in performance we have obtained by moving from ADSL (based on electrons) to fibre (based on photons).

In the last few years we have seen enabling technologies bringing photons onto silicon, that is, onto the chip. By combining materials like gallium compounds and erbium with silicon (not an easy marriage, given their different physical characteristics), it has been possible to develop lasers and photodetectors on a silicon chip. But that is not enough. What is needed is the capability to process light without first converting it into electrons.

It is such a complex area that it has been given its own name: nanophotonics on a chip.

Today we have reached petaflop processing capacity (a million billion floating-point operations per second) by clustering thousands of processing chips performing parallel computation. The bottleneck lies in their interconnection and in the fact that not everything can be processed in parallel, hence the need for communications among the chips.

This week IBM researchers have announced a breakthrough: a way to use waveguides instead of wires (copper connections at the micro scale) within a chip.

http://domino.research.ibm.com/comm/research_projects.nsf/pages/photonics.index.html

IBM CMOS Integrated Silicon Nanophotonics Technology


This kind of chip will not just be faster; it will also consume much less power, since photons do not generate heat at the same level electrons do.

According to IBM researchers this technology can leapfrog present-day processing capacity a thousandfold, bringing us into the exascale era (a billion billion operations per second). How far are we from that? Eight years, according to IBM. And a thousandfold gain in eight years is faster than Moore’s law (it would be only about a fortyfold gain at the Moore’s law pace)!
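At a doubling every 18 months, eight years of Moore’s law correspond to

$$2^{\,96/18} \approx 2^{5.3} \approx 40\ \text{times},$$

so a thousandfold jump over the same period would run well ahead of that curve.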

By the way, an exascale computer, according to an estimate by Ray Kurzweil, would be running at 100 times the speed of our brain (though speed is not everything!), whose processors number in the 100 billions and whose connections number in the 1,000 trillions.

Just as it is difficult to imagine the breadth of our thoughts, it is difficult to imagine what such processing power would enable.

But there is more. Processing, storage and transmission are tightly meshed, and the relative proportions of each have a deep effect on communications and on communications infrastructures. It would be naive to believe that Next Generation Networks will be unaffected by these changes in the performance of those basic technologies. Fully optical communications extending inside the chip itself shrinks the Earth to a ball whose points are never farther apart than about 0.07 seconds. Add to this unlimited storage and you can start imagining a world where switching may no longer be needed, since every place on Earth may potentially share the global information space, and meaning is no longer the result of information processing but rather the state of the information being used.
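The 0.07-second figure is just the time light takes, in vacuum, to cover half the Earth’s circumference (routing, switching and the slower speed of light in fibre would add to it):

$$\frac{20{,}000\ \mathrm{km}}{300{,}000\ \mathrm{km/s}} \approx 0.067\ \mathrm{s}.$$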

A really strange architecture to imagine. Luckily, we are already studying this kind of architecture in neuroscience: our brain.