Computers work by moving electrons through their circuits, and electrons are (relatively speaking) slow. The electromagnetic field they generate is lightning fast, but the circuitry works on the actual movement of electrons (currents). It would be good if one could get rid of electrons and base the computation on the electromagnetic field alone; unfortunately, the one is tied to the other.
However, light (that is, photons) is an electromagnetic field, and if we can use it as the basis for computation in a chip it would lead to a tremendous leap forward in performance. Compare the difference in performance we have seen in moving from ADSL (which is based on electrons) to fibre (which is based on photons).
In the last few years we have seen enabling technologies bringing photons onto silicon, that is, onto the chip. By marrying materials such as gallium arsenide and erbium with silicon (not an easy feat, given their very different physical characteristics) we have been able to build lasers and photodetectors on a silicon chip. But that is not enough. What is needed is the capability of processing light without first converting it into electrons.
It is such a complex area that it has been given its own name: nanophotonics on a chip.
Today we have reached petaflop processing capacity (one million billion floating-point operations per second) by clustering thousands of processing chips performing parallel computation. The bottleneck lies in their interconnection and in the fact that not everything can be processed in parallel, hence the need for communications among the chips.
This week IBM researchers announced a breakthrough: a way to use waveguides instead of wires (copper connections at the micro scale) within a chip.
These chips will not just be faster; they will also consume much less power, since photons do not generate heat at the same level as electrons do.
According to IBM researchers this technology can leapfrog present-day processing capacity a thousandfold, bringing us into the exascale era (a billion billion floating-point operations per second). How far are we from that? Eight years, according to IBM. And a thousandfold increase in 8 years is faster than Moore's law (which would yield only about a fiftyfold increase at its pace)!
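The arithmetic behind that comparison can be sketched as follows (a back-of-the-envelope check, assuming the classic doubling period of roughly 18 months; the article's figure of 50 corresponds to a slightly faster doubling of about 17 months):

```python
# Compare a projected 1000x jump in 8 years with the pace of Moore's law,
# assuming performance doubles every 18 months (1.5 years).
doubling_period_years = 1.5
years = 8

moore_factor = 2 ** (years / doubling_period_years)
photonic_factor = 1000

print(f"Moore's law over {years} years: ~{moore_factor:.0f}x")  # ~40x
print(f"Projected photonic gain: {photonic_factor}x")
```

With an 18-month doubling the gain over 8 years comes to roughly 40x, in the same ballpark as the fiftyfold figure above, and either way a far cry from a thousandfold.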
By the way, an exascale computer, according to an estimate by Ray Kurzweil, would run some 100 times faster than our brain (though speed is not everything!), whose processors number in the 100 billions and whose connections number in the 1,000 trillions.
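The "100 times faster" figure can be checked with simple arithmetic, assuming (as Kurzweil's oft-cited estimate puts it) a functional brain capacity of about 10^16 calculations per second:

```python
# Rough check of the "100 times faster than our brain" claim.
exascale_ops = 1e18        # one billion billion operations per second
brain_ops_estimate = 1e16  # Kurzweil's estimate (an assumption here)

ratio = exascale_ops / brain_ops_estimate
print(f"Speed ratio: {ratio:.0f}x")  # 100x
```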
Just as it is difficult to imagine the breadth of thought, it is difficult to imagine what such processing power would enable.
But there is more. Processing, storage and transmission are tightly meshed, and the relative proportion of each has a deep effect on communications and on communication infrastructures. It would be naive to believe that Next Generation Networks will be unaffected by these changes in the performance of those basic technologies. Fully optical communications, extending into the chip itself, shrink the Earth to a ball whose points are never farther apart than 0.07 seconds. Add to this unlimited storage and you can start imagining a world where switching may no longer be needed, since every place on Earth may potentially share the global information space, and meaning is no longer the result of information processing but rather the state of the information being used.
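The 0.07-second figure follows from light-speed travel between antipodal points, a sketch of which (using rounded values for the Earth's circumference and the speed of light) is:

```python
# One-way light travel time between antipodal points on Earth,
# the basis for the 0.07-second figure above.
earth_circumference_km = 40_000              # approximate
max_distance_km = earth_circumference_km / 2 # antipodes: ~20,000 km
c_vacuum_km_s = 300_000                      # speed of light, rounded

delay_s = max_distance_km / c_vacuum_km_s
print(f"Max one-way delay at c: {delay_s:.3f} s")  # ~0.067 s
```

In optical fibre, where light travels at roughly two thirds of its vacuum speed, the same path would take about 0.1 seconds; the 0.07-second bound assumes propagation at the full speed of light.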
A really strange architecture to imagine. Luckily enough we are already studying this kind of architecture in neuroscience: our brain.