Dual core, multicore, massively distributed processing… just a few words to say that technology has progressed to offer us multiple processors working on a single task. By multiplying the processors you increase the overall computational power without an exponential increase in the energy required, as would be the case if you sped up a single processor to match the same performance.
The challenge with this approach is writing code, a program, that takes effective advantage of parallel processing: no small feat at all!
Now a new programming language is available to ease the life of programmers: ParaSail, the Parallel Specification and Implementation Language.
The language has been designed by Tucker Taft, CTO of SofCheck, a software company based in Boston, to overcome the problems programmers run into when dealing with multicore chips. There is a tradeoff when using these chips: you can stick to conservative sequential programming, but then you are not exploiting the parallelism offered by the multiple cores, or you can parallelize everything but risk creating out-of-sequence operations that lead to errors.
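To make that hazard concrete, here is a minimal sketch in Go (not ParaSail; just a stand-in language for illustration). A thousand lightweight threads update a shared counter: parallelized naively, the updates interleave out of sequence and some are lost; forced back into sequence with a lock, the result is always correct.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Naive parallelization: 1000 goroutines increment a shared
	// counter with no coordination. The read-modify-write steps
	// interleave out of sequence and updates get lost (a data race).
	var racy int
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			racy++ // not atomic: two goroutines can read the same old value
		}()
	}
	wg.Wait()
	fmt.Println("racy counter (often less than 1000):", racy)

	// Conservative version: a mutex serializes the updates,
	// trading parallelism for correctness.
	var safe int
	var mu sync.Mutex
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			safe++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("safe counter (always 1000):", safe)
}
```

ParaSail's claim is precisely that the compiler can rule out races like the first loop at compile time, rather than leaving the choice to the programmer.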
Current dual-core chips on average increase processing speed by 20 to 30%, depending on the task at hand; they are not doubling it. ParaSail uses an approach of pico-threading: it automatically divides the program into as many elemental operations as possible and threads each one onto a different core, unless the programmer blocks parts into sequences. In other words, it assumes that everything can be parallelized unless it is told otherwise.
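Here is a rough Go sketch of that "parallel by default, sequential where told" idea (my illustration, not ParaSail's actual mechanism): a sum over an array keeps splitting itself into parallel halves until the pieces fall below a cutoff that the programmer declares sequential.

```go
package main

import (
	"fmt"
	"sync"
)

// parSum evaluates the two halves of the slice in parallel,
// mimicking "everything is parallel unless blocked into a
// sequence": each half becomes its own lightweight thread,
// until the pieces are too small to be worth splitting.
func parSum(xs []int) int {
	if len(xs) <= 1024 { // the sequential "block" the programmer declares
		total := 0
		for _, x := range xs {
			total += x
		}
		return total
	}
	mid := len(xs) / 2
	var left int
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // left half runs on another core if one is free
		defer wg.Done()
		left = parSum(xs[:mid])
	}()
	right := parSum(xs[mid:]) // right half runs here in the meantime
	wg.Wait()
	return left + right
}

func main() {
	xs := make([]int, 1_000_000)
	for i := range xs {
		xs[i] = 1
	}
	fmt.Println(parSum(xs)) // prints 1000000
}
```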
The compiler should be released this September, and it has already attracted interest both in the programming community and at Intel.
In this decade we will see the number of cores reach the hundreds. One of the reasons this has not happened so far is that with present programming languages, the more cores you have, the less efficiently they are used, so it does not make sense to increase their number. If ParaSail meets its promises the situation will change.
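The classic way to quantify this diminishing return is Amdahl's law (my addition, not from the article): if a fraction p of a program can run in parallel, the speedup on n cores is 1 / ((1-p) + p/n). A quick back-of-the-envelope in Go shows why hundreds of cores are pointless unless almost everything is parallelized, which is exactly the gap ParaSail aims to close.

```go
package main

import "fmt"

// speedup applies Amdahl's law: the serial fraction (1-p)
// caps the benefit no matter how many cores you add.
func speedup(p float64, n int) float64 {
	return 1 / ((1 - p) + p/float64(n))
}

func main() {
	for _, n := range []int{2, 4, 16, 100, 1000} {
		fmt.Printf("%4d cores: %5.1fx (90%% parallel)  %6.1fx (99%% parallel)\n",
			n, speedup(0.90, n), speedup(0.99, n))
	}
	// With 90% of the code parallel, 1000 cores give under 10x;
	// at 99% parallel they give roughly 91x. The serial leftover rules.
}
```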
What interests me most, however, is that the concept of multicore can be "stretched" to include massively distributed processing. With UBB (UltraBroadBand) and the low latency provided by end-to-end optical connectivity, it may start to make sense to micro-thread (not pico-thread…) computation onto geographically dispersed computers (SETI on steroids…).
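Why micro-threads and not pico-threads? Because over a network the round-trip latency sets a floor on useful task size. A toy rule of thumb in Go, with entirely hypothetical numbers, makes the point: shipping work to a remote machine only pays off if the work dwarfs the trip.

```go
package main

import (
	"fmt"
	"time"
)

// worthShipping is a toy heuristic: sending a task to a remote
// machine pays off only when its compute time dwarfs the round-trip
// latency of the link. Lower-latency optical connectivity shrinks
// the RTT and so lowers the minimum useful task size: pico-threads
// (microseconds of work) still stay local, while micro-threads
// (milliseconds and up) start to qualify.
func worthShipping(task, rtt time.Duration) bool {
	return task > 10*rtt // hypothetical 10x threshold
}

func main() {
	rtt := 5 * time.Millisecond // assumed end-to-end optical round trip
	for _, task := range []time.Duration{
		50 * time.Microsecond,  // pico-thread scale
		1 * time.Millisecond,   // borderline
		100 * time.Millisecond, // micro-thread scale
	} {
		fmt.Printf("task %v, RTT %v: ship remotely? %v\n",
			task, rtt, worthShipping(task, rtt))
	}
}
```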
Pervasive computing and networking, I am convinced, will deeply change our view of computation. Ambients will become aware and responsive, and this requires distributed computation based both on signal exchange and on edge sensing, just the way it happens in our body, where cells respond to nervous signals and to the chemical environment surrounding them.
I also see a parallel to, and support for, this vision in the evolution of networks, where the ones provided by the operators can be likened to the nervous system, and the ones being created bottom-up at the edges (viral networks, sensor networks, ambient communications) can be likened to the local chemical soup conditioning the reaction of the cells.
There is plenty of new research needed in this area, which lies at the intersection of biology, electronics, cybernetics, autonomics and semantics.