Today a smartphone offers processing power of roughly 200 megaflops, a laptop a few tens of gigaflops, a PlayStation hundreds of gigaflops. Imagine finding a way to orchestrate millions of such Users’ devices, harnessing their idle processing and storage power: we could achieve a bigger aggregate capacity than a supercomputer like Titan (today’s number one, capable of about 18 petaflops).
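Just to give an idea of the orders of magnitude, here is a back-of-envelope sketch in Python. The fleet sizes and the 10% idle fraction are purely illustrative assumptions of mine, not measured figures; only the per-device flops come from the numbers above:

```python
# Back-of-envelope estimate: aggregate idle capacity of a crowd of
# devices vs. a Titan-class supercomputer. Fleet sizes and the idle
# fraction are illustrative assumptions, not measured figures.

DEVICE_FLOPS = {
    "smartphone": 200e6,    # ~200 megaflops (as above)
    "laptop": 30e9,         # a few tens of gigaflops
    "playstation": 200e9,   # hundreds of gigaflops
}

# Hypothetical fleet we manage to orchestrate, per device class.
FLEET = {"smartphone": 100e6, "laptop": 10e6, "playstation": 1e6}
IDLE_FRACTION = 0.10  # assume only 10% of each device is harvestable

TITAN_FLOPS = 18e15  # ~18 petaflops

aggregate = sum(DEVICE_FLOPS[d] * n * IDLE_FRACTION
                for d, n in FLEET.items())
print(f"Aggregate idle capacity: {aggregate / 1e15:.1f} petaflops")
print(f"Titan:                   {TITAN_FLOPS / 1e15:.1f} petaflops")
```

Even with only a tenth of each device usable, this hypothetical fleet adds up to some tens of petaflops, already beyond Titan.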
This distributed platform of edge devices could indeed create a sort of processing and storage fabric, usable to execute any network function and to provide any sort of ICT service and application. The components of this fabric can be seen as CPU/GPU, SSD (Solid State Drive), HDD (Hard Disk Drive) and link resources (perfectly in line with the “disaggregation of resources” targeted by the Open Compute Project).
One may imagine these components aggregating dynamically in an application-driven “flocking”. Just as birds with simple local behaviors optimize the aerodynamics of the flock (solving a constrained optimization problem by using very simple local rules), the flocking of components can dynamically follow application-driven network optimizations.
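To make the idea of “simple local rules” concrete, here is a minimal Python sketch: each VM looks only at its neighboring nodes and hops towards its users, with no global coordinator. The ring topology, the cost function and all parameters are illustrative assumptions of mine, not a real placement algorithm:

```python
import random

random.seed(42)
N = 50  # nodes arranged in a ring; "latency" = hop distance on the ring

def dist(a, b):
    d = abs(a - b)
    return min(d, N - d)

class VM:
    def __init__(self, node, users):
        self.node = node    # node currently hosting this VM
        self.users = users  # nodes where this VM's users sit

    def cost(self, node):
        # Average distance from a candidate node to this VM's users.
        return sum(dist(node, u) for u in self.users) / len(self.users)

    def step(self):
        # Local rule: look only at the two ring neighbours and hop
        # whenever that reduces the distance to the users.
        candidates = [self.node, (self.node + 1) % N, (self.node - 1) % N]
        self.node = min(candidates, key=self.cost)

vms = [VM(random.randrange(N), [random.randrange(N) for _ in range(3)])
       for _ in range(20)]

def avg_cost():
    return sum(vm.cost(vm.node) for vm in vms) / len(vms)

print(f"before flocking: {avg_cost():.2f} hops to users, on average")
for _ in range(N):  # let the local rule run for a while
    for vm in vms:
        vm.step()
print(f"after flocking:  {avg_cost():.2f} hops to users, on average")
```

Even this toy version shows the point: globally sensible placement can emerge from purely local decisions, exactly as in a flock.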
The problem is finding these local rules, and also the optimal way to allocate and dynamically migrate Virtual Machines and data (which also represent state). Let me give an example, a very simple model. Imagine, just for didactic purposes, that one CPU cycle takes the same time as one step in a walk. The latency of accessing solid-state memory (e.g., DRAM) can then be estimated at around tens of CPU cycles, tens of steps in our example. But if you estimate the latency of accessing the HDD, i.e. the stored data (also including the latency of the network links, the RTT), the overall result is the time needed to walk about 10,000 km.
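The arithmetic behind the analogy is easy to reproduce. A small Python sketch, assuming a 1 GHz clock (one cycle = one nanosecond) and roughly one metre per step, both purely illustrative:

```python
CYCLE_S = 1e-9  # one CPU cycle = 1 ns (1 GHz clock, assumed)
STEP_M = 1.0    # one walking step covers about 1 metre (assumed)

for name, latency_s in [
    ("DRAM access           ", 60e-9),  # tens of nanoseconds
    ("HDD seek + network RTT", 10e-3),  # ~10 ms, order of magnitude
]:
    steps = latency_s / CYCLE_S  # one cycle == one step in the walk
    metres = steps * STEP_M
    label = (f"{metres / 1000:,.0f} km" if metres >= 1000
             else f"{metres:.0f} m")
    print(f"{name}: {steps:>12,.0f} steps  ~ {label}")
```

With these assumptions a DRAM access is a stroll of some tens of metres, while reaching the stored data on an HDD across the network is literally a walk of about 10,000 km.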
I’m sure that solving this constrained optimization problem… will mean allocating processing and storing data as close as possible to the Users!