Yesterday I was at ICT 2013 in Vilnius, a watering hole for ICT enthusiasts in Europe where various breeds of researchers had the opportunity to discuss trends and research investment opportunities in the coming European research framework, Horizon 2020.
I was part of a panel on “Unleashing the potentials of Future Internet & Cloud Computing towards a digital single market: technological challenges & essential policies”, a challenging title indeed, where my role was to look a bit into the future.
These are the points I made:
Today we look at the Cloud mostly in terms of an architecture, supported by a reliable network infrastructure, that can significantly reduce cost (within this general statement we can also include aspects like increased reliability and security, which would drive up the cost of an equivalent local solution).
The next step, in some cases already taken, is to add the qualification of “ubiquitous” to the network infrastructure. Hence, the Cloud provides “better” access to services and data: once they are in the cloud, and I have a ubiquitous network, I can access them from anywhere. I no longer need to be “co-located” with my information. In this sense a cloud solution can be better than a local solution, not because of cost savings but because of extra features. Unfortunately, so far, the selling proposition has been cost cutting… Hence we can say that this aspect has not been leveraged (or cannot be leveraged because, so far, it is not completely true). Of course, one could also say you don’t need a cloud to access your data remotely; a connection to your own server would do… However, if you see the Cloud not just as a replicated repository of what you could have on your local server, but as an aggregation point where data is enriched by being connected to other data (the so-called 5-star information in Tim Berners-Lee’s classification, whilst today’s clouds are basically a 4-star information ensemble), then accessing a Cloud is not like accessing a local server.
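To make the 5-star distinction concrete, here is a minimal sketch of what turning a cloud-stored datum into linked, 5-star information could look like. It uses the Python rdflib library; the namespace, the vocabulary and the Vilnius example are made up purely for illustration.

```python
# A minimal sketch of "5 star" data in Tim Berners-Lee's scheme: the datum
# gets its own URI and open-vocabulary description (4 stars), then a link
# into somebody else's dataset, here DBpedia (5 stars). URIs are illustrative.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/cloud-data/")   # hypothetical namespace

g = Graph()
reading = URIRef(EX["reading/42"])

# 4 stars: the datum is addressable and structured
g.add((reading, RDF.type, EX.TemperatureReading))
g.add((reading, EX.celsius, Literal(21.5)))

# 5 stars: it is linked to external data, enriching both sides
g.add((reading, EX.locatedIn,
       URIRef("http://dbpedia.org/resource/Vilnius")))

print(g.serialize(format="turtle"))
```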
A further step would be to say that the Cloud is a way to re-engineer system-wide processes, using a network infrastructure as the glue between processes; in this view the Cloud plays the role of an information infrastructure. This is no easy step to take, although I feel it should be considered. Here the cost has to be seen at a system level. I might end up paying more by using a Cloud architecture, but the overall systemic cost of processes enabled by an information infrastructure in the cloud is far lower. Wait a moment. If the cost for an enterprise might be higher, even though the overall cost is smaller, why would that enterprise do it? Here, as we take a global view, we also need to involve regulators and policy setters, who have to pursue efficiency at system level rather than local optimisation, and provide companies with incentives that offset the disadvantage of giving up local optimisation. Notice that incentive can also mean: you have got to do it! A shift to education in the Cloud creates value at a national level but challenges the revenue streams of publishers selling paper books. Well, in this case you can either provide incentives to publishers, helping them to create digital education services, or you can simply dictate that all books have to be digitally available in the education clouds starting next year. And that’s it.
A fourth step is the one that sees the Information Infrastructure as the real platform and considers hardware resources as occasional, incidental locations of services and data (data become information through processing). This is what I feel the Cloud is going to be in the future. And this cloud will live, mostly, at the edges of what we call the network infrastructure today. It will not be accessed through a limited number of pipes; it will actually be the result of a meshed network architecture including devices such as smartphones and media centres.
Indeed, if we look at where data/information is stored today we can immediately see that most of it is not in large data centres but in the billions of devices embedding storage capacity. If you multiply the 10 GB average you have in a smartphone today by the 20 million phones you have in Italy, you get around 200 PB of storage capacity, much more than what we have in the big data centres in Italy. And although the capacity of these data centres is going to increase over time, both the number of smartphones and their individual capacity will increase as well. So the gap between what you have at the edges and what you call the Cloud today is going to widen dramatically, by a factor of millions for Italy (you can well assume that the increase in size of the centralised data centres will match the increase in size of each smartphone/media centre, and collectively these are millions more than the data centres). We already have 512 GB compact flash cards on the market and you can expect 1 TB ones next year. That would lead the aggregate storage capacity of smartphones in Italy to exceed an exabyte in just a few years.
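The back-of-the-envelope arithmetic is easy to verify. A few lines, using the rough figures quoted above (estimates, of course, not measurements), show how the aggregate edge capacity scales:

```python
# Rough figures from the text: Italian smartphones only.
phones = 20_000_000             # phones in Italy
gb_per_phone_today = 10         # average on-board storage today
gb_per_phone_soon = 1_000       # 1 TB flash cards expected shortly

today = phones * gb_per_phone_today   # aggregate capacity, in GB
soon = phones * gb_per_phone_soon

print(f"today: {today / 1e6:.0f} PB")   # -> 200 PB
print(f"soon:  {soon / 1e9:.0f} EB")    # -> 20 EB
```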
Of course one may wonder whether such a fragmented capacity can be exploited. At Berkeley, a few years ago, they launched the OceanStore project: a way to exploit storage capacity at the edges to provide highly reliable storage that looked a lot like a cloud. So the answer is yes, we have the technology to do that. What is hampering us is the limited battery charge, which today discourages any feature that would increase the drain. But we are getting closer and closer to addressing these issues in a satisfactory manner.
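OceanStore spread redundant fragments over unreliable machines so that data survives individual nodes disappearing. Here is a toy sketch of that idea, using simple XOR parity rather than the Reed-Solomon style erasure codes a real system would use:

```python
# Toy erasure coding in the OceanStore spirit: k data fragments plus one
# XOR parity fragment, spread over k+1 edge devices; any single device can
# vanish (flat battery, out of range) without losing the data.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, k: int = 3) -> list:
    size = -(-len(data) // k)                     # ceil(len/k)
    frags = [data[i*size:(i+1)*size].ljust(size, b"\0") for i in range(k)]
    return frags + [reduce(xor_bytes, frags)]     # append parity fragment

def recover(frags: list) -> bytes:
    missing = frags.index(None)                   # the lost device's fragment
    frags[missing] = reduce(xor_bytes, (f for f in frags if f is not None))
    return b"".join(frags[:-1]).rstrip(b"\0")     # drop parity and padding

frags = split_with_parity(b"data living at the edge")
frags[1] = None                                   # one smartphone goes offline
print(recover(frags))                             # -> b'data living at the edge'
```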
There is actually more. Each smartphone has a processing capability that will grow in sync with the services enabled by the data it contains (and can reach). Hence the transformation of data into information is likely to be far more efficient, computationally, in the smartphone, or anyhow at the “consumption” point.
We are likely to see a host of devices, not just smartphones, having huge processing and storage capacity, such as television sets. You will see television sets rendering the information to be displayed based on contextual/ambient data. Two actors talking and one pouring whiskey into a tumbler: this is what I see as I watch a movie. My grandson is watching the same movie in his room with some friends. He will see the actor pouring a Coca-Cola into the tumbler. The contextualisation is made by the television set, which is “aware” of who is watching, and the rendering is done in real time. And it is feasible with today’s technology, using MPEG-4 and MPEG-21. It requires a lot of processing capacity, but that is becoming available. Today you have that sort of capacity in the new Mac desktop: it is 47,000 times more performant than the Cray 1 used to be, and it can process 4K video in real time. Just wait a few more years and you’ll get it in your television.
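The viewer-dependent rendering described above can be caricatured in a few lines. The audience profiles and asset names below are invented, and a real implementation would rely on MPEG-4 object coding and MPEG-21 digital item adaptation rather than a dictionary lookup:

```python
# Toy sketch of context-aware rendering: the television picks the overlay
# asset for a product-placement slot based on who is watching.
# Profiles and asset names are purely illustrative.
SCENE_SLOT = "glass_content"        # an MPEG-4-style object slot in the scene

ASSETS = {
    "adult": "whiskey.mp4",         # what I see
    "minor": "coca_cola.mp4",       # what my grandson sees
}

def render_slot(viewer_ages: list) -> str:
    """Return the asset for the slot, adapted to the youngest viewer."""
    profile = "minor" if min(viewer_ages) < 18 else "adult"
    return ASSETS[profile]

print(render_slot([67]))        # -> whiskey.mp4
print(render_slot([14, 15]))    # -> coca_cola.mp4
```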
It is not just about myriads of points having huge storage and huge processing capacity: we are going to have (hundreds of) billions of tiny slivers of silicon (and carbon) with limited storage and processing capacity that will act as peripheral neurones of an aware ambient. And the data these neurones will be harvesting and processing will likely be as important as the huge storage and processing points, as well as the aggregation we have come to call the cloud. Notice that I am explicitly using the “neurone” paradigm, and not the “sensor” one, because I feel that in the future sensors will have local processing capabilities in addition to picking up data from the environment and transmitting it to the web. They will be able to “interpret” what they sense based on previous experience and on the context created by other sensors. Just as neurones do.
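To make the neurone/sensor distinction concrete, here is a hedged sketch: instead of shipping every raw sample to the web, the device keeps a short history, interprets each new reading against it, and transmits only when something meaningful happens. The window size and the 3-sigma rule are illustrative choices of mine, not a standard:

```python
# Toy "peripheral neurone": local interpretation before transmission.
from collections import deque
from statistics import mean, stdev

class NeuronSensor:
    def __init__(self, window: int = 50):
        self.history = deque(maxlen=window)   # the device's local "experience"

    def sense(self, value: float):
        """Return an event only if the reading is anomalous in context."""
        event = None
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > 3 * sigma:
                event = f"anomaly: {value} (expected {mu:.1f} +/- {sigma:.1f})"
        self.history.append(value)
        return event                          # None -> nothing is transmitted

sensor = NeuronSensor()
for reading in [20.1, 20.3, 19.9] * 5 + [35.0]:
    event = sensor.sense(reading)
    if event:
        print(event)                          # fires only on the 35.0 reading
```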
In the brain, connections and nodes both support storage, processing and abstraction, from data to information.
Looking ahead 20 years, I can see a continuum of storage and processing capability spread out in any object and any ambient. Technologies like printed electronics and ultra-high-speed buses, with capacity far exceeding a Tbps (we already have a few prototypes of Tbps wireless communications), are going to support a ubiquitous communication fabric. We will no longer be talking about a network infrastructure but about a fabric that is actually a quilt, each tile independently developed by somebody and seamlessly connected to the others. The uniformity will be achieved at a higher level, through a software defined network leveraging all the processing, communications and storage capacity available.
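The quilt with software-defined uniformity on top can be sketched as a controller that only sees an abstract graph of tiles and picks paths according to the capacity each tile advertises. The tiles and capacities below are invented; I use the networkx library purely for brevity:

```python
# Toy software-defined view of the "quilt": each tile advertises links and
# capacities; a controller computes paths over the abstract graph without
# caring who owns or built each tile. Names and figures are illustrative.
import networkx as nx

fabric = nx.Graph()
# (tile A, tile B, capacity in Gbps): independently owned segments
for a, b, gbps in [("home_mesh", "street_cell", 10),
                   ("street_cell", "metro_tile", 400),
                   ("home_mesh", "neighbour_mesh", 1),
                   ("neighbour_mesh", "metro_tile", 100)]:
    fabric.add_edge(a, b, weight=1 / gbps)    # prefer high-capacity links

path = nx.shortest_path(fabric, "home_mesh", "metro_tile", weight="weight")
print(path)    # -> ['home_mesh', 'street_cell', 'metro_tile']
```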
We are therefore, I think, going to see the Cloud morphing into a fog. This is becoming technologically possible and it makes economic sense. However, there are issues, including technological ones, that need to be addressed and solved.
Ownership domains can no longer be tied to a physical infrastructure. In a way, today’s cloud is taking the first step in this direction: an enterprise no longer owns the hardware that supports its processing and storage needs. These needs have to be tied to the concept of information ownership and encapsulated services.
The present approach, either transparent access to information (nobody owns it) or information closed behind walls that can only be overcome through a contractual agreement, is not conducive to a healthy leveraging of information. What is needed is a scale of greys that fosters business and supports remuneration: ideally, the more economic value you are leveraging out of the use of a piece of information, the more you should be prepared to share with the ecosystem in which that information was produced and made available. Easier said than done.
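One way to read this scale of greys is as a monotonic sharing rule: the fraction returned to the ecosystem grows with the value extracted. The shape and parameters below are entirely my own invention, just to make the idea tangible:

```python
# Toy "scale of greys": the share owed to the ecosystem grows with the
# value extracted, saturating below 100%. All parameters are invented.
import math

def ecosystem_share(value_extracted: float,
                    floor: float = 0.05, ceiling: float = 0.5,
                    scale: float = 10_000.0) -> float:
    """Fraction of the extracted value returned to the data's ecosystem."""
    return floor + (ceiling - floor) * (1 - math.exp(-value_extracted / scale))

for v in (100, 10_000, 1_000_000):
    print(f"value {v:>9}: share {ecosystem_share(v):.0%}")
    # -> roughly 5%, 33%, 50%
```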
Enterprises will need to change their working relationships, from the processes and value chains of today to a meshed set of relationships within an ecosystem, where the role of lawyers changes from setting contractual obligations to establishing a workable framework. It is this framework that I feel should be the focus of large telecom operators (and regulators).
I can see an evolution where processing, communications and storage are no longer seen as separate components. This is difficult to imagine, since we have grown our ICT with the von Neumann and Turing machine paradigms, where there is a clear separation of these three components. In a way, today’s cloud is a tiny and timid step in this direction; the future fog will be a further and more decisive step, with many devices operating as autonomous systems in an infrastructure context that creates self-adaptation stimuli. And we can look at new paradigms that, as a matter of fact, are pretty old: the information infrastructure represented by our brain, where it is not possible to separate communications from processing and from storage. I think it is not by chance that the EU decided to invest heavily in understanding the human brain. Its ten-year flagship, the Human Brain Project, has among its motivations the goal of finding new paradigms that can move ICT to the next step.
After the panel presentations a person from the audience came to me and said: I hear your story, but I also heard a different one, which placed a lot less emphasis on the capability of the edges (like smartphones), advocating that it was better to keep all processing and storage in the “network cloud” for the efficiency that results from scale.
My answer was that I thought about that considerably several years ago, and although I find that solution more efficient from a technical point of view (in the end, all you might need is to bring your identity with you, that is yourself, once biometrics is sufficiently evolved, and use any gateway in the ambient, any surface, any screen, to tap into whatever processing and storage capability you might ever need), I don’t think it will come to pass, and that is because of market reasons.
The market is made of consumers and producers (it is also shaped by regulators, who hypothetically might aim for the greater good but in practice negotiate between consumers and producers), and both of them, for different reasons, don’t like such a solution, which, I repeat, is technically sound and most probably more efficient.
Consumers love to “consume”, that is, to buy. They protest about the price of goods only because they don’t have enough money to fulfil their “drive” to shop. They love to have alternatives, a bounty of products to choose from and to buy.
Producers need to win the market, hence they strive to offer new things and to push consumers to get their goods.
These two forces are driving innovation at the edges and this is one of the reasons why I am confident that eventually the Cloud will move to the edges.