Archive for the ‘Pervasive Computing & Networking’ Category

Fog: the Cloud at the edges

Thursday, January 23rd, 2014 by Roberto Saracco
New software from BitTorrent can synchronize files between computers and mobile devices without storing them in a data center.

In other posts I mentioned the possible evolution of the Cloud paradigm into a Fog paradigm: rather than taking place in dedicated data centres (distributed to some extent for reliability purposes), data storage and processing would leverage the storage and processing capacity that devices provide at the edges of the network.

Now BitTorrent is offering a beta version of software it developed to do just that. You can synchronise data as if you were using a Cloud service, but without the Cloud: the servers in the Cloud are replaced by devices that you, or your friends, own. BitTorrent sets up an encrypted connection between the devices, stores your data on them and seamlessly keeps them in sync.
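
Conceptually, serverless synchronisation is straightforward: each device summarises its folder with content hashes, swaps the summary with its peer over the encrypted link, and transfers only what differs. Here is a minimal Python sketch of that reconciliation step (my own illustration of the idea, not BitTorrent's actual protocol):

```python
import hashlib
from pathlib import Path

def folder_digest(folder: Path) -> dict:
    """Map each file's relative path to the SHA-256 hash of its contents."""
    return {
        str(f.relative_to(folder)): hashlib.sha256(f.read_bytes()).hexdigest()
        for f in folder.rglob("*") if f.is_file()
    }

def files_to_send(local: dict, remote: dict) -> list:
    """Files the peer is missing, or holds in a different version."""
    return [path for path, h in local.items() if remote.get(path) != h]

# Each peer computes its digest and swaps it with the other device over the
# encrypted channel; only the differing files then travel between the two.
mine = folder_digest(Path.home() / "Sync")   # hypothetical sync folder
theirs = {}  # digest received from the peer; left empty here for illustration
print(files_to_send(mine, theirs))
```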

Of course, the two devices need to be online at the same time, since there is no “store-and-forward”. And this is also the strong point of the offer: nobody is storing your data on their servers.

With all the buzz following the disclosure of what the NSA is doing, and the suspicion of (forced) collaboration by Cloud owners in letting the NSA peek at private citizens’ data, this proposal from BitTorrent may find a receptive market.

Whether or not you belong to the cluster of people annoyed by potential data eavesdropping, the possibility of synchronising your data among your devices -all at the edges- is something I am pretty sure many would like to have and use.
And this takes us a step further towards a decentralised, pervasive network of which all devices are an integral part.

A “Software-Defined-*” transformation

Wednesday, January 22nd, 2014 by Antonio Manzalini

I stumbled upon the book “Software Takes Command” by Lev Manovich (2008), in which the author argues that “What electricity and the combustion engine were to the early 20th century, software is to the early 21st century”.

Still from Cue Visualizer tool

As a matter of fact, we’re witnessing every day how “digitalization” and ICT advances are transforming the culture and the economy of our society: we should realize that we’re already living in the “economy of information” (or knowledge), and that most (if not all) socio-economic “processes” should, and will, evolve to take advantage of this deep technological transformation. This will require different business rules, and very different kinds of jobs, workers and skills from the economy of the 20th century, which was based mainly on industrial factories, manufacturing and manual work. Economic and cultural value in the “economy of information” is, and will be, placed on information, knowledge, creativity and the “mind-power” to cope with a fast-changing socio-economic environment. This transformation is unstoppable, because it brings cost reductions, a sort of periodic “optimization” enabled by human creativity.

In the telecommunications sector, too, we’re realizing how the adoption of “software” in network and service infrastructures will accelerate the pace of innovation (as it does continuously in the IT domain) and will reduce operational costs (e.g., through optimizations exploiting big data for autonomic operations). But this will also impact the biz rules: for example, it will move “competition” from hardware to software, lowering the threshold for several Players to enter the sector. This will increase the overall “complexity” of the nature and dynamics of the market-place (related to the deeper integration of IT and network systems and processes and, even more, to the shift of attention from the big networks to the edge terminals, machines and smart things).

In summary, it’s not just a matter of understanding whether the SDN and NFV paradigms will be advantageous for future networks. On one side, it’s a matter of “softwarising” (i.e., automating, optimizing) processes to adapt to an overall transformation that is already changing the rules of the biz; on the other, it’s about moving to faster deployments, where fewer investments are required and revenues are expected in the short term. And the “edge” is the ideal arena where new forms of cooperation and competition, and new biz models, will emerge.

Imagine an ecosystem where “trusted” network services and functions (L4 to L7), provided as apps by different Developers, are exchanged and traded as in a market-place: end Users (Business and even Residential ones…) can browse and select the virtual functions that best match their needs.

Exascale computing is on the horizon

Tuesday, January 21st, 2014 by Roberto Saracco
The K computer (credit: RIKEN)

RIKEN has announced that it has been awarded the task of developing an exascale supercomputer, a computer able to process in excess of one billion billion (10^18) operations per second (that is, 250 billion times the processing power used to bring man to the Moon in 1969).

RIKEN is the Japanese research institute that developed, and currently maintains, the K supercomputer, which became operational in 2012 and was for a year the fastest supercomputer in the world (superseded in 2013 by Tianhe-2, from China).

The planned exascale supercomputer should start crunching numbers in 2020, making it some 100 times faster than the K supercomputer.
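
The factor of 100 is easy to verify: the “exa” prefix means 10^18 operations per second, while K delivers roughly 10 petaflops. A one-line check:

```python
exascale = 1e18        # one billion billion operations per second
k_computer = 10.5e15   # the K computer's roughly 10 petaflops
print(exascale / k_computer)   # ~95, i.e. about 100 times faster
```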

At this level of processing speed it becomes possible to design new drugs at the molecular level, by simulating the shapes of molecules, their mutual interactions and the forces at play. Consider that in a single cell we have roughly 100 billion molecules (100,000 billion atoms). Simulating all the interactions that may take place as a drug enters the cell is a mind-boggling endeavour. And yet, it is coming within reach of computational advances.
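
To get a feeling for why this is mind-boggling, consider a naive simulation that evaluates every pairwise interaction at every time step (a purely illustrative, back-of-the-envelope estimate):

```python
molecules = 1e11             # molecules in a single cell, per the figure above
pairs = molecules ** 2 / 2   # naive all-pairs interaction count: ~5e21
exaflops = 1e18              # exascale budget: operations per second

# Even at a single operation per pair, one time step of such a naive
# simulation would keep an exascale machine busy for over an hour:
print(pairs / exaflops, "seconds per step")   # 5000.0 seconds
```

Real molecular simulations of course rely on cutoffs and smarter-than-quadratic algorithms, but the estimate shows why every extra order of magnitude of computing power expands what can be simulated.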

There are several other applications as well: as we become able to harvest more and more data about our environment, we gain the possibility of understanding analytically the deep processes playing out under the surface; at the same time, though, the need for computation capability grows exponentially.

Computational simulations of photosystem II inside a realistic, water-containing membrane reveal the existence of previously hidden molecular pathways that play critical roles in photosynthesis. Credits: American Chemical Society

Home sweet home: what about privacy?

Tuesday, January 14th, 2014 by Roberto Saracco

My home used to be my castle, a place keeping me “a million miles away behind the door” (if you still remember that song…).

WiTrack’s Localization Algorithm. The time estimate from a receive antenna defines an ellipse whose foci are the transmit antenna (Tx) and the receive antenna (Rx). (a) shows that WiTrack can uniquely localize a person using the intersection of two ellipses. (b) shows that in 3D, the problem translates into an intersection of three ellipsoids. (Credit: Jason Dorfman, CSAIL)

The evolution of technology is casting long shadows on this idea of a home as a cocoon. Just a few days ago I posted a few thoughts on the fast evolution of technology versus the slower evolution of ethics. Now I have stumbled upon news of work being done at the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT, where they are perfecting a system that detects people’s movements inside a house.

The system is based on the reflection created by a human body moving through an electromagnetic field. The field easily goes through walls, and the perturbation created by a moving body can be detected, providing sufficient information for software to reconstruct shape and movement with very precise 3D tracking.
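
The geometry in the caption lends itself to a compact numerical sketch: each time-of-flight measurement pins the person to an ellipse whose foci are the transmitter and one receiver, and the position is where the ellipses intersect. Below is a minimal least-squares illustration in Python (the positions, measurements and solver choice are my own assumptions, not CSAIL’s code):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 2D geometry (metres): one transmitter, two receivers.
tx = np.array([0.0, 0.0])
rxs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def path_length(p, rx):
    """Tx -> person -> Rx distance; obtained from time-of-flight * c."""
    return np.linalg.norm(p - tx) + np.linalg.norm(p - rx)

true_pos = np.array([2.0, 3.0])
measured = [path_length(true_pos, rx) for rx in rxs]  # stand-in for radio data

def residuals(p):
    # Each measurement constrains the person to an ellipse with foci Tx, Rx;
    # the residuals vanish where the ellipses intersect.
    return [path_length(p, rx) - d for rx, d in zip(rxs, measured)]

estimate = least_squares(residuals, x0=np.array([1.5, 2.0])).x
print(estimate)  # should recover roughly [2, 3]; in 3D a third receiver
                 # turns the intersecting ellipses into ellipsoids
```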

According to the researchers, this system, WiTrack, is much better than tracking technology based on WiFi signals. It uses very low power, some 100 times less than a WiFi signal and 1,000 times less than the signals generated by a cell phone, yet it achieves much better precision thanks to the way the signal is constructed.

In presenting the system they offer some ideas for applications, first of all games that become more engaging when you can move around your home (imagine playing hide and seek over the Internet with friends who are hiding in their homes while you are in yours). They also envisage interesting potential as a sensing device to monitor the elderly, spotting whether they move and whether they fall.

Surely there can be plenty of nice applications, but still, the idea that my home’s walls no longer separate me from the world makes me uneasy.

Have we just created a business opportunity for companies to develop electromagnetic shields to install in our homes, to bring them back to the castles they used to be?

What if…

Friday, January 10th, 2014 by Roberto Saracco

What if the idea of creating networks starting at the edges becomes reality? There have been several discussions of a paradigm shift that sees the evolution of the network as driven by the edges, and speculation that devices like smartphones are already able to create their own halo net, potentially providing connectivity to a variety of other devices (IoT) and to one another, thus getting rid (to a certain extent) of the Network (with the capital N).

Internet inside: GM says that high-speed Wi-Fi hot spots should be a standard feature of cars. They will be available in most 2015 Chevrolet models.

The stumbling block has remained power consumption: using your cell phone as a network node will drain its battery in a very short time, at a pace you cannot afford. So others have started to suggest that in an urban environment cars might play the role of network nodes. But of course, that would require convincing automakers to embrace this evolution.

Well, at CES we saw GM announce that most of their 2015 Chevrolet models will be connected to the Network (capital N) via LTE/4G, through an agreement with AT&T, and will provide a WiFi connectivity space “inside” and “outside” the car. In practice, they will provide a network (with a lower-case “n”) at the edges.

Clearly this is not the aim of the announcement, but once you are creating communication points and enabling local communication areas (halo nets), the step towards creating a real network at the edges is but a small one.

We saw this happen with the tethering provided by OSV a few years ago. People started to use their iPhones as communication nodes, creating a communication area onto which other devices can piggyback. Some Operators implemented restrictions to block this feature, or enabled it only for high-spending customers. But more and more we are seeing competition force Operators to relax these constraints.

I am pretty sure the same will happen for networks created by cars. What we needed was cars creating communication areas, and with this announcement we know that is just a year away.

Microrobots are copying from unicellular life forms

Sunday, January 5th, 2014 by Roberto Saracco
Simulation of euglenid movement (credit: SISSA)

Researchers at the universities of Catalonia and Trieste have been awarded an ERC (European Research Council) grant of €1.3M to explore the feasibility of creating microrobots that copy unicellular organisms, in particular those of the euglenid family.

In their view, future robots will look much more like an octopus tentacle or an elephant trunk than like a mechanical crane. Their bodies will be soft and flexible, morphing into any shape to adapt to the situation.

This will be particularly true for robots that will roam our bodies. For this kind of microrobot, researchers are looking to mimic a class of unicellular organisms, the euglenids.

Their work uses simulation tools to study the nano-structures needed to allow movement and change of shape without loss of functionality. In perspective, these microrobots might be used to deliver drugs to the right cells, remove blood clots from vessels or stop bleeding.

Surely this is basic research and we are quite far from application. Nevertheless, the trend is clear: in the coming decades we will see more and more active bots in our environment and in our bodies, and that will surely change the scenario, bringing great solutions and challenging new issues, including ethical ones.

Memcomputing…beyond von Neumann

Saturday, December 28th, 2013 by Antonio Manzalini

Memristive devices are capable of integrating information storage and processing in the same physical computing architecture: this offers an alternative form of computing (called memcomputing) to the conventional von Neumann paradigm. Amazingly, memcomputing is basically how the brain operates: neurons and their interconnections (the synapses) store and process information in the same location.
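
To make “storage and processing in the same place” concrete, here is a minimal simulation of the best-known memristor toy model, the linear drift model of Strukov et al. (2008); the parameter values are illustrative, and this is my own sketch, not ORNL’s ferroelectric system:

```python
import numpy as np

# Linear-drift memristor (Strukov et al., 2008): the resistance depends on
# how much charge has flowed through the device, i.e. the device "remembers".
R_ON, R_OFF = 100.0, 16e3      # ohms: fully doped vs. fully undoped film
D, MU = 10e-9, 1e-14           # film thickness (m), dopant mobility (m^2/V/s)

w = D / 2                      # state variable: width of the doped region
dt = 1e-4
current, voltage = [], []
for t in np.arange(0.0, 0.2, dt):
    v = np.sin(2 * np.pi * 10 * t)                 # sinusoidal drive (V)
    R = R_ON * (w / D) + R_OFF * (1 - w / D)       # resistance set by the state
    i = v / R
    w = np.clip(w + MU * (R_ON / D) * i * dt, 0.0, D)  # state drifts with current
    current.append(i)
    voltage.append(v)

# Plotting current against voltage should trace the pinched hysteresis loop
# that is the memristor's signature: the same voltage gives different
# currents depending on the history of the device.
```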

An illustration of memcomputing (credit: ORNL)

Recently, the Department of Energy’s Oak Ridge National Laboratory observed an unexpected behavior in ferroelectric materials capable of offering memristive properties. Ferroelectric materials are well known for their ability to switch polarization spontaneously when an electric field is applied; what the researchers did was use a scanning probe microscope to write areas of switched polarization, called domains.

Surprisingly, when the distance between domains was reduced below a certain value, the domains began forming complex and unpredictable patterns on the material’s surface: a complex behavior that could be explained through chaos theory! Look at this paper. The researchers claim that, potentially, these properties could be used to create a novel generation of devices for memcomputing: while conventional technology requires several transistors to realize a logic gate, ferroelectric materials could offer a transistor-less approach to logic, similar to what happens in actual brains.

And not just alternative computing: another interesting potential application of this research is the development of experimental systems and tools for studying the physics of complex dynamical systems around the phase transition towards chaos… applicable to a number of areas, from smart materials to socio-dynamics, from economics to the future Internet and Artificial Intelligence.
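
As a side note for readers unfamiliar with the phrase “phase transition towards chaos”: the classic toy illustration is the logistic map, whose behavior shifts from a fixed point to oscillation to chaos as a single parameter grows. A few lines of Python show the effect (purely illustrative, and unrelated to the ferroelectric experiments themselves):

```python
# Logistic map x -> r*x*(1-x): the canonical toy model of a route to chaos.
def trajectory(r, x=0.2, skip=500, keep=8):
    for _ in range(skip):        # discard the initial transient
        x = r * x * (1 - x)
    out = []
    for _ in range(keep):
        x = r * x * (1 - x)
        out.append(round(x, 3))
    return out

print(trajectory(2.8))   # settles onto a single fixed point
print(trajectory(3.2))   # oscillates between two values
print(trajectory(3.9))   # no repeating pattern: deterministic chaos
```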

Stanene: does it sound a little bit like graphene?

Friday, December 20th, 2013 by Roberto Saracco

It looks like scientists and researchers are focussing either on 3D (printing in 3D) or on 2D. Wait a moment. What does 2D mean?

Adding fluorine atoms (yellow) to a single layer of tin atoms (gray) should allow a predicted new material, stanene, to conduct electricity perfectly along its edges (blue and red arrows) at temperatures up to 100 degrees Celsius (212 Fahrenheit) (credit: Yong Xu/Tsinghua University; Greg Stewart/SLAC)

Any surface is 2D, so that is nothing new. What scientists are trying to do is create surfaces (2D) without the third dimension. That, of course, is not possible, but creating a material that is just one atom thick gets pretty close to the idea of a pure surface, an object having just two dimensions.

Graphene is the material that first comes to mind. The possibility of creating a surface made of a single layer of carbon atoms opens up new opportunities for inventing new electronic components, new supercapacitors, new sensors, super-strength materials…

Some of these new characteristics are related to the carbon atoms themselves, but most are a consequence of the physical structure of the material: a surface one atom thick.

Hence, one might wonder what would happen if we replaced the carbon atoms with some other type of atom.

A team of theoretical physicists at SLAC and Stanford University has studied the characteristics that tin could have if it were layered in a one-atom-thick sheet. According to the results, published in a paper in Physical Review Letters, such a sheet would be able to conduct electricity with 100% efficiency, in other terms with no heat generation, at room temperature (actually, their studies indicate such perfect conductivity up to 100 degrees Celsius).

In practice this means having a superconductor working at ambient temperature, opening the door to amazing applications, in chips to start with. Such a tin sheet would provide an ideal conductor for use in chips, significantly decreasing their power consumption (notice that you cannot process information on a zero power budget: the second law of thermodynamics, with its twin face of entropy growth, rears its ugly head and makes this impossible).

Moving from theory to practice is what researchers at Stanford are now busy doing. And once you know what to look for it gets easier to stumble onto it.

However, the reason for publishing this news is not just the intriguing possibility of using tin as a superconductor (by the way, the name stanene derives from the Latin name for tin, stannum, with the “-ene” suffix, as in graphene, identifying a single-atom layer). It is because this shows that we can now design the characteristics of a material on a computer and then move to the lab to test the design.

This is the magic of nanotechnology: being able to invent new materials by composing atoms into specific structures.

Everything will learn in 5 years

Thursday, December 19th, 2013 by Antonio Manzalini

In this year’s “5 in 5” (5 technology innovations impacting life within 5 years), IBM argued that a new era of cognitive systems is about to begin, where basically “everything will be able to learn, reason and engage with humans”.

Credit: IBM

We started working on similar ideas at the beginning of 2013 in the Activity “Smart Networks at the Edge”, funded by EIT ICT Labs. In fact, the Activity’s motto has been “everything will be a node”: everything capable of reacting to environmental context conditions and of self-configuring and self-adapting dynamically, to support the automated provisioning of ICT services. In this sense, one might better say “every network around the Users will learn” (and the learning capability of the brain’s neural networks is the best metaphor one can take).

Imagine terminals and edge nodes (deployed around Users) that are very simple (just like neurons, metaphorically) and low cost, adopting some of the principles of SDN/NFV, whereby all network functions are developed as open-source s/w executed on (Virtual Machines hosted on) standard h/w. These nodes will also be able to aggregate or disaggregate their logical resources (e.g., Virtual Machines) dynamically, literally flocking on demand from the very Edges up to the Data Centres. Out of these local interactions, which have to be automatically managed and orchestrated, a sort of collective intelligence will emerge, able to complement or replace human-driven, centralized provisioning and management processes.

One may argue that this vision will create enormous “complexity”, making the provisioning of communications, or any other ICT service, almost impossible. Well, it’s true that it’s a change of paradigm, but at the end of the day it’s about designing s/w nodes embedding simple local rules (bottom-up) that meet global automated orchestration (top-down). It’s “guided self-organization”.

Network Operators have the opportunity to develop new biz models by taming this “complexity” all the way to the Users, and the Users are the biggest asset they have today. New ecosystems could develop: just imagine, as a simple example, a manufacturer having the chance to transform a product into a means of providing (or enabling access to) ICT services, or of developing a symmetrical channel to stay in touch with the User of the product…

Good job! Bad job. And “IT” learns…

Tuesday, December 17th, 2013 by Roberto Saracco

I just stumbled upon an interesting overview of the progress made, and still to be made, in emulating the brain’s capability to process data and create a meaningful understanding of how to behave in the world. Notice that I did not say an “understanding of the world”.

IBM used this simulation of long-range neural pathways in a macaque monkey to guide the design of neuromorphic chips. Credits: IBM

The article, Thinking in Silicon, is worth reading and I guess you will read it. So no point in a detailed summary here.

I would just like to point out some aspects that may shape the evolution of computing in the next decade.

There is a need to dramatically reduce the power consumption of processing if we really want to create pervasive awareness in the environment. A single fly has a processing capacity that is a trifle compared with the one you are holding in your hand with your cell phone. And it shows, since your cell phone dissipates quite a lot of heat, a quantity that would fry the fly…

And yet, your sophisticated cell phone, with its huge computation capabilities, cannot react to changes in its surroundings, nor “understand” how to behave in them, as a fly is obviously capable of doing.

Some different sort of computation must be going on. A fly, scientists have discovered, uses about 5,000 neurons to analyse its position in space as it flies and to determine what to do (how to control its wings) to move where it wants, avoiding obstacles and escaping dangers. IBM has created a chip, SyNAPSE (I reported on it), that can mimic the working of neurons, using about 6,000 transistors per neuron. That seems a lot, but if you do the numbers, it turns out that mimicking 5,000 neurons requires just 30 million transistors, nothing if you consider that a chip today can have over a billion of them. And yet, although we have the processing power, we do not have the computation paradigm to make sense of the environment.
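
The multiplication is worth spelling out (the two input figures are the ones quoted above):

```python
fly_neurons = 5_000              # neurons the fly devotes to flight control
transistors_per_neuron = 6_000   # IBM's figure for mimicking one neuron
chip_transistors = 1_000_000_000 # a modern billion-transistor chip

needed = fly_neurons * transistors_per_neuron
print(f"{needed:,} transistors needed")                # 30,000,000
print(f"{needed / chip_transistors:.0%} of the chip")  # 3%
```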

We have been able to make incredible progress in computation thanks to the amazing increase in processing power, but at the cost of huge power consumption. Google has been able to spot a cat or a person’s face in an image, but to do that it uses tens of thousands of processors… and MW of power. Our brain can do the same with just 50 W.

There is growing agreement that in order to make significant progress we can no longer just improve processing capability; we need to change our computation paradigm. And the hope is that by understanding the computation paradigm of the brain (any brain…) we can do just that.

For the time being, some progress has already been made in this direction. HRL (Hughes Research Labs) has been able to create a chip that learns to play Pong just by being told “you did good” or “you did bad”. You don’t have to program it to catch the ball by moving the bar; it works that out by itself.
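
Learning from nothing but a scalar reward is, on the software side, the textbook reinforcement-learning setup. As an illustration of the principle only (HRL’s chip is spiking neuromorphic hardware; everything below, the toy game included, is my own assumption), here is tabular Q-learning on a miniature catch-the-ball game:

```python
import random

# Toy "catch the falling ball" game: the ball descends one of N columns while
# a paddle at the bottom moves left/stay/right. The only feedback is a scalar
# reward at the end of the episode: +1 ("good job") for a catch, -1 ("bad
# job") for a miss.
N, ACTIONS = 5, (-1, 0, 1)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {}  # state (ball_col, paddle_col, height) -> {action: estimated value}

def q(state):
    return Q.setdefault(state, {a: 0.0 for a in ACTIONS})

def policy(state):
    if random.random() < EPS:
        return random.choice(ACTIONS)        # explore occasionally
    return max(q(state), key=q(state).get)   # otherwise exploit

for episode in range(20000):
    ball, paddle = random.randrange(N), random.randrange(N)
    for height in range(N, 0, -1):           # ball falls one row per step
        state = (ball, paddle, height)
        action = policy(state)
        paddle = min(max(paddle + action, 0), N - 1)
        done = height == 1
        reward = (1.0 if paddle == ball else -1.0) if done else 0.0
        future = 0.0 if done else max(q((ball, paddle, height - 1)).values())
        q(state)[action] += ALPHA * (reward + GAMMA * future - q(state)[action])
```

After enough episodes, the greedy policy reliably steers the paddle under the falling ball: nobody programmed that rule in; the learner extracted it from nothing but the final “good/bad” signal.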

Of course, not everyone agrees on the direction to follow. Some, like HRL, claim that it would be enough to mimic some aspects of brain processing to create a new computation paradigm; others say that you would need to simulate all the interplay of molecules inside each neuron, dendrite and synapse to get the same computation. And some say that in the brain there is no difference between processing (the physical support of computation) and computation (the manipulation of data).