Posts Tagged ‘future networks’

Power Law: from neurons to edge networks

Tuesday, June 26th, 2012 by Antonio Manzalini

Neuroscientists at University College London (UCL) have found that there is a simple pattern modeling the tree-like shape of the brain’s neurons. They have shown how a simple computer program, connecting points with as little wiring as possible, can produce tree-like shapes very similar to those of real neurons. These shapes follow a power law, a mathematical relationship quite common across the natural world and underlying many complex structures. This is the law:

L = (3/(4π))^(1/3) × V^(1/3) × n^(2/3)

where n is the number of dendritic sections making up the tree, L is the total length of these sections, and V is the total volume spanned by the tree.

Neuron shape model: target points (red) distributed in a spherical volume and connected to optimize wiring in a tree (black) (credit: H. Cuntz et al./PNAS)
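
Out of curiosity, the geometric experiment behind the figure can be mimicked in a few lines of Python. This is only a rough sketch (assuming numpy and scipy are available, and using a minimum spanning tree as a crude proxy for the optimized dendritic wiring, not the UCL model itself): scatter n target points in a sphere, connect them as cheaply as possible, and compare the total cable length with the power-law prediction above.

```python
# Rough numerical check of the wiring power law (illustrative sketch only):
# connect n random points in a sphere with a minimum spanning tree and compare
# the total length with L = (3/(4*pi))^(1/3) * V^(1/3) * n^(2/3).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def sample_sphere(n, radius=1.0, seed=0):
    """Uniform random target points inside a sphere of the given radius."""
    rng = np.random.default_rng(seed)
    directions = rng.normal(size=(n, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = radius * rng.random(n) ** (1.0 / 3.0)   # uniform in volume
    return directions * radii[:, None]

def total_wiring(points):
    """Total edge length of a minimum spanning tree over the points."""
    distances = squareform(pdist(points))
    return minimum_spanning_tree(distances).sum()

n, radius = 500, 1.0
V = 4.0 / 3.0 * np.pi * radius ** 3
L_measured = total_wiring(sample_sphere(n, radius))
L_predicted = (3.0 / (4.0 * np.pi)) ** (1 / 3) * V ** (1 / 3) * n ** (2 / 3)
print(f"MST wiring: {L_measured:.1f}   power-law prediction: {L_predicted:.1f}")
```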

Similar theories about neuronal networks have already been published in the past. This time, the UCL neuroscientists tested the theory by examining neurons in the olfactory bulb, a part of the brain where new brain cells are constantly being formed, and found that the growth of these neurons indeed follows the power law, providing further evidence to support the theory. “The ultimate goal is to understand how the impenetrable neural jungle can give rise to the complexity of behavior”, said the UCL neuroscientists.

Why might these results be very interesting for us?

Many communication and social networks have power-law link distributions, containing a few nodes with very high degree and many with low degree. The high-connectivity nodes play the important role of hubs in communication and networking, a fact that can be exploited when designing efficient search algorithms. It has been shown that the Internet backbone and web page hyperlinks have power-law distributions. And the same distributions might be applicable to future edge networks.
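
As a toy illustration of such distributions (a minimal sketch assuming the Python networkx library is available), one can grow a preferential-attachment graph and inspect its degree distribution: a handful of hubs with very high degree and a long tail of low-degree nodes.

```python
# Sketch: grow a scale-free (preferential attachment) graph and look at its
# degree distribution, which is dominated by a few very high-degree hubs.
import collections
import networkx as nx

g = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)
degrees = [d for _, d in g.degree()]

print("five largest hub degrees:", sorted(degrees, reverse=True)[:5])
histogram = collections.Counter(degrees)
for k in sorted(histogram)[:8]:
    # the number of nodes of degree k falls off roughly as a power of k
    print(f"degree {k:3d}: {histogram[k]:5d} nodes")
```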

In fact, imagine edge networks evolving towards ensembles of huge numbers of interacting lightweight nodes capable of abstracting communication, processing and storage resources. Millions of nodes, like neurons, embedding simple “hard-wired rules”, will be capable of interacting, self-adapting and self-adjusting to cope with dynamic contexts (e.g. Users’ requests and business goals).

This is very similar to what is happening in brain networks…

Networks as “Optimizers” just emerging from micro-scale rules?

Thursday, June 14th, 2012 by Antonio Manzalini

In previous posts, we’ve elaborated how a network can be seen as an emergent property of a complex ecosystem. Network emergence is generally described as macro-scale properties resulting from micro-scale rules. In the prior art on network layering, we often find that layered networks can be defined as optimizers maximizing specific composed/aggregated utility functions. If we integrate these two perspectives, in a broader sense, we can define networks as the emergent property of sets of optimizers playing a game to maximize specific utility functions.

I can see this for natural ecosystems. Imagine a school of fish or, even better, an ants’ nest: the “communication” network between ants is an emergent property (producing self-organization) optimizing both the “life” of any single ant (which may have its own local utility function) and the organization of the overall community (which has an aggregated utility function for the nest). Actually, I’ve never heard of any risk of a “Tragedy of the Commons” in ants’ nests: evolution has selected those autonomic behaviors that avoid “breakdowns”, by keeping a delicate equilibrium of local vs global utility functions. A lesson from Nature on self-organization to be learnt.

Emergent collective behaviour (in a school of fish) from simple micro-scale rules

Can we approach future networks, as ecosystems of resources, in the same way? Which utility functions could we design for controlling a communication network? One can imagine functions of source rates, useful information, delay, energy consumption, etc.

In this direction, F. Kelly has shown that the TCP/IP protocol is a perfect example of an optimizer: its objective is to maximize the sum of source utilities (as functions of rates) subject to resource constraints. Indeed, each variant of congestion control protocol can be seen as a distributed algorithm maximizing a particular utility function, and the exact shape of the utility function can be reverse-engineered from the given protocol. Similarly, the Border Gateway Protocol (BGP) can be seen as a solution to the Stable Paths Problem, and contention-based Medium Access Control (MAC) protocols as a game-theoretic selfish utility maximization. Other utility functions could be User satisfaction (e.g. User-generated pricing following the end-to-end principle), resource allocation efficiency or different notions of fairness from network economics.
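
To make the “protocol as optimizer” view concrete, here is a minimal numerical sketch (Python with numpy; a textbook-style primal-dual iteration, not Kelly’s original formulation): sources pick rates that maximize a logarithmic utility minus the “price” of the links on their path, while each link raises its price whenever demand exceeds its capacity.

```python
# Sketch of network utility maximization (NUM): maximize the sum of log
# utilities of three sources subject to two link capacities, via a
# primal-dual gradient loop (a toy stand-in for congestion control).
import numpy as np

# routing matrix R[l, s] = 1 if source s crosses link l (toy topology)
R = np.array([[1, 1, 0],
              [1, 0, 1]], dtype=float)
capacity = np.array([1.0, 2.0])   # link capacities
prices = np.zeros(2)              # dual variables ("congestion signals")
step = 0.01

for _ in range(5000):
    # each source: maximize log(x) - x * (total price along its path)
    path_price = R.T @ prices
    rates = np.clip(1.0 / np.maximum(path_price, 1e-6), 1e-3, 10.0)
    # each link: raise its price if demand exceeds capacity, relax otherwise
    prices = np.maximum(prices + step * (R @ rates - capacity), 0.0)

print("rates:", np.round(rates, 3), " link loads:", np.round(R @ rates, 3))
```

The resulting allocation is the proportionally fair one; swapping the logarithm for another utility would, in the spirit of Kelly’s framework, reverse-engineer a different congestion control behavior.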

Then, modeling networks as the emergent property of a set of optimizers means considering management-control based on interacting controllers maximizing a combination (e.g. a weighted sum) or an aggregation (e.g. in multiplicative form) of several utility functions. Alternatively, we may say that management-control should look for the network’s Pareto optimality, or that it should play a non-cooperative dynamic game. In any case, this would imply looking at future network management-control with a different perspective, through the glasses of a deep vertical and horizontal network decomposition.

The emerging paradigm of Software Defined Networking (SDN) is about having a fully decoupled network control plane, so it can be seen from this broader perspective, at least at the edge of current infrastructures. In an SDN, control intelligence is (logically) centralized in software-based controllers. These controllers provide visibility and control over the network; they can ensure that access control, routing, traffic engineering, QoS, security and other policies are enforced consistently across the network infrastructure. Governing the interactions of these controllers would allow managing and optimizing an SDN according to certain policies, or utility functions.

In other words, approaches like SDN seem to pave the way for looking at future networks in a different way, as ecosystems of resources, where top-down governance (e.g. playing the role of evolution?) of sets of controllers could meet emergent properties arising from local, bottom-up autonomic behaviors (e.g. via local utilities).

But let’s go even beyond this (partly) engineered approach: can we build only on bottom-up emergent properties (just based on micro-scale rules) to get self-stabilizing future network ecosystems, indeed like in Nature? A great challenge towards 0-Capex, 0-Opex networks.

Beyond Shannon…legacy

Monday, March 26th, 2012 by Antonio Manzalini

Information permeates everything: from electrochemical information exchanged in networks of neurons, to biological information stored and processed in living cells, to business information, etc.

Our current understanding of information communication is still based on Claude Shannon’s seminal work of 1948, resulting in a general mathematical theory for reliable communication in the presence of noise.

Claude Shannon

Frederick P. Brooks, Jr., wrote in “The Great Challenges for Half Century Old Computer Science”: “Shannon performed an inestimable service by giving us a definition of Information and a metric for Information as communicated from place to place. We have no theory however that gives us a metric for the Information embodied in structure. . .”

Traditional information theory studies communication through the capacity of channels connecting two endpoints. This approach should be enhanced when considering wireless networks (see, for example, the posts on Edge Networks), where nodes relay information in a multi-hop manner over a time-varying topology.

In this direction, interestingly, this paper introduces the concept of spatio-temporal relaying: information is carried from a mobile transmitter (space) in its past (time) to a mobile receiver (space) in its future (time). Nodes form a path in a spatio-temporal space of information transfer: the quality of the transmission depends on the respective spatio-temporal positions of the transmitter and receiver. So a grand challenge is to extend Shannon’s capacity formula to multi-source wireless networks.

This may have impactful applications: recent research on MANETs has led to the definition of so-called “space-time capacity paradoxes”. Theoretically, the capacity of a multi-hop wireless network increases with node density and node mobility, in spite of the apparent effect of transmission interference.

Moreover, it has been shown that the theoretical capacity of a multi-hop wireless network is proportional to the square root of the network size (number of nodes). This promises enormous wireless capacity for ultra-dense networks! On the other hand, if you try testing this on WiFi networks, capacity tends to decrease with the number of nodes, rather than increase as theoretically predicted. This reflects the fact that the WiFi medium access protocol, primarily designed for wireless LANs, does not scale to multi-hop networks. A breakthrough seems to be possible here.
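
Just to visualize the square-root law (a sketch of the scaling only, with purely arbitrary constants and no real radio technology in mind):

```python
# Sketch: under the square-root capacity law, aggregate capacity grows with
# the number of nodes, but each node's share shrinks (constants are arbitrary).
import math

W = 10.0   # nominal per-link bandwidth, arbitrary units
for n in (10, 100, 1_000, 10_000, 100_000):
    aggregate = W * math.sqrt(n)   # total capacity ~ sqrt(n)
    per_node = aggregate / n       # each node's share ~ 1/sqrt(n)
    print(f"n = {n:6d}   aggregate ~ {aggregate:8.1f}   per node ~ {per_node:6.3f}")
```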

In these areas of study, the National Science Foundation has established the Science and Technology Center for Science of Information to advance science and technology through a new quantitative understanding of the representation, communication and processing of information in biological, physical, social and engineering systems.

How to mitigate the “hidden risk of meltdown…”

Wednesday, March 21st, 2012 by Antonio Manzalini

Networks are becoming more and more complex and dynamic, capable of interconnecting large numbers of resources (e.g., routers, switches, transport nodes, servers…), Users’ devices (e.g., smartphones, etc.) and, in the future, any machine (e.g. sensors, smart things, etc.) embedding communication capabilities.

Future networks will be similar to complex systems where global properties and effects can emerge abruptly at a critical level of interactions between their components. In these dynamics, there is the hidden risk of instabilities. Overall, instability may have primary effects, both jeopardizing network performance and compromising an optimized use of resources. In the worst case, an instability may even cause a meltdown of a portion of the network.

This is the main problem which I proposed for study (more or less one year ago) in the EU project Univerself, as part of the Telecom Italia participation in the project. Basically, we’re looking for methods and systems able to ensure network stability through local self-adaptation of nodes and, if and when that is not sufficient, via centralized policy-based control.

This morning I was very pleased to read the interesting paper Icebergs in the Clouds: the Other Risks of Cloud Computing, addressing the risk of instabilities in the Cloud, which is essentially a metaphor for a network of computing and storage entities in which tasks and resources can be shared.

Example instability risk from unintended coupling of independently developed reactive controllers

The paper points out that complex systems can fail in many unexpected ways and outlines various simple scenarios. In the worst case, a cloud could experience a full meltdown that could seriously threaten any business that relies on it. Well, this is very much the same for future networks!
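
A toy simulation makes the “unintended coupling” scenario tangible. The sketch below (plain Python, invented numbers) models two independently developed reactive controllers, each stable when operating alone, that both react to the same utilization signal of a shared resource; their coupling turns a well-behaved loop into persistent oscillations.

```python
# Sketch: two independently tuned reactive controllers sharing one resource.
# Each converges nicely on its own, but when both react to the same signal
# the combined loop over-corrects and oscillates instead of settling.
def simulate(coupled, steps=60, gain=200.0):
    demand = 100.0
    capacity_a, capacity_b = 60.0, 60.0
    utilization_history = []
    for _ in range(steps):
        utilization = demand / (capacity_a + capacity_b)
        # controller A scales its capacity toward a 0.8 utilization target
        capacity_a = max(10.0, capacity_a + gain * (utilization - 0.8))
        if coupled:
            # controller B, developed independently, reacts to the same signal
            capacity_b = max(10.0, capacity_b + gain * (utilization - 0.8))
        utilization_history.append(round(utilization, 2))
    return utilization_history

print("single controller :", simulate(coupled=False)[-8:])   # settles near 0.8
print("coupled controllers:", simulate(coupled=True)[-8:])    # keeps oscillating
```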

A growing number of researchers are beginning to see this problem: unpredictable behavior often emerges in systems made up of “networks of networks”.

The paper concludes with the following: “We should study [these unrecognised risks] before our socioeconomic fabric becomes inextricably dependent on a convenient but potentially unstable computing model.”

Neurotransmitters for … Future Networks (1 of 2)

Wednesday, February 22nd, 2012 by Antonio Manzalini

Neurotransmitters are chemicals produced by the nervous system in order to relay a nerve impulse from one cell to another. In the brain, neurotransmitters have a central role in shaping memory, learning, mood, behaviors, sleep, pain perception, etc. Basically they operate at the junctions between neurons, allowing communication: when an impulse arrives at the end of an axon, neurotransmitters are released, diffusing across a gap to the next neuron; each neurotransmitter binds only to specific receptors on the postsynaptic membrane.

There are many types of chemicals that act as neurotransmitters. For example, serotonin plays a major role in emotions and judgment, and also sleep. Endorphins are neurotransmitters that relieve pain and induce euphoria.

Neuron interconnections and neurotransmitters

So brain self-organization is determined basically by two main phenomena: local reactions (firing) of neurons, due to the exchange of electrical signals through neurons’ interconnections, and the global influence of neurotransmitters.

Imagine taking this picture for managing future networks. Just as in Nature it makes no sense to manage or control the behavior of a single neuron, in the same way we should not expect a centralized management system to be in charge of the hundreds of billions of electronic devices, machines and smart things connected with each other and to the Internet (but, on the other hand, we dream of having the Net as well self-organized as a … brain).

Therefore, learning from Nature, let’s imagine future network nodes capable of local reactions to the context (as neurons do through their interconnections) and then a global harmonization (as neurotransmitters provide) of all these local reactions through the viral propagation of context information (a sort of reaction-diffusion process). Can you see it?
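
Here is a minimal sketch of that picture (Python with numpy; all the rules and constants are invented for illustration): fifty nodes on a ring each sense only their local load, emit a “context” signal proportional to their local mismatch, and the signal diffuses to neighbours before each node uses it to adjust its own capacity. Local reactions plus a diffusing context are enough to harmonize the whole ring.

```python
# Sketch of reaction-diffusion style harmonization: local reactions emit a
# context signal, the signal diffuses to ring neighbours, and each node
# adapts its capacity using only the context value it sees locally.
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 50
load = rng.random(n_nodes)          # the local condition each node senses
context = np.zeros(n_nodes)         # the "neurotransmitter-like" signal
capacity = np.ones(n_nodes)         # the resource each node self-adjusts

for _ in range(500):
    # reaction: each node emits context proportional to its local mismatch
    context = 0.9 * context + 0.5 * (load - capacity)
    # diffusion: the context spreads to ring neighbours (discrete Laplacian)
    context += 0.2 * (np.roll(context, 1) + np.roll(context, -1) - 2 * context)
    # local adaptation: each node reads only its own context value
    capacity += 0.1 * context

print("mean |load - capacity| after adaptation:",
      round(float(np.mean(np.abs(load - capacity))), 4))
```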

In a next post, I’ll make a proposal for a concrete proof-of-concept with today’s technologies.

Let’s look at networks with different eyes to “simplify” the future!

PageRank in the Science of Links

Thursday, February 16th, 2012 by Antonio Manzalini

Imagine a library containing billions of books, without any centralized organization and without librarians, where anyone may add a document at any time. How would you access a piece of information in a few seconds? This is what searching the WWW looks like.

Search engines, like Google, have computer programs retrieving pages from the web, indexing the words in each document, and storing this information in an efficient format. This means that, for most searches, the result will be a huge number of pages. What is needed is a means of ranking the importance of the pages so that they can be sorted. One way to determine the importance of pages would be to use a human-generated ranking, but that cannot scale to the whole web: “PageRank” instead derives the ranking automatically from the link structure itself.

“PageRank” is a link analysis algorithm used by Google that assigns a numerical weighting to each element of a hyperlinked set of documents, with the purpose of “measuring” its relative importance within the set (source: wikipedia.org).
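
At its core the algorithm is just an iterated redistribution of importance along links. A minimal power-iteration sketch (Python with numpy, on an invented four-page web, with the usual 0.85 damping factor) looks like this:

```python
# Minimal PageRank sketch via power iteration on a tiny 4-page web.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # page -> pages it links to
n, d = len(links), 0.85                          # damping factor d

# column-stochastic matrix: M[j, i] = 1/outdegree(i) if page i links to page j
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

# pages 0 and 2, which receive the most links, accumulate the largest scores
print("PageRank:", np.round(rank, 3))
```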

Amazingly, PageRank can also be used in Computational Chemistry.

Researchers at Washington State University have realized that the interactions between molecules are similar to links between Web pages, and have adapted Google’s PageRank to understand how molecules interact. The PageRank algorithm is particularly efficient and capable of looking at a massive number of Web pages at once; similarly, it has been used to quickly characterize the interactions of millions of molecules and help researchers predict how various chemicals will react with one another.

PageRank adapted for Computational Chemistry

This is a nice example of Industrial Mathematics cross-fertilization: an algorithm, invented for searching the Web, is adapted and reused in Computational Chemistry.

Any further cross-applications? Well, Chemistry (loosely speaking) studies the dynamics of atomic units, self-aggregating through attractions and bonds, in a constant flurry of motion and change. Replace the atomic units with nodes (devices, smart objects, sensors, machines, …) of future networks and think about their self-aggregation into fleeting networks in a highly dynamic environment…

Handling complexity with reflexive communications

Thursday, February 9th, 2012 by Antonio Manzalini

It has been estimated that, in less than ten years, there will be a few hundred billion electronic devices connected with each other and to the Internet. Services and data will be virally delivered through multiple devices, machines and objects interconnected by dynamically emerging networks. This raises many important techno-economic issues for Stakeholders to consider: capturing the “simplicity” behind this scenario will bring great advantages. Welcome again to the world of complexity. How can we handle it? Let me use once more the metaphor of a termites’ nest.

Have you ever thought of managing the behavior of a single termite? We know it is not possible. On the other hand, even without centralized control, we realize that a termites’ nest is a wonderful example of self-organization. Adaptation emerges from a myriad of interconnected simple behaviors.

When we imagine future networks at the edge, we see a myriad of nodes, devices, machines and smart objects interconnected through embedded communication capabilities. In principle these network entities will be simple (metaphorically, like termites) and we cannot expect to manage them individually. Networks and their properties will emerge dynamically as the result of a myriad of interactions (as in the termites’ nest).

Obviously it will still be important for Stakeholders to keep a certain level of communication and control with these self-organized networks (e.g. for meeting overall business and operational objectives). While it will be impossible to control the behavior of each single entity of the network, it will nevertheless be possible to guide the network evolution by altering the context and interacting with all the factors which contribute to shaping it (which is like altering the physical context of the termites’ nest).

This is a sort of reflexive communication. One means of handling complexity is context steering: a reflexive, decentralized steering of the context conditions of nodes, enabling self-referential internal control of each individual node (which has to be sensitive to the context).
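
As a toy sketch of context steering (plain Python; the “price” signal, the node rule and all constants are hypothetical): one hundred heterogeneous nodes each apply a simple self-referential rule to a shared context signal, and the operator steers the aggregate behavior by adjusting only that signal, never the nodes themselves.

```python
# Sketch of "context steering": the operator never commands individual nodes;
# it only alters a shared context signal (here, a notional price), and each
# node applies its own simple local rule to decide how much traffic to emit.
import random

random.seed(7)

class Node:
    def __init__(self):
        self.sensitivity = random.uniform(0.5, 1.5)   # every node is different
        self.rate = 1.0

    def react(self, context_price):
        # self-referential rule: demand falls as the context price rises
        self.rate = max(0.1, 2.0 - self.sensitivity * context_price)

nodes = [Node() for _ in range(100)]

def steer(target_load, steps=200):
    price = 1.0
    for _ in range(steps):
        for node in nodes:
            node.react(price)
        total = sum(node.rate for node in nodes)
        price += 0.01 * (total - target_load)   # the only knob the operator touches
    return total, price

for target in (80.0, 120.0):
    total, price = steer(target)
    print(f"target load {target}: achieved {total:.1f} with context price {price:.2f}")
```

The operator touches one knob, yet reaches different aggregate targets, because every node remains sensitive to the context.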

Drosophila embryo interpreting morphogens (communicating positional information to individual nuclei) with very simple physical principles. T. Gregor, D. W. Tank, E. F. Wieschaus, and W. Bialek. Probing the Limits to Positional Information, Cell (2007).

As another example (even more complex), consider the morphogenetic field, proposed by experimental embryologists to account for the self-regulative behavior of embryos: it is based on the concept of diffusion of chemical signals, or “morphogens”, which alter the cells’ context and as such steer the cells’ evolution (one of the issues most investigated by A. Turing).

Learning from Nature: networks emerge from simple rules

Monday, February 6th, 2012 by Antonio Manzalini

Have you ever read how ants build trail networks in their nests? It’s amazing. There is no planner, no centralized control, but the network emerges from self-organized feedback mechanisms: ants leave small amounts of a chemical compound, a pheromone, as they move across space. What’s more, these networks are highly efficient for searching and transporting food!

One question has always fascinated scientists: what is the algorithm that governs the way ants respond to pheromones? In the past, it has been assumed that a trail can only be reinforced if ants have a disproportionately higher probability of following a trail with higher pheromone concentration: i.e. the way an ant tends to turn towards a pheromone deposit is related in a non-linear fashion to the concentration (even if this conflicts with Weber’s Law, which relates the perceived intensity of a stimulus to its physical magnitude).

In this paper, surprisingly, the authors develop an entirely new perspective: the non-linearity does not reside in the perceptual response of the ants, but in the noise associated with their movement.

Evolution of the pattern formed by one colony over time.

This is how simply, in Nature, randomness (noise) is transformed into coherent, collective behavior.

Now, let’s imagine an application for distributing content across a network whose capacity is highly variable, according to the dynamics of traffic flows. Exploiting what we’ve learnt from the ants’ behavior would mean that an initially “planned” network could adapt autonomically through the noise associated with its traffic dynamics. As simple as that. It’s Nature.
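
For instance, here is a minimal sketch (plain Python, invented numbers) of pheromone-like adaptation between two delivery routes: requests choose a route with probability linear in its “pheromone” level, a little choice noise plays the role of the ants’ movement noise, faster deliveries reinforce more, and evaporation forgets the past. When congestion flips which route is better, the noise lets the system rediscover the good path on its own.

```python
# Sketch of pheromone-like, noise-driven path reinforcement between two routes.
import random

random.seed(3)
pheromone = [1.0, 1.0]
latency = [1.0, 3.0]        # route 0 is initially the faster one

def serve_request():
    # linear (Weber-like) response: choice probability proportional to pheromone,
    # plus a little choice noise, playing the role of the ants' movement noise
    if random.random() < 0.05:
        route = random.randrange(2)
    else:
        p0 = pheromone[0] / (pheromone[0] + pheromone[1])
        route = 0 if random.random() < p0 else 1
    for r in range(2):
        pheromone[r] *= 0.99                     # evaporation forgets stale trails
    pheromone[route] += 1.0 / latency[route]     # faster deliveries reinforce more
    return route

for t in range(6000):
    if t == 2000:
        latency[0], latency[1] = 5.0, 1.0        # congestion flips the better route
    serve_request()

share_route_1 = sum(serve_request() == 1 for _ in range(500)) / 500
print("share of traffic now using route 1:", share_route_1)
```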

Networks between Order and Chaos

Wednesday, January 25th, 2012 by Antonio Manzalini

I wish to resume from Roberto’s nice comment on the post Network Science.

Actually, in most cases, there is no network! The spontaneous, autonomous interactions taking place form a fleeting network without it having to exist in a physical sense. It is just in our perception. This is crucial when we come to think about future telecommunications networks. The CAPEX for such networks may be 0, since it is hidden in the nodes. It is the collection of autonomously interacting nodes that creates the network. The latter is an emergent property of the set.

That’s true, I see it: in most cases the network is just a creation of our mind in order to understand and explain certain complex phenomena (I mean not only when studying a cell, but also when looking at a telecommunications network).

The intricate network of microtubule (yellow) and actin filament (purple) fibers that builds a cell's structure. Credit: Torsten Wittmann, UCSF.

In 1948 Claude Shannon published his paper “A Mathematical Theory of Communication”. He argued that “the fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.”

In this paper about the Order and Chaos of networks, the authors borrow from Shannon, arguing that every process is a communication channel. Again, this is an abstraction to model a process or a system. Any node of a network is like a web of channels communicating its past to its future through its present.

Actually, the state of a system or of a node at a given moment in time can be characterized by the values of state variables (at that moment). The minimum number of independent state variables necessary to characterize a state is called the number of degrees of freedom, and it can be represented in an n-dimensional space (phase space). In a node’s phase space, a process is a series of gradual changes (a trajectory).

So, we may conclude that:

Any node is like a channel, and the nodes of a network interact with each other again through other channels.

In this sense a “network” is an abstraction of our mind: it is a web of communication channels, but it might not exist physically, being potentially embedded, hidden, in the inter- and intra-node processes.

Time to change our perception of the network?

Network Science

Monday, January 23rd, 2012 by Antonio Manzalini

In his paper “The network takeover”, Albert-László Barabási elaborates how data-based mathematical models applied to complex systems are creating a new, rapidly developing discipline: Network Science.

We will never understand the workings of a cell if we ignore the networks through which its proteins and metabolites interact.

Understanding a cell through the networks of its proteins and metabolites

I would add: we will never understand the workings of an ecosystem if we ignore the networks through which its components interact. And, despite the many differences in the nature of the nodes and the interactions, the networks behind most complex systems are governed by a set of fundamental laws. Universality is one of them.

Welcome to Network Science: the aim is to understand the characteristics of the networks that hold together the components in various complex systems.

Today, the huge amounts of data collected through sensors and smart devices are creating a new way to understand the inner behavior of many complex systems, and the networks behind them. I’m not just talking about communications: consider for example the proteomic tools that allow collecting data on how human proteins interact.

No understanding of a cell, of social media or of the Internet can ignore the fundamental laws of networks. Data-based mathematical analysis will pave the way to this understanding.

Imagine what we may exploit from understanding these networks.