Posts Tagged ‘Autonomics’

Software Networks: a “Blue Ocean” of control loops…

Thursday, October 4th, 2012 by Antonio Manzalini

Current “ossification” of IP (over optical transport) networks is creating limitations for Operators in the development and deployment of new network functionality, services, protocols, security designs, management policies and approaches…and other elements that are essential to cope with the increasing complexity of future networks.

Today, launching a new network service is complex and expensive, and this often inhibits the rapid roll-out of new revenue-earning offerings. Looking at the future, this is an important bottleneck to be overcome. Apart from this, future networks should also be able to reduce operational and capital expenditures (OPEX and CAPEX). OPEX reduction can be achieved by relieving human operators (and reducing human mistakes) through the automatic management and configuration of equipment and network functionality. CAPEX reduction can be achieved by postponing investments in network resources, i.e. by making optimized use of the resources already available: in practice this means deploying into the network equipment – and/or into the management and control systems – several “control loops” performing “constrained optimizations” such as load balancing, traffic engineering, optimized allocation and resource negotiation. This means a lot of software, as I’ve pointed out in a former post.
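To make the idea of such a control loop concrete, here is a minimal sketch in Python (the link capacities, the traffic demand and the min-max-utilization objective are all invented for illustration, not taken from any real deployment): one load-balancing step posed as a small constrained optimization that splits a demand across parallel links so that the worst-case link utilization is minimized.

```python
# Minimal sketch: one "control loop" iteration posed as a constrained optimization.
# Hypothetical numbers; splits a traffic demand across parallel links so that the
# worst-case link utilization is minimized (a tiny load-balancing/TE example).
import numpy as np
from scipy.optimize import linprog

capacities = np.array([10.0, 6.0, 4.0])   # Gbit/s, assumed values
demand = 12.0                              # Gbit/s to be carried overall

n = len(capacities)
# Decision variables: per-link rates x_1..x_n and the maximum utilization u.
c = np.zeros(n + 1)
c[-1] = 1.0                                # objective: minimize u

# x_i - capacity_i * u <= 0  (every link's utilization stays below u)
A_ub = np.hstack([np.eye(n), -capacities.reshape(-1, 1)])
b_ub = np.zeros(n)

# sum(x_i) = demand  (all traffic is carried)
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
b_eq = np.array([demand])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1), method="highs")
rates, max_util = res.x[:n], res.x[-1]
print("per-link rates:", rates.round(2), " max utilization:", round(max_util, 3))
```

In this toy case the optimum spreads the 12 Gbit/s in proportion to capacity, so every link sits at 60% utilization; a real traffic-engineering loop would of course re-solve this kind of problem continuously as demands change.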

Well, in a few words, future networks (which will include Users’ devices and “Things”) will look like ecosystems of pieces of software interacting with each other, and – just like in any ecosystem – self-organization will be the result of sets of “constrained optimizations” and dynamic games.

One may ask whether the emerging Software Defined Network (SDN) paradigm can be seen as a step in this direction. Yes, it is likely to be, even if – in my opinion – the major disruptiveness of SDN will be played out at (and beyond) the edge. This is where autonomics and self-organization have to play a major role: in fact, traditional management is failing due to pervasiveness and complexity.

Concerning management, have a look at this recent post (from a well-known Expert in network management, whom I know personally) addressing how current SDN proposals are not (fully) taking network management functions into account, specifically the relationships between the logically centralized control plane and the management plane. Interestingly, the Author argues that the control plane (even if very advanced) cannot replace the management plane. What I like here is the problem formulation: what we’ll have to face in future software networks is the orchestration of multiple controllers! Future networks will have multiple control loops that must be situated so as to accommodate multiple data formats, in multiple languages, at multiple levels of abstraction.

Imagine the edge: a sheer number of nodes, devices and “things” will interact with each other, competing, cooperating and negotiating for resources.

Sailing towards “blue oceans”?

I believe that, in contrast to today, where competition exists only at the application level, future networks at the edge will open new business dimensions (a blue ocean): negotiations, incentives, cooperation and competition will boost the long-term value of the network architectures. As in ecosystems, where evolution selects the winning species, winning services will succeed, grow and promote further investments, while losing ideas will fade away.

On the other hand, introducing sets of control loops into the network (both fixed and mobile – think about SON) might potentially bring inconsistencies and non-linearities into the network behavior (e.g. due to unwanted couplings or interactions, or to missing coordination), thus creating instabilities and abrupt phase transitions, which can rapidly propagate.

This is a risk that every ecosystem runs: single species may have feedback mechanisms that would ensure their population’s stability were they alone, but, when put together, global transitions into instability regions may occur as the number and strength of interactions among species increase.
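A toy numerical illustration of this point, with invented gains (it is not a model of any real SON feature): two linearized control loops that are each stable in isolation can become unstable once a cross-coupling is introduced, which shows up as the spectral radius of the coupled system moving above one.

```python
# Sketch: two discrete-time control loops, each stable alone, destabilized by
# an unwanted cross-coupling. All gains are invented for illustration only.
import numpy as np

a1, a2 = 0.9, 0.8        # each loop alone: |a| < 1  => stable
k = 0.5                  # strength of the unwanted cross-coupling

A_isolated = np.diag([a1, a2])
A_coupled = np.array([[a1,  k],
                      [ k, a2]])

for name, A in [("isolated", A_isolated), ("coupled", A_coupled)]:
    rho = max(abs(np.linalg.eigvals(A)))
    print(f"{name:9s} spectral radius = {rho:.3f} ->",
          "stable" if rho < 1 else "UNSTABLE")
```

With these numbers the coupled system has spectral radius above one, so small disturbances grow instead of dying out: exactly the kind of abrupt transition that missing coordination between loops can produce.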

The real challenge I see is designing the local rules of nodes and devices, and the control/management planes (i.e. creating an ocean of control loops beyond the edge), in a way that enables thriving and stable ecosystems.

Looking for Ultra Dense Networks emerging, morphing and disappearing

Monday, May 28th, 2012 by Antonio Manzalini

“Networks are everywhere”. Networks can also be seen from a different perspective than bunches of nodes and links: simply imagine a network as an emergent property of an ecosystem of communicating entities exchanging data. Indeed, we see networks emerging through the electrochemical exchanges in ensembles of neurons, in the interactions of living cells, in the communications supporting the self-organization of an ants’ nest, in the information exchanges of social networks…like Facebook.

Imagine ensembles of simple entities (e.g. Consumers’ electronic devices, sensors, actuators, smart things, etc.) embedding communication capabilities (e.g. radio/wireless connectivity); these entities have a simple logic coded through autonomic rules, making them capable of self-configuration and self-adaptation to dynamically changing conditions (e.g. like a Mac in your home). Now imagine multiple dynamic interactions between these simple entities letting networks of networks emerge, morph and disappear at the edge (of today’s infrastructures). Data and information will be propagated virally. As Roberto mentioned, the concept of a traditional network infrastructure will fade away, replaced by the concept of a communication fabric (with different space-time scales).

Understanding the implications of this network evolution is rather challenging. Traditional information theory based on Claude Shannon’s seminal work (1948) will no longer be applicable: for example, new definitions of information embodied in spatial structures (e.g. a 3D ensemble of neurons), and new metrics, will probably be necessary to understand its creation, exchange (e.g. transmission) and evolution (e.g. processing, degeneration and regeneration).

The analysis of these challenges will (probably) be best situated at the intersection of non-linear dynamics and statistical thermodynamics, an ideal place where, hopefully, multi-disciplinary approaches (e.g. from biology, physics, mathematics and neuroscience) will converge in the future. We should be there as well.

What is sure is that this network transformation will profoundly impact our lives: it will change our perceptions of, and interactions with, the environment, paving the way to new socio-economic models and businesses. Not only that: a better understanding of these ultra-dense networks will allow us to revolutionize genomics, proteomics and medicine.

“Self-Governance is possible”: an interplay between Complexity and Stability

Thursday, April 19th, 2012 by Antonio Manzalini

“The Tragedy of the Commons” was first published by Garrett Hardin in 1968: the theory concerns a dilemma in which multiple individuals, acting independently according to their self-interest, ultimately destroy a shared, limited resource (the commons), even though it is clearly in no one’s interest for this to happen. However, when economists started looking at biological ecosystems (humans apart), they discovered that these work very well.

So let’s focus for a while on biological ecosystems.

Ecosystems: a delicate interplay between Complexity and Stability

When talking about ecosystems, we know that an assembly of a certain number of species, each of which has feedback mechanisms that would ensure its population’s stability were it alone, can show a sharp transition from overall stability to instability as the number and strength of interactions among species increase. In fact, this is a systemic behavior of complex systems, of which biological ecosystems are one category.
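A rough numerical sketch of this transition, in the spirit of Robert May's classic random-matrix argument (species count, connectance and interaction strengths below are invented for illustration): each species is self-damping on its own, yet the fraction of randomly assembled communities that remain stable collapses as interactions get stronger.

```python
# Sketch in the spirit of May (1972): random community matrices with a
# self-damping diagonal and random off-diagonal interactions; the fraction of
# stable communities drops as interaction strength (or density) grows.
import numpy as np

rng = np.random.default_rng(0)

def fraction_stable(n_species, connectance, strength, trials=200):
    stable = 0
    for _ in range(trials):
        A = -np.eye(n_species)                           # each species stable alone
        mask = rng.random((n_species, n_species)) < connectance
        np.fill_diagonal(mask, False)
        A[mask] = rng.normal(0.0, strength, mask.sum())  # random interactions
        if np.real(np.linalg.eigvals(A)).max() < 0:      # all modes damped
            stable += 1
    return stable / trials

for strength in (0.1, 0.3, 0.5):
    print(f"interaction strength {strength}: "
          f"fraction stable = {fraction_stable(30, 0.3, strength):.2f}")
```

With 30 species and 30% connectance the transition sits around an interaction strength of roughly one third, so the three values printed span the "almost always stable", "borderline" and "almost never stable" regimes.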

Well, this is the same kind of systemic risk seen in the recent financial crisis. In the paper “Systemic Risk in Banking Ecosystems”, the authors draw analogies with the dynamics of ecological food webs and viral networks to explore the interplay between complexity and stability in simplified models of financial networks. They conclude the paper with some lessons for minimizing financial network meltdowns. First, more effort is needed to assess the system-wide characteristics of the financial network (e.g. in risk management models). Another important aspect is the need for modular configurations that prevent instability contagion from infecting the whole network: modularity limits the potential for cascades.
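Here is a minimal toy sketch of the modularity point (an invented contagion model, not the one used in the paper): the same twenty nodes with the same number of links, arranged either as two modules joined by a single bridge or as one well-mixed graph; on average, a shock started in one node stays more confined in the modular case.

```python
# Toy contagion sketch (invented parameters): a failed node knocks out each
# neighbour independently with probability p. Two modules joined by a single
# bridge confine the cascade better than a well-mixed graph of the same size.
import random
random.seed(1)

def cascade_size(adj, seed, p=0.4):
    failed, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()
        for nb in adj[node]:
            if nb not in failed and random.random() < p:
                failed.add(nb)
                frontier.append(nb)
    return len(failed)

def from_edges(n, edges):
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

clique = lambda members: {(a, b) for a in members for b in members if a < b}

# modular: two cliques of 10 nodes, connected by a single bridge edge (0, 10)
mod_edges = clique(range(10)) | clique(range(10, 20)) | {(0, 10)}
modular = from_edges(20, mod_edges)

# well-mixed: same 20 nodes and the same number of edges, placed at random
all_pairs = [(a, b) for a in range(20) for b in range(a + 1, 20)]
mixed = from_edges(20, random.sample(all_pairs, len(mod_edges)))

for name, adj in [("modular", modular), ("well-mixed", mixed)]:
    sizes = [cascade_size(adj, 0) for _ in range(2000)]
    print(f"{name:10s} mean cascade size: {sum(sizes)/len(sizes):4.1f} / 20")
```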

Now the third piece of the puzzle. Prof. Elinor Ostrom (Indiana University) was awarded the 2009 Nobel Prize in Economic Sciences (shared with Oliver E. Williamson) for her analysis of how communities manage ecosystem resources to their advantage. Prof. Ostrom and her collaborators have argued that consensual, self-generated governance can limit the use of resources to sustainable levels, maintaining ecosystems in equilibrium. To learn more, have a look at this paper about the Polycentric Governance of Complex Economic Systems. Self-governance, in ecosystems that include humans, is therefore possible.

Well, we’ve elaborated several times on the vision of future networks as the communication fabric of ecosystems, where different Players compete and cooperate. Let’s apply the above reasoning to this. The (Nobel) conclusion is that future networks’ self-governance is possible, provided we find and apply the autonomic rules governing the delicate interplay between complexity and stability. And these rules may have far-reaching implications and impacts from a socio-economic viewpoint.

In this, Nature is ahead of us.

Collective Intelligence of (Enactive) Networks (part two)

Tuesday, September 13th, 2011 by Antonio Manzalini

In the last post we elaborated on the evolution of networks (at the edge) into a large nonlinear complex fabric dynamically interconnecting a huge number of communicating entities (nodes, machines, Users’ devices, sensors-actuators, etc.).

One way to interpret such a large dynamical fabric is that its input-output maps depend on states of states, with nonlinear relations whose control is highly complicated. We’ve also argued that traditional management and control techniques should be supplemented with a variety of novel adaptive control solutions. And Nature can help us with this.

We’ve seen how enaction theory (by F. Varela) models the behavior of living systems (e.g. termites) in terms of three control loops: the amazing collective intelligence of a termite colony appears to be the result of the interactions of these three control loops. The networks of the nest behave like a dynamically changing complex system around stability points. But this is exactly what we’re looking for in future networks. So, in principle, we may think of designing the self-* control mechanisms of the entities (nodes, devices, smart things, etc. embedding communication capabilities) that will populate future networks in terms of the three enaction control loops.

While the first two control loops are quite simple to understand – they are the two control loops used in today’s autopilots, the former in charge of pre-defined automatic actions, the latter in charge of learning and adaptation during unexpected situations – the third one, concerning the structural coupling of each node with the overall network, is more complicated (see Roberto’s comment).

F. Varela argued that the connections between the termites’ micro cognitive domains happen through a sort of overall structural coupling with the environment (the colony), using the so-called “network field”. This is a sort of common space with gradients of chemical and electromagnetic fields, intertwined with tactile and metabolic information, capable of triggering (when certain thresholds are crossed) collective reactions, thus integrating the micro cognitive worlds of the living entities for their common survival.

Applying this metaphor to future networks would mean thinking about this third control loop as a dynamic game of controllers. In the usual formulation of game theory an equilibrium state can arise: loosely speaking, this equilibrium state is in some sense analogous to thermal equilibrium and reflects the static nature of the game itself. If the game is instead allowed to be dynamic, with rules that can change according to the states of the controllers, then there can also be dynamic equilibria, analogous to non-equilibrium steady states. Such a game can again be described using dynamical systems theory. Under learning, chaotic dynamics can arise, and the game may fail to converge to a Nash equilibrium (the following paper highlights this). Understanding these dynamics is essential.

http://www.santafe.edu/media/workingpapers/01-09-049.pdf
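To make the non-convergence point concrete, here is a minimal sketch (standard replicator dynamics on rock-paper-scissors, not the specific learning model studied in the paper): starting away from the mixed Nash equilibrium, the strategy mix keeps cycling instead of settling down.

```python
# Sketch: replicator dynamics on rock-paper-scissors. The mixed Nash equilibrium
# is (1/3, 1/3, 1/3), but from any other starting point the dynamics orbit
# around it instead of converging: learning fails to settle.
import numpy as np

A = np.array([[ 0, -1,  1],    # payoff of rock vs (rock, paper, scissors)
              [ 1,  0, -1],    # paper
              [-1,  1,  0]])   # scissors

x = np.array([0.6, 0.3, 0.1])  # initial strategy mix (arbitrary)
dt, steps = 0.01, 20_000

for t in range(steps):
    fitness = A @ x
    x = x + dt * x * (fitness - x @ fitness)   # replicator update (Euler step)
    x = np.clip(x, 1e-12, None)
    x /= x.sum()                               # stay on the simplex numerically
    if t % 5_000 == 0:
        print(f"t = {t * dt:6.0f}   mix = {np.round(x, 3)}")

print("final mix:", np.round(x, 3), "(still far from (1/3, 1/3, 1/3))")
```

The printed snapshots show the mix rotating among the three strategies rather than approaching the equilibrium point, which is the simplest flavour of the richer (and possibly chaotic) learning dynamics discussed in the paper.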

Let me give a more concrete example. Currently available controllers for resource allocation (congestion control mechanisms like TCP, for example, are large distributed control loops) drive the state towards a desired equilibrium point, and they do not take into account the transient behaviors typical of closed-loop systems.

Have a look at this brilliant paper: http://www.statslab.cam.ac.uk/~frank/PAPERS/PRINCETON/pcm0052.pdf

The Internet’s TCP implicitly maximizes a sum of utilities over all the connections present in a network, each connection contributing a utility function of its own rate (by the way, in economics there is a similar concept of a utility function, basically describing how much satisfaction a person receives from a good or service). But transient behaviors are not taken into account, so even if we have a globally asymptotically stable equilibrium point (corresponding to the maximization of the aggregate utility), it is not clear how the network operates during the transients (instabilities or even phase transitions may occur).
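For intuition, here is a minimal sketch of this kind of distributed control loop (a primal-style rate update with logarithmic utilities on a tiny invented two-link topology; it is inspired by the framework discussed in the paper but is not its exact formulation): each source adjusts its rate using only the congestion signals of the links it crosses, and the rates settle close to the proportionally fair optimum of the aggregate utility.

```python
# Sketch of a primal-style congestion controller on an invented two-link
# topology: flow 0 uses link 0, flow 1 uses link 1, flow 2 uses both links.
# Each flow adjusts its rate from the prices of the links on its own route;
# log utilities drive the rates towards the proportionally fair allocation.
import numpy as np

R = np.array([[1, 0],     # routing matrix: R[s, l] = 1 if flow s crosses link l
              [0, 1],
              [1, 1]])
capacity = np.array([1.0, 1.0])
w = np.ones(3)            # utility weights: U_s(x) = w_s * log(x)
x = np.full(3, 0.1)       # initial rates
k, beta, dt = 0.1, 50.0, 0.01

for _ in range(20_000):
    load = R.T @ x                                   # traffic on each link
    price = beta * np.maximum(0.0, load - capacity)  # congestion penalty per link
    x += dt * k * (w - x * (R @ price))              # distributed rate update
    x = np.maximum(x, 1e-6)

print("rates:", x.round(3),
      " (proportionally fair target ~ [0.667, 0.667, 0.333])")
```

The equilibrium of this loop is the static optimum the blog text refers to; what it deliberately does not tell us is how the rates behave on the way there, which is exactly the transient question raised above.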

Interestingly, it has been demonstrated (at Caltech) that, in order to take into account the real-time performance of the network, the congestion control problem should be solved by maximizing a proper utility functional, as opposed to a utility function. It is known that developing a trajectory in a state space, or determining other properties of a dynamical system, requires dealing with functional equations, which are often quite unpopular, as they are hard to handle. At least up to yesterday: today there are novel neural network frameworks capable of solving functional equations (I’ll elaborate on this in a next post). Adaptive control based on maximizing a proper utility functional enables the nodes to continuously learn the network field and to adapt to changing conditions.

The three control loops of an enactive node of future networks

My conclusion is that one way to design the so-called “third control loop”, capable of coupling nodes with the overall network dynamics (think of F. Varela’s network field), is to formulate it as a set of utility functionals (not just functions) to be maximized by the nodes.

Collective Intelligence of (Enactive) Networks (part one)

Thursday, September 8th, 2011 by Antonio Manzalini

I’ve been very much fascinated by the holistic vision of Roberto’s post (of 16th August). Future networks will hook up such a large number of highly dynamic nodes and devices that the number of connections will require going beyond today’s architectural paradigm. The emergence of such a ubiquitous self-adaptive connectivity space will also dramatically change the traditional concepts of network architecture. For example, at the edges we expect to see the emergence of dynamic games of sub-networks (belonging to the same, or different, Operators), supporting any sort of service by using local processing and storage resources. So future networks will look like large distributed complex systems, and as such will be characterized by dynamic, even chaotic, behavior. This vision will exacerbate problems (which are still open today) impacting the scalability and stability of end-to-end connectivity, such as the dynamic assignment of addresses, dynamic routing, and new strategies for admission control and congestion control of flows.

It will no longer be possible to adopt traditional management and control (declared objectives and observed behavior) for future networks. Dynamic or static modeling for (open- or closed-loop) control will become very complicated and unstable if not supplemented with a variety of novel control techniques, including (nonlinear) dynamical systems, computational intelligence, intelligent control (adaptive control, learning models, neural networks, fuzzy systems, evolutionary and genetic algorithms), and artificial intelligence.

Out of this “chaos” of interactions, a collective network intelligence will emerge. This reminds me of the collective intelligence emerging in a termite nest. That is to say, this future network fabric will behave like a dynamically changing complex system (around stability attractors) where (autonomic) nodes will look like simple living systems (e.g. termites).

I wish to elaborate a little on the role of autonomics in this future self-adaptive network fabric, taking direct inspiration from the modeling of the adaptive cognitive capabilities of simple living systems. It helps to recall the theory of “enactive” behavior of living systems by F. Varela (see “The Embodied Mind: Cognitive Science and Human Experience”, Cambridge, MA: MIT Press).

This theory (which is also considered in Artificial Intelligence) argues that the adaptive behavior of simple living systems is based on two interrelated points: 1) perception consisting of perceptually guided action and 2) cognitive structures, emerging from recurrent sensorimotor patterns, enabling action to be perceptually guided. In particular, it is argued that simple living systems cross several diverse cognitive domains (or micro-worlds), which are generated from their interactions with the external environment: within a micro-world, behavior is simply determined by pre-defined sensorimotor loops, simple and fast; from time to time breakdowns occur, i.e. unexpected disruptive situations that create the need to move from one cognitive domain (i.e. micro-world) to another. Importantly, this bridging (during breakdowns) is assured by the “intelligence” of the nervous system (allowing a new adaptation and the consequent learning of new sensorimotor loops). This behavior has been successfully exploited by Nature with three control loops (which I’ve tried to represent in the following picture).

The three control loops of the enactive behavior
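As a thought experiment on how the first two loops might look inside an autonomic node, here is a deliberately simplified Python sketch (thresholds, the load model and the actions are all invented; it is neither Varela’s model nor a real autopilot): a fast reactive loop applies a pre-defined rule while conditions stay within the known micro-world, and a slower adaptive loop kicks in on a “breakdown”, re-tuning the rule before handing control back.

```python
# Sketch of the first two "enaction" loops for an autonomic node (invented logic):
#  - loop 1: fast, pre-defined sensorimotor rule (threshold-based load shedding)
#  - loop 2: slower adaptation, triggered by a "breakdown" (the rule keeps
#            leaving the node overloaded), which re-tunes the rule.
import random
random.seed(0)

class EnactiveNode:
    def __init__(self):
        self.threshold = 0.95      # pre-defined rule: shed load above this level
        self.overload_count = 0    # breakdown detector state

    def reactive_loop(self, demand):
        """Loop 1: immediate, pre-defined action inside the current micro-world."""
        action = "shed" if demand > self.threshold else "forward"
        load = demand * 0.6 if action == "shed" else demand  # effect of the action
        return action, load

    def breakdown(self, load):
        """Breakdown: the pre-defined rule repeatedly leaves the node overloaded."""
        if load > 0.85:
            self.overload_count += 1
        return self.overload_count >= 3

    def adaptive_loop(self):
        """Loop 2: learn a new sensorimotor pattern (re-tune the reactive rule)."""
        self.threshold = max(0.5, self.threshold - 0.1)
        self.overload_count = 0

node = EnactiveNode()
for step in range(40):
    demand = random.uniform(0.6, 1.0)        # sensed condition (synthetic)
    action, load = node.reactive_loop(demand)
    if node.breakdown(load):
        node.adaptive_loop()
        print(f"step {step}: breakdown -> shed threshold lowered to {node.threshold:.2f}")

print("final shed threshold:", node.threshold)
```

The third loop, the structural coupling of the node with the overall network, is the difficult one, and it is the subject of the "part two" post above.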

I’ve been surprised to discover that these same principles (adaptive control based on three control loops, supported by learning models and neural networks) are being adopted for designing airplane autopilots, which are, by the way, complex systems.

In the next post I’ll elaborate on this, and on how enaction could be exploited in autonomic nodes and devices.

My take is that enactive networks are just around the corner. Let’s be ready.

ParaSail and beyond

Sunday, August 14th, 2011 by Roberto Saracco

Dual core, multicore, massively distributed processing… just a few words to say that technology has progressed to offer us multiple processors working on a single task. By multiplying the processors you increase the overall computational power without an exponential increase in the energy required, as would be the case if you were to speed up a single processor to match the same performance.

ParaSail ... a new programming language for parallel processors

The challenge with this approach is being able to write code, a program, that takes effective advantage of the parallel processing: no small feat at all!

Now a new programming language is available to ease the life of programmers: ParaSail, the Parallel Specification and Implementation Language.

The language has been designed by Tucker Taft, CTO of SofCheck, a software company based in Boston, to overcome the problems programmers run into when dealing with multicore. There is a trade-off when using these chips: you can stick to conservative programming, but then you are not exploiting the parallelism offered by the multiple cores; or you can parallelize everything, but risk creating out-of-sequence operations that lead to errors.

Current dual-core chips can, on average, increase processing speed by 20-30% depending on the task at hand; they do not double it. ParaSail uses a pico-threading approach, (automatically) dividing the program into as many elemental operations as possible and threading each one onto a different core, unless the programmer blocks parts into sequences. In other words, it assumes that everything can be parallelized unless it is told otherwise.
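ParaSail code itself is not shown here, but the “parallel unless told otherwise” idea can be mimicked, much more coarsely, in other languages; below is a rough Python sketch (illustrative only, nothing like ParaSail’s compiler-driven pico-threads) in which independent pieces of work are fanned out across the available cores by default, while an explicitly marked sequence is kept in order.

```python
# Rough illustration of "parallel by default, sequential only where marked".
# This mimics the idea at a much coarser grain than ParaSail's pico-threads:
# independent work items are spread over the available cores, while a block
# the programmer marks as a sequence is executed strictly in order.
from concurrent.futures import ProcessPoolExecutor

def heavy(n):
    """Stand-in for an elemental operation (a dummy CPU-bound sum)."""
    return sum(i * i for i in range(n))

def run_parallel(tasks):
    """Default: every independent task may run on a different core."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(heavy, tasks))

def run_sequence(tasks):
    """Explicitly marked sequence: order (and data dependence) preserved."""
    results, carry = [], 0
    for n in tasks:
        carry = heavy(n + carry % 7)   # each step depends on the previous one
        results.append(carry)
    return results

if __name__ == "__main__":
    independent = [200_000, 300_000, 250_000, 150_000]
    print("parallel results:  ", run_parallel(independent))
    print("sequential results:", run_sequence(independent))
```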

The compiler should be released this September, and it has already created interest both in the programmer community and at Intel.

In this decade we will see the number of cores reaching the hundreds. One of the reasons this has not happened so far is that, with present programming languages, the more cores you have the less efficiently they are used, so it does not make sense to increase their number. If ParaSail meets its promises, the situation will change.

What interests me most, however, is the fact that the concept of multicore can be “stretched” to include massively distributed processing. With UBB (UltraBroadBand) and the low latency provided by end-to-end optical connectivity, it may start to make sense to micro-thread (not pico-thread…) computation onto geographically dispersed computers (SETI on steroids…).

Pervasive Computing and Networking, I am convinced, will deeply change our view of computation. Ambients will become aware and responsive, and this requires distributed computation based both on signal exchange and on edge sensing, just the way it happens in our body, where cells respond to nervous signals and to the chemical environment surrounding them.

I also see a parallel, and support for this vision, in the evolution of networks, where the ones provided by the Operators can be likened to the nervous system, and the ones being created bottom-up at the edges (viral networks, sensor networks, ambient communications) can be likened to the local chemical soup conditioning the reaction of the cells.

There is plenty of new research needed in this area, which lies at the crossing point of biology, electronics, cybernetics, autonomics and semantics.

Self-management and survivability in harsh operational environments

Tuesday, July 19th, 2011 by Antonio Manzalini

NASA is currently exploring autonomous and autonomic system concepts aimed at enhancing the self-management and survivability of future space missions in harsh operational environments. There are several publications about this research avenue and the related applications: for instance, the NASA ANTS missions are targeted by this research.

Recently I read this paper describing ASSL (the Autonomic System Specification Language), a framework for formally specifying and generating autonomic systems.

ASSL seems to allow modelling the autonomic properties of systems of systems through the specification of self-managing policies and service-level objectives.
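I won’t attempt ASSL’s actual syntax here, but the flavour of a self-managing policy bound to a service-level objective can be sketched in ordinary Python (the SLO value, the latency metric and the scale-out action below are all invented for illustration): a monitor-analyse-act loop that reconfigures a resource whenever the objective is violated.

```python
# Illustrative sketch only (not ASSL syntax): a self-managing policy bound to a
# service-level objective. The SLO, the metric and the action are invented.
import random
random.seed(42)

SLO_MAX_LATENCY_MS = 100.0          # service-level objective

class SelfManagingService:
    def __init__(self):
        self.replicas = 1

    def observed_latency(self):
        """Monitor: synthetic latency that improves as replicas are added."""
        return random.gauss(mu=180.0 / self.replicas, sigma=10.0)

    def policy_step(self):
        """Analyse + act: the self-managing policy enforcing the SLO."""
        latency = self.observed_latency()
        if latency > SLO_MAX_LATENCY_MS:
            self.replicas += 1                      # self-configuration action
            verdict = f"SLO violated -> scale out to {self.replicas} replicas"
        else:
            verdict = "SLO met"
        print(f"latency {latency:6.1f} ms | {verdict}")

service = SelfManagingService()
for _ in range(6):
    service.policy_step()
```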

Is this area of research so far away from designing autonomous and autonomic systems of systems in future networks? I see striking similarities, specifically when looking at the network edges, where in the future we’ll see swarms of autonomous and autonomic nodes supporting any sort of service by using local processing and storage resources.

Are Bigger Network Nodes Better?

Friday, March 4th, 2011 by Antonio Manzalini

Brain volume (or mass) varies widely across animals, ranging from a whale’s brain (up to 9 kg with over 200 billion neurons) to the human brain (1.4 kg with about 85 billion neurons) to a bee’s brain (1 mm³ of volume for about a million neurons). Nevertheless, brain size might be less related to cognitive capabilities than is generally assumed.

A bee, for example, can visit about one hundred flowers in a day: the combinations of colours, shapes and odours of the flowers are associated with rewards (nectar). The association process is dynamical, and the stored information is updated by a wonderful and simple neural machinery capable of learning fast (faster than the human brain) and reliably. Amazingly, 1 mm³ of brain can control about 60 hard-wired behaviour patterns.
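Just to give a flavour of how simple such associative machinery can be, here is a textbook delta-rule toy in Python (the features, the reward rule and the learning rate are invented; this is not a model of the bee brain): a handful of weights linking flower features to an expected reward, updated after every visit, quickly picks up which cue predicts nectar.

```python
# Toy associative learner (delta rule): weights link flower features (colour,
# shape, odour) to an expected nectar reward and are updated after each visit.
# All numbers are invented; this is an illustration, not a bee-brain model.
import random
random.seed(3)

features = ["blue", "yellow", "round", "star", "sweet", "musky"]
weights = {f: 0.0 for f in features}
lr = 0.2                                      # learning rate

def flower():
    """A random flower: one colour, one shape, one odour, and its true reward."""
    f = [random.choice(["blue", "yellow"]),
         random.choice(["round", "star"]),
         random.choice(["sweet", "musky"])]
    reward = 1.0 if "blue" in f else 0.0      # hidden rule: blue flowers pay off
    return f, reward

for visit in range(100):                      # roughly one foraging day
    f, reward = flower()
    prediction = sum(weights[x] for x in f)
    error = reward - prediction               # surprise drives the update
    for x in f:
        weights[x] += lr * error

print({k: round(v, 2) for k, v in weights.items()})
```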

Please have a look at this nice paper.

Lars Chittka and Jeremy Niven “Are Bigger Brains Better?”

Current Biology 19, R995–R1008, November 17, 2009

http://www.cogs.indiana.edu/spackled/2010readings/Chittka_Big%20brains_2009.pdf

Larger brains, in summary, are mostly a consequence of larger neurons, which are necessary in bigger animals because of biophysical constraints. Normally, they also contain greater replication of basic neuronal circuits, adding precision to sensory activities, more parallel processing and greater storage capacity. “Bigger sense organs necessitate larger amounts of neural tissue to evaluate the information, providing more sensitivity and detail, but not necessarily higher intelligence”.

Modularity and interconnectivity are likely to be more important.

This is how I like to see future networks: swarms of lightweight modular autonomic nodes highly interconnected.