Posts Tagged ‘Collective Intelligence’

Let’s trade the networks!

Monday, September 9th, 2013 by Antonio Manzalini

Group buying is becoming more and more popular. It involves a large number of buyers who band together to purchase large quantities of products or services, so they can get them (often directly from Producers) at significantly reduced prices. It is also known as collective buying (in Italian, Gruppi di Acquisto Solidale).

Group Buying

They say that the diffusion of group buying is simply a consequence of the crisis. That may well be true, but I would add that it is also a very concrete sign that the economy is changing, thanks to technology progress, towards new and more sustainable models emerging from the bottom up. Plenty of web sites offer group buying services combined with social networking tools. This is one way the Internet, in the hands of millions of people, can radically change the economy. So I prefer to see this phenomenon as collective intelligence applied to buying products and services, a collective intelligence enabled by Web 2.0.

In one sentence: it’s the collective intelligence (of millions of Users) using the collective information (emerging from the Internet).

Group buying has already spread beyond products to many services, for example energy services that let the consumers who join such communities save on their electricity bills. For Producers and Providers, it is often seen as a way to recover lost margins through extra volumes. Now imagine, tomorrow, large buying groups negotiating with Network and Service Providers for the purchase of virtual networks, functions and services.

I believe that technology advances (e.g. standard hardware performance, embedded communications, device miniaturization, etc.) and cost reductions are enabling this “economy of collective intelligence”: an incredible amount of processing, storage and communications-networking capability is accumulating at the edge of traditional networks, i.e. very close to the end Users. The edge is likely to become like a Data Centre fabric! As a matter of fact, models such as SDN and NFV are steps towards networks where all L3-L7 functions and services will be developed in software, virtualized and eventually decoupled from the underlying hardware.

This is a coming change of paradigm, and I think it’s just a matter of “when”, not “if”: it will happen when software network performance is good enough (perhaps according to new QoS models in the upper layers) to meet Users’ needs and application performance requirements. This “softwarization” is lowering the entry threshold for many other Providers, thus increasing competition and bringing the huge impact of “collective intelligence” into the Telco-ICT ecosystems as well. Eventually, in contrast to today, incentives, cooperation and competition will boost the long-term value of the network: just as in natural ecosystems, evolution selects the winning species while the losing ones fade away. It’s collective intelligence optimizing the economy of the ecosystem.

I’m sure that Operators and Providers capable of riding this industrial r-evolution (for example, by exploiting these “virtual fabrics” at the edge, operated with very lightweight management processes and capable of supporting fast, dynamic transactions) will reap huge business benefits.

We are presenting this vision (the Manifesto of Edge ICT Fabric), with a number of Partners, at the ICIN 2013 conference.

Does cooperation require any intelligence?

Thursday, April 11th, 2013 by Roberto Saracco

It might seem an idle question: of course, in order to cooperate, people or things require intelligence of some sort. Look at swarms or flocks: hundreds (sometimes hundreds of thousands) of animals are able to coordinate their flight and aim at a given spot, indicated (?) by some sort of sentinel or guide.

Well, actually scientists have discovered that there is no such thing as a guide, nor any communication across the group. Quite simply, there is no single intelligence at all: the intelligent behaviour is something that emerges from the group, and it is perceived by an external observer, not by any single entity within the group.

It might seem unbelievable, but that’s what it is!

A robot developed at Sheffield. Hundreds of them create an intelligent global behaviour

The trick is based on very simple rules upon which each member of the community bases its behaviour. Together, these micro-behaviours result in an emergent behaviour that appears to be the work of an intelligent -and sentient- being. This does not just happen with flocks and swarms; it is happening right now in your body. Individual cells are busy doing things (making proteins, releasing fluids, opening and closing membranes to sodium and potassium ions…) and there you are, reading this post and thinking, intelligently, about it.
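To get a feeling for how little is needed, here is a minimal sketch (my own toy code, not the Sheffield robots’ software) of the three classic flocking rules: separation, alignment and cohesion. Each agent sees only its nearby peers; there is no leader and no global communication.

```python
# A minimal sketch (my own toy code, not the Sheffield robots' software) of
# three local rules -- separation, alignment, cohesion -- producing a flock
# with no leader and no global communication.
import numpy as np

N = 100
pos = np.random.rand(N, 2) * 10.0     # random starting positions
vel = np.random.randn(N, 2)           # random starting headings

def step(pos, vel, r=1.5, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d > 0) & (d < r)               # each agent sees only nearby peers
        if not near.any():
            continue
        cohesion   = pos[near].mean(axis=0) - pos[i]   # steer toward local centre
        alignment  = vel[near].mean(axis=0) - vel[i]   # match neighbours' heading
        separation = (pos[i] - pos[near]).sum(axis=0)  # don't crowd neighbours
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel /= np.clip(speed, 1e-9, None)              # keep unit speed
    return pos + new_vel * dt, new_vel

for _ in range(500):
    pos, vel = step(pos, vel)
# After a few hundred steps the agents move as one coherent flock, although
# no rule mentions "flock" and no agent ever sees the whole group.
```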

Well, in a way, this is what scientists at the University of Sheffield and Sheffield Hallam University are now trying to replicate to create intelligent behaviour. Rather than setting up an intelligence to control what should be done, they are studying what set of very simple rules can be assigned to nano-robots so that, once there are many of them, the result is an intelligent behaviour.

Their idea of the future is one where there will be thousands of these nano-robots in the home, at the office, in hospitals and so on. Each one very cheap to manufacture and able to perform simple tasks, yet able to cluster into swarms that can carry out much more complex assignments without having been “programmed” for them. Their global behaviour will be an emergent property of the set.

I feel this is really the way to the future, mimicking billions of years of evolution in natural ecosystems. This will apply to the future of networks too, and will bring us to the age of “semantic networks”. More on this to follow…

Collective Intelligence of (Enactive) Networks (part two)

Tuesday, September 13th, 2011 by Antonio Manzalini

In the last post we elaborated on the evolution of networks (at the edge) towards a large nonlinear complex fabric dynamically interconnecting a huge number of communicating entities (nodes, machines, Users’ devices, sensors-actuators, etc.).

One way to interpret such a large dynamical fabric is that its input-output maps depend on states of states, with nonlinear relations whose control is highly complicated. We also argued that traditional management and control techniques should be supplemented with a variety of novel adaptive control solutions. Nature can help us here.

We have seen how enaction theory (by F. Varela) models the behavior of living systems (e.g. termites) in terms of three control loops: the amazing collective intelligence of a termite colony appears to be the result of the interactions of these three loops. The networks of the nest behave like a dynamically changing complex system around stability points. And this is just what we’re looking for in future networks. So, in principle, we may think of designing the self-* control mechanisms of the entities that will populate future networks (nodes, devices, smart things embedding communication capabilities) in terms of the three enaction control loops.

While the first two control loops are quite simple to understand – they are the two loops used in today’s autopilots, the former in charge of pre-defined automatic actions, the latter in charge of learning and adapting during unexpected situations – the third one, concerning the structural coupling of each node with the overall network, is more complicated (see Roberto’s comment).

F. Varela argued that the connections between termites’ micro cognitive domains happen through a sort of overall structural coupling with the environment (the colony), using the so-called “network field”. This is a sort of common space with gradients of chemical and electromagnetic fields, intertwined with tactile and metabolic information, capable of triggering (when certain thresholds are crossed) collective reactions, thus integrating the micro cognitive worlds of the living entities for the common survival.

Applying this metaphor to future networks would mean thinking about this third control loop as a dynamic game of controllers. In the usual formulation of game theory an equilibrium state can arise: loosely speaking, this equilibrium is in some sense analogous to thermal equilibrium and reflects the static nature of the game itself. If the game is instead allowed to be dynamic, with the rules able to change according to the states of the controllers, then there can also be dynamic equilibria, analogous to a non-equilibrium steady state. Such a game could again be described using dynamical systems theory. Under learning, chaotic dynamics can arise, and the game may fail to converge to a Nash equilibrium (the following paper highlights this). Understanding these dynamics is essential.

http://www.santafe.edu/media/workingpapers/01-09-049.pdf
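To make this tangible, here is a small numerical sketch (my own construction, in the spirit of the game the paper analyses, not the authors’ code): replicator-style learning dynamics in rock-paper-scissors never settle at the mixed Nash equilibrium; the strategy mix keeps orbiting around it.

```python
# A small numerical sketch (my own construction, not the paper's code):
# replicator learning dynamics in rock-paper-scissors do not converge to
# the mixed Nash equilibrium (1/3, 1/3, 1/3); the mix keeps orbiting it.
import numpy as np

# Zero-sum payoff matrix for rock-paper-scissors: win = 1, lose = -1, tie = 0.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

x = np.array([0.5, 0.3, 0.2])     # initial mixed strategy, away from equilibrium
dt = 0.01
trace = []
for _ in range(20000):
    f = A @ x                     # fitness of each pure strategy against the mix
    x = x + dt * x * (f - x @ f)  # replicator equation: dx_i = x_i (f_i - mean f)
    trace.append(x.copy())

# The distance from Nash stays bounded away from zero: the orbit cycles
# forever (Euler integration drifts a little, but there is no convergence).
print([round(float(np.linalg.norm(p - 1/3)), 3) for p in trace[::4000]])
```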

Let me give a more concrete example. Currently available controllers for resource allocation (congestion control mechanisms like TCP are examples of large distributed control loops) derive the state for a desired equilibrium point, and they don’t take into account the transient behaviors typical of closed-loop systems.

Have a look at this brilliant paper: http://www.statslab.cam.ac.uk/~frank/PAPERS/PRINCETON/pcm0052.pdf

The Internet’s TCP implicitly maximizes a sum of utilities over all the connections present in the network; the paper shows the shape of the utility function for a single connection (by the way, economics has a similar concept of utility function, basically describing how much satisfaction a person receives from a good or service). But transient behaviors are not taken into account: even if we have a globally asymptotically stable equilibrium point (corresponding to the maximization of the utility function), it is not clear how the network operates during the transients (instabilities or even phase transitions may occur).
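Here is a hedged sketch of the framework the paper describes (the topology, capacities and gains below are invented for illustration): each connection adjusts its rate using its own utility gradient minus the congestion “prices” of the links it crosses.

```python
# A sketch of Kelly-style network utility maximization (topology, capacities
# and gains invented for illustration): connection s has utility
# U_s(x_s) = w_s * log(x_s) and adjusts its rate from its utility gradient
# minus the congestion "prices" of the links it crosses.
import numpy as np

# 3 connections over 2 links: R[l, s] = 1 if connection s crosses link l.
R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
c = np.array([1.0, 2.0])          # link capacities
w = np.array([1.0, 1.0, 1.0])     # utility weights

x = np.full(3, 0.1)               # initial sending rates
k = 0.005                         # controller gain
for _ in range(40000):
    y = R @ x                                 # load on each link
    price = 50.0 * np.maximum(0.0, y - c)     # penalty price of an overloaded link
    x = np.maximum(x + k * (w / x - R.T @ price), 1e-6)
print(np.round(x, 2))   # close to the proportionally fair split [0.58, 0.42, 1.58]
```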

Interestingly, it has been demonstrated (at Caltech) that, in order to take into account the real-time performance of the network, the congestion control problem should be solved by maximizing a proper utility functional, as opposed to a utility function. It is known that developing a trajectory in a state space, or determining other properties of a dynamical system, requires dealing with functional equations, which are often quite unpopular as they are hard to handle. At least up to yesterday: today there are novel neural network frameworks capable of solving functional equations (I’ll elaborate on this in a future post). This adaptive control, based on maximizing a proper utility functional, enables the nodes to continuously learn the network field and adapt to changing conditions.
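To illustrate the difference (this is my own toy construction, not the Caltech formulation): a functional scores a whole rate trajectory rather than a single operating point, so we can reward utility at every step while charging for abrupt transients, and climb the gradient of the result.

```python
# An illustrative sketch (my own construction, not the Caltech formulation)
# of maximizing a utility *functional* instead of a utility *function*:
# score an entire discretized rate trajectory x[0..T-1], rewarding per-step
# utility U(x) = log(x) but penalizing abrupt transients.
import numpy as np

T, beta, lr = 100, 5.0, 0.005   # horizon, transient penalty, step size
x = np.full(T, 0.1)             # rate trajectory; x[0] is a fixed initial rate

for _ in range(20000):
    grad = 1.0 / x                      # dU/dx at every step (U = log x)
    diff = np.diff(x)                   # transient term: -beta * sum(diff^2)
    grad[:-1] += 2 * beta * diff        # its gradient, spread over the x[t]
    grad[1:]  -= 2 * beta * diff
    grad[0] = 0.0                       # keep the initial condition fixed
    x = np.clip(x + lr * grad, 1e-3, 1.0)   # 1.0 acts as a capacity cap

# The optimal trajectory ramps up smoothly toward capacity instead of
# jumping there: the functional cares about the transient, not just the end.
print(np.round(x[:10], 3))
```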

The three control loops of an enactive node of future networks

My conclusion is that one way to design the so-called “third control loop”, capable of coupling nodes with the overall network dynamics (think of F. Varela’s network field), is to formulate it as a set of utility functionals (not just functions) to be maximized by the nodes.

Collective Intelligence of (Enactive) Networks (part one)

Thursday, September 8th, 2011 by Antonio Manzalini

I’ve been very fascinated by the holistic vision of Roberto’s post (of 16th August). Future networks will hook up such a large number of highly dynamic nodes and devices that the number of connections will require going beyond today’s architectural paradigm. The emergence of such a ubiquitous self-adaptive connectivity space will also dramatically change the traditional concepts of network architecture. For example, at the edges we expect to see the emergence of dynamic games of sub-networks (belonging to the same or different Operators), supporting all sorts of services by using local processing and storage resources. Future networks will thus look like large distributed complex systems, characterized as such by dynamic, even chaotic, behavior. This vision will exacerbate problems (still open today) impacting the scalability and stability of end-to-end connectivity, such as dynamic assignment of addresses, dynamic routing, and new strategies for admission control and congestion control of flows.

It will no longer be possible to adopt traditional management and control (declared objectives and observed behavior) for future networks. Dynamic or static modeling for (open or closed loop) control will become very complicated and unstable if not supplemented with a variety of novel control techniques, including (nonlinear) dynamical systems, computational intelligence, intelligent control (adaptive control, learning models, neural networks, fuzzy systems, evolutionary and genetic algorithms), and artificial intelligence.

Out of this “chaos” of interactions, a collective network intelligence will emerge. This reminds me of the collective intelligence emerging in a termite nest. In other words, this future network fabric will behave like a dynamically changing complex system (around stability attractors) where (autonomic) nodes will look like simple living systems (e.g. termites).

I wish to elaborate a little on the role of autonomics in this future self-adaptive network fabric, taking direct inspiration from the modeling of the adaptive cognitive capabilities of simple living systems. It helps to recall the theory of the “enactive” behavior of living systems by F. Varela (see “The Embodied Mind: Cognitive Science and Human Experience”, Cambridge, MA: MIT Press).

This theory (also considered in Artificial Intelligence) argues that the adaptive behavior of simple living systems is based on two interrelated points: 1) perception consisting of perceptually guided action, and 2) cognitive structures, emerging from recurrent sensorimotor patterns, which enable action to be perceptually guided. In particular, it is argued that simple living systems cross several diverse cognitive domains (or micro-worlds) generated by their interactions with the external environment. Within a micro-world, behavior is simply determined by pre-defined sensorimotor loops, simple and fast; from time to time breakdowns occur, unexpected disruptive situations that force a change from one cognitive domain (i.e. one micro-world) to another. Importantly, this bridging (during breakdowns) is ensured by the “intelligence” of the nervous system (allowing a new adaptation and the consequent learning of new sensorimotor loops). Nature has successfully exploited this behavior with three control loops (which I’ve tried to represent in the following picture; a toy sketch in code follows it).

The three control loops of the enactive behavior
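To fix ideas, here is a toy sketch of how the three loops might map onto an autonomic node; all names, rules and thresholds are my own illustration, not Varela’s formalism.

```python
# A toy sketch of the three enactive loops on an autonomic node (names,
# rules and thresholds are my own illustration, not Varela's formalism).
# Loop 1 fires fast pre-defined reactions; loop 2 learns a new rule on a
# "breakdown" (no rule matches); loop 3 couples the node to a shared
# "network field" that every node reads and writes.

class EnactiveNode:
    def __init__(self, rules):
        self.rules = dict(rules)      # loop-1 sensorimotor rules: percept -> action

    def act(self, percept, field):
        # Loop 3: the shared field biases behaviour before any local rule fires.
        if field.get("alarm", 0.0) > 0.8:
            return "fallback"         # collective reaction when a threshold is crossed
        # Loop 1: fast, pre-defined reaction.
        if percept in self.rules:
            return self.rules[percept]
        # Loop 2: breakdown -- no rule matches; adapt, then remember the new loop.
        action = self.explore(percept)
        self.rules[percept] = action
        return action

    def explore(self, percept):
        return "probe"                # placeholder for a real learning step

# Nodes also write the field, so local reactions aggregate into a global signal:
field = {"alarm": 0.0}
nodes = [EnactiveNode({"congested": "reroute"}) for _ in range(10)]
for n in nodes:
    if n.act("congested", field) == "reroute":
        field["alarm"] += 0.1         # stigmergy: each reaction raises the field
```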

I’ve been surprised to discover that these same principles (adaptive control based on three control loops, supported by learning models and neural networks) are being adopted for designing the autopilots of airplanes, which are, by the way, complex systems.

In the next post I’ll elaborate on this, and on how enaction could be exploited in autonomic nodes and devices.

My take is that enactive networks are just around the corner. Let’s be ready.

0-Configuration Networks – Part 2 (of 3)

Wednesday, February 9th, 2011 by Antonio Manzalini

In part 1 of this post, we mentioned that virtualization will allow designing and running, on the same physical network, different coexisting logical architectures (fitting different network-wide goals and business demands). To achieve this, Operators will have to struggle to “program” their networks. In practice, this requires translating network-level objectives (e.g., connectivity matrix, load balancing, traffic engineering goals, survivability requirements) into low-level configuration commands (mostly hand-made) on the individual network elements (e.g., forwarding tables, packet filters, link-scheduling weights and queue-management parameters, as well as tunnels and NAT mappings).

This task is becoming more and more time-consuming and fragile, for one main reason: IP (and Ethernet) originally embedded the path-computation logic in distributed protocols whose complexity grew incrementally with the growth of the Internet. The high complexity and dynamicity of future networks now cast serious doubts on the efficiency of extending those distributed control protocols further to support network-level objectives and business goals (in future dynamic and complex environments).

It makes more sense, in principle, that 1) the decision logic is not hardwired in protocols, and that 2) the results of its elaboration (forwarding tables, packet filters, link-scheduling weights, queue-management parameters, tunnels, network address translation mappings, etc.) are directly actuated in the network elements through fast and simple self-configuring automatic control loops.

What is needed is a kind of decision-logic plane (which could, for example, be implemented in the Cloud) that uses a set of algorithms to turn network-level objectives directly into the packet-handling state to be configured in the nodes. This is achieved by using a global knowledge field overlooking the network: this knowledge field is a sort of virtual representation of the network.

Each node contributes to creating this global knowledge field, for example through self-discovery mechanisms; in particular, self-discovery automatically discovers all the physical elements and components in a network and creates logical identifiers to represent them (today this is still largely hand-made). This global knowledge field will thus represent the updated state of the data plane, including all information about each network element (its name, resources, and physical interfaces) and all node interactions. A minimal sketch of such a decision plane follows.
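In the sketch below (topology, names and the shortest-path objective are all invented for illustration), links reported by self-discovery form the knowledge field, and the decision plane compiles a network-wide goal directly into per-node forwarding tables.

```python
# A minimal sketch of a decision-logic plane (topology, names and the
# shortest-path objective invented for illustration): self-discovered links
# form the global knowledge field, and the plane compiles a network-wide
# goal into the forwarding table of every node.
import heapq

# Global knowledge field: links reported by self-discovery, with weights.
links = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 5, ("c", "d"): 1, ("b", "d"): 4}
graph = {}
for (u, v), w in links.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

def next_hops(src):
    """Dijkstra from src, returning {destination: next hop} for src's table."""
    dist, table = {src: 0}, {}
    heap = [(0, src, None)]
    while heap:
        d, node, first = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        if first is not None:
            table[node] = first           # first hop on the best path to node
        for nbr, wgt in graph[node]:
            if d + wgt < dist.get(nbr, float("inf")):
                dist[nbr] = d + wgt
                heapq.heappush(heap, (d + wgt, nbr, nbr if first is None else first))
    return table

# The decision plane pushes one table per node; the nodes just actuate it.
forwarding = {node: next_hops(node) for node in graph}
print(forwarding["a"])   # {'b': 'b', 'c': 'b', 'd': 'b'}
```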

So, the other way that I mentioned in part 1 is to make nodes as intelligent (or as stupid) as termites: the coupling of the decision plane and the global knowledge field will then take the form of the emergent super-organism behind a termitary, a collective intelligence. The key concept is that the behaviour of the network can now be flexibly modelled and programmed by considering only very simple interactions between nodes, or between a node and its context (dictating goals and adaptations), without having to consider other complexities.

This is a way to exploit the Thompson-Varela theory of “upward” and “downward” causation in future networks. Emergence has two directions: first, there is a local-to-global “upward causation”, as a result of which novel processes emerge (e.g. the collective knowledge); then there is a global-to-local “downward” determination, whereby global characteristics have local impacts (e.g. on the nodes’ self-configurations).