In the last post we elaborated on the evolution of networks (at the edge) into a large nonlinear complex fabric dynamically interconnecting a huge number of communicating entities (nodes, machines, Users’ devices, sensors-actuators, etc.).
One way to interpret such a large dynamical fabric is that its input-output maps depend on states of states, through nonlinear relations whose control is highly complicated. We also argued that traditional management and control techniques should be supplemented with a variety of novel adaptive control solutions. And Nature can help us here.
We looked at enaction theory (by F. Varela), which models the behavior of living systems (e.g. termites) in terms of three control loops: the amazing collective intelligence of a termite colony appears to be the result of the interactions of these three loops. The networks of the nest behave like a dynamically changing complex system around stability points. And this is exactly what we’re looking for in future networks. So, in principle, we may think of designing the self-* control mechanisms of the entities (nodes, devices, smart things, etc. embedding communication capabilities) that will populate future networks in terms of the three enaction control loops.
While the first two control loops are quite simple to understand – they are the two control loops used in today’s autopilots, the former in charge of pre-defined automatic actions, the latter in charge of learning and adaptation during unexpected situations – the third one, concerning the structural coupling of each node with the overall network, is more complicated (see Roberto’s comment).
F. Varela argued that the connections between the termites’ micro cognitive domains happen through a sort of overall structural coupling with the environment (the colony), via the so-called “network field”. This is a sort of common space with gradients of chemical and electromagnetic fields, intertwined with tactile and metabolic information capable of triggering (when crossing certain thresholds) collective reactions, thus integrating the micro cognitive worlds of the living entities for the common survival.
Applying this metaphor to future networks would mean thinking of this third control loop as a dynamic game of controllers. In the usual formulation of game theory an equilibrium state can arise: loosely speaking, this equilibrium is in some sense analogous to thermal equilibrium and reflects the static nature of the game itself. If the game is instead allowed to be dynamic, with the rules able to change according to the states of the controllers, then there can also be dynamic equilibria, analogous to a non-equilibrium steady state. Such a game can again be described using dynamical systems theory. Under learning, chaotic dynamics can arise and the game may fail to converge to a Nash equilibrium (a result highlighted in the literature on learning in games). Understanding these dynamics is essential.
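A minimal sketch of this non-convergence phenomenon: replicator dynamics (a standard model of learning in games, not anything specific to networks) applied to rock-paper-scissors. The unique Nash equilibrium is the uniform mixture (1/3, 1/3, 1/3), yet the learning trajectory cycles around it rather than settling there – and the discrete-time (Euler) version even spirals slowly outward.

```python
import numpy as np

# Zero-sum payoff matrix for rock-paper-scissors
A = np.array([[0., -1.,  1.],
              [1.,  0., -1.],
              [-1., 1.,  0.]])

x = np.array([0.5, 0.3, 0.2])   # initial mixed strategy
dt = 0.01
for _ in range(20000):
    fitness = A @ x
    # replicator update: grow strategies that beat the current mix
    x = x + dt * x * (fitness - x @ fitness)
    x = np.clip(x, 1e-12, None)
    x /= x.sum()                # stay on the probability simplex

nash = np.full(3, 1.0 / 3.0)
dist = np.linalg.norm(x - nash)  # stays bounded away from the Nash point
```

Even after 20,000 learning steps the strategy is still far from the Nash equilibrium: the dynamics orbit it indefinitely, which is exactly the kind of behavior a static equilibrium analysis misses.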
Let me give a more concrete example. Currently available controllers for resource allocation (e.g. congestion control mechanisms like TCP, which are examples of large distributed control loops) derive the state for a desired equilibrium point, but they do not take into account the transient behaviors typical of closed-loop systems.
Have a look at this brilliant paper: http://www.statslab.cam.ac.uk/~frank/PAPERS/PRINCETON/pcm0052.pdf
The Internet’s TCP implicitly maximizes a sum of utilities over all the connections present in the network: the paper shows the shape of the utility function for a single connection (by the way, economics has a similar concept of a utility function, basically describing how much satisfaction a person receives from a good or service). But transient behaviors are not taken into account, so even if we have a globally asymptotically stable equilibrium point (corresponding to the maximization of the utility function), it is not clear how the network operates during transients (instabilities or even phase transitions may occur).
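To make the “sum of utilities” idea concrete, here is a toy network utility maximization: two flows with log utilities (proportional fairness) share a single link of capacity C. The weights, capacity and step size are illustrative, not from the paper. A link “price” (congestion signal) mediates between the sources, and the decentralized price iteration recovers the closed-form optimum x_i = w_i·C/(w1+w2).

```python
import numpy as np

# maximize w1*log(x1) + w2*log(x2)  subject to  x1 + x2 <= C
C = 10.0
w = np.array([1.0, 3.0])
p = 1.0                           # link price (congestion signal)
for _ in range(5000):
    x = w / p                     # each source picks its rate from the price alone
    p = max(1e-6, p + 0.01 * (x.sum() - C))  # link raises price when overloaded
# converges to x = (2.5, 7.5), p = (w1 + w2) / C = 0.4
```

Note how each source only sees the price, not the other sources – this is the sense in which TCP-like control is a large distributed optimization. And note what the sketch does *not* capture: only the equilibrium point matters here; nothing constrains how the rates behave on the way there.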
Interestingly, it has been demonstrated (at CalTech) that, in order to take into account the real-time performance of the network, the congestion control problem should be solved by maximizing a proper utility functional as opposed to a utility function. It is known that developing a trajectory in a state space, or determining other properties of a dynamical system, requires dealing with functional equations, which are often quite unpopular… as they are hard to handle. At least up to yesterday: today there are novel neural network frameworks capable of solving functional equations (I’ll elaborate on this in a next post). Adaptive control based on maximizing a proper utility functional enables the nodes to continuously learn the network field and to adapt to changing conditions.
My conclusion is that one way to design the so-called “third control loop”, capable of coupling nodes with the overall network dynamics (think of F. Varela’s network field), is to formulate it as a set of utility functionals (not just functions) to be maximized by the nodes.