We know that the throughput of a router is mainly limited by routing processing, which bounds the maximum number of packets the router can handle per unit of time: as a consequence, there is an inevitable tradeoff between the number of ports (node degree) and the speed of each port (bandwidth per connection). Router vendors cannot build a router that has both a large degree and a large bandwidth per connection, mainly because of this routing-processing limitation.
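Just to make the tradeoff concrete, here is a back-of-the-envelope sketch (the numbers are purely illustrative assumptions, not vendor data): if routing processing caps the box at R packets per second and the average packet carries S bits, then degree × per-port bandwidth cannot exceed R × S, so the same processing budget buys either a few fast ports or many slow ones.

    # Illustrative numbers only (assumptions, not vendor data)
    max_pps = 150e6              # packets/s the routing processing can sustain
    avg_packet_bits = 500 * 8    # assume 500-byte average packets
    total_bps = max_pps * avg_packet_bits   # ~600 Gbps of aggregate capacity

    # The same processing budget can buy a few fast ports (core-like)
    # or many slower ports (edge-like), but not both at once.
    for ports in (4, 16, 64):
        per_port_gbps = total_bps / ports / 1e9
        print(f"{ports} ports -> ~{per_port_gbps:.0f} Gbps per port")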
Normally, nodes in the core of the network have a large bandwidth per connection and thus a small degree, and vice versa at the edge: typically the degree of an edge router is almost five times larger than that of a core router.
On the other hand, consider that advances in processing technology will very soon make it possible to build a 100 (or even more) Gbps software router, or simply software router architectures capable of parallelizing routing functionality both across multiple servers and across multiple cores within a single server (e.g. RouteBricks). It will be possible to build high-speed software routers using low-cost, commodity hardware. This means that the routing-processing limitation could be overcome by using the huge amount of processing power made available in large data centres (if you prefer, we'll bring the control plane of the s/w router – separated from the h/w – into the Cloud).
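As a rough illustration of what parallelizing routing functionality across cores can look like (a minimal sketch under my own assumptions, not the actual RouteBricks implementation), incoming packets can be hashed on their flow identifier and dispatched to per-core workers, so aggregate throughput grows with the number of cores:

    import hashlib
    from multiprocessing import Process, Queue

    NUM_WORKERS = 4   # assumption: one forwarding worker per CPU core

    def flow_hash(src, dst):
        # Hash the flow identifier so all packets of a flow land on the same core
        h = hashlib.sha1(f"{src}->{dst}".encode()).digest()
        return h[0] % NUM_WORKERS

    def worker(q):
        # Each core runs its own lookup/forwarding loop on its share of the traffic
        while True:
            pkt = q.get()
            if pkt is None:
                break
            # ... longest-prefix-match lookup and forwarding would happen here ...

    if __name__ == "__main__":
        queues = [Queue() for _ in range(NUM_WORKERS)]
        procs = [Process(target=worker, args=(q,)) for q in queues]
        for p in procs:
            p.start()
        # Dispatcher: spread packets across cores by flow hash
        for pkt in [("10.0.0.1", "10.0.1.5"), ("10.0.0.2", "10.0.1.9")]:
            queues[flow_hash(*pkt)].put(pkt)
        for q in queues:
            q.put(None)
        for p in procs:
            p.join()

Keeping all packets of a flow on the same core avoids reordering, which is essentially what modern NICs do in hardware with receive-side scaling.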
Imagine an Operator running a virtual network of such powerful s/w routers on a Cloud and using a low-cost physical infrastructure (based on standard hardware) simply to forward the packets. This would change – in principle – the (economic) equation of the network: overprovisioning connectivity rather than just overprovisioning bandwidth. Overprovisioning connectivity pays off better than overprovisioning capacity: it makes it possible to create a very large number of topologies to choose from, even almost randomly (as in VL2 and BitTorrent), or to program and control QoS at higher levels. Up to today, overprovisioning connectivity in a network has been more expensive than overprovisioning capacity, but tomorrow the equation may change.
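To give an idea of what choosing paths "almost randomly" can mean in practice, here is a minimal sketch inspired by the Valiant load balancing idea that VL2 builds on (the node names are made up): each flow is simply bounced off a randomly chosen intermediate switch, and with enough connectivity the random choice alone spreads the load, with no per-flow traffic engineering.

    import random

    # Hypothetical overprovisioned fabric: every edge switch can reach
    # every intermediate switch, so any intermediate is a valid detour.
    intermediates = ["int-1", "int-2", "int-3", "int-4"]

    def pick_path(src_edge, dst_edge):
        # Valiant-style load balancing: src -> random intermediate -> dst
        via = random.choice(intermediates)
        return [src_edge, via, dst_edge]

    print(pick_path("edge-A", "edge-B"))   # e.g. ['edge-A', 'int-3', 'edge-B']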
In a data center we already have overprovisioning of connectivity, but the story is different: the network accounts for a relatively small fraction of the total cost compared to servers, electricity and cooling. So overprovisioning connectivity makes economic sense (by the way, in data centers traffic demands are quite volatile and not well understood, so overprovisioning connectivity is strictly necessary; in an operator's network, on the other hand, traffic fluctuates over time rather than over space, so today it is mitigated by overprovisioning capacity).
In principle, that would mean an Operator could build a competitive advantage by developing a Virtual Data Centre, using big data to control the network and to overprovision connectivity in Virtual Networks… on top of a (very) low-cost (Opex and Capex) standard h/w infrastructure (with practically unlimited bandwidth).
A step further? Integrate the s/w control of IP and Optical Networks…