On Enterprises and Service Providers

“Service provider” and “enterprise” are often seen as opposites in networking circles. (For the purposes of this article, “enterprise” means “business” rather than “large business.”) I’m fortunate to have worked closely with both service providers and enterprises. The contrast is indeed sharp. Service provider networks are the product to be sold; they need to be fast, responsive, and connected above all else. The way a service provider network consumes equipment and services is fundamentally different from the way an enterprise does. This has more of an impact on how network function virtualization (NFV) works for the service provider than it does for the enterprise. Of course, all service providers also have an enterprise network for their back-office functions.

The usual enterprise network is divided into two distinct classes of traffic. We call these “north/south” and “east/west,” after the direction of traffic flow in most network diagrams. It is normal to have “the internet” at the top (north) of the diagram. As we drop down toward the bottom of the page, we pass through firewalls (in all but the smallest networks, multiple layers of firewall at each step south), intrusion detection systems (IDSs), the demilitarised zone (DMZ), front-end systems, and application servers. Finally, deep within the security layers of the network, we reach the databases, where the most valuable assets of any modern business reside. This style of layered network is so prevalent that networking jargon assumes it: “out” of the network is “north,” no exceptions. The aim of these networks is to filter out malign traffic while letting legitimate traffic through. Security is the single most important aspect of the design. In most cases, the bandwidth and capabilities of the internal network far outstrip those of the internet connection.
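To make that layered, default-deny posture concrete, here is a simplified sketch of what the edge of such a network might look like as an nftables ruleset. The interface names (`wan0`, `dmz0`, `lan0`), addresses, and ports are purely illustrative assumptions, not taken from any real deployment:

```
# Hypothetical sketch of a north/south, default-deny enterprise edge.
# Interface names, addresses, and ports are illustrative only.
table inet edge {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # Allow return traffic for sessions already established.
        ct state established,related accept

        # North -> DMZ: only web traffic from the internet reaches front ends.
        iifname "wan0" oifname "dmz0" tcp dport { 80, 443 } accept

        # DMZ -> internal: only the application tier may reach the database.
        iifname "dmz0" oifname "lan0" ip daddr 10.0.2.10 tcp dport 5432 accept

        # Everything else heading south hits the default drop policy.
    }
}
```

Note that the default policy is `drop`: each step southward must be explicitly permitted, which is exactly the filtering posture described above.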
The service provider network is the antithesis of this. Service providers directly peer with multiple other service providers, so there are many “outs.” The speed at which traffic can be manipulated and passed through the network is the most important aspect. Whereas enterprises can struggle to saturate internal bandwidth, service providers are always hungry for more bandwidth. The north/south–east/west divide is almost nonexistent. This focus on bandwidth over direction leads service provider networks to be heavily focused on routers. While the enterprise is happy to have appliances running IDSs or firewall functions, for the service provider each of these appliances is a bottleneck. Running firewall functions or IDSs within routers means that fewer hops are required for traffic, which improves transit speed.
For enterprises, the biggest cost within a network is usually the compute system. That is, after all, where the data is stored and primarily manipulated. The idea of shifting a network function from an appliance in a rack down to a virtual machine, or even the hypervisor, is a fairly sensible one once the technology exists and the compute is powerful enough. For the service provider, though, the main network consists almost solely of switches and routers and fibre termination. While there may be the odd time server or web proxy, the idea of investing in compute in order to implement vital functions is very strange. Moving from custom-designed ASICs that can process dedicated functions with extreme speed to a general-purpose CPU would be a backward step. It hasn’t helped that the operating systems used on general-purpose CPUs (usually Linux-based) were historically not all that good at switching and routing, compared to the dedicated operating systems on switches and routers.
However, for the modern ISP, two factors are driving a move from dedicated appliances and routers to NFV. The first is that as the general-purpose CPUs within switches have become quicker, using Linux as their OS has become viable. Firewalls such as Cisco’s ASA are now Linux based, and they have been for a few years. In addition, many white-box manufacturers produce “open” switches that run Cumulus Linux and similar open-switch OSs. These changes have made the NFV idea less scary and more familiar.
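As an illustration of how familiar these open switch OSs feel, here is a minimal BGP peering stanza in the vtysh-style syntax used by FRRouting (the routing suite shipped with Cumulus Linux). The AS numbers and addresses are drawn from documentation ranges and are purely illustrative:

```
! Hypothetical FRR BGP stanza; ASNs and addresses are documentation values.
router bgp 64512
 bgp router-id 192.0.2.1
 neighbor 198.51.100.1 remote-as 64513
 neighbor 198.51.100.1 description transit-provider-a
 !
 address-family ipv4 unicast
  network 203.0.113.0/24
  neighbor 198.51.100.1 activate
 exit-address-family
```

The point is not the specifics but the register: this reads like any traditional router CLI, yet it is configuring routing daemons on an ordinary Linux system.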
The second factor is that with the raw increase in power, and in particular the common adoption of multicore CPUs over the past few years, the performance gap between a general-purpose CPU and an ASIC in terms of raw speed is narrowing. For simple functions such as raw switching, an ASIC will always outcompete a general-purpose CPU. But for adding new features, such as novel overlay technologies, the general-purpose CPU wins. CPUs are fast enough now to route at multiple 10 GbE speeds even while passing traffic through more complex functions such as intrusion detection and firewalling. With networking evolving at a faster rate than ever before, investing in compute no longer looks as inadvisable for service providers as it once did.
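The basic building blocks of the router role are already present in a stock Linux host. A sketch of turning one into a simple IPv4 router, using iproute2 and a sysctl (interface names, addresses, and the next hop are illustrative assumptions):

```
# Hypothetical sketch: a stock Linux host as a basic IPv4 router.
# Interfaces, addresses, and next hop are illustrative only.
sysctl -w net.ipv4.ip_forward=1              # enable packet forwarding
ip addr add 192.0.2.1/24 dev eth0            # "north"-facing interface
ip addr add 198.51.100.1/24 dev eth1         # "south"-facing interface
ip route add 203.0.113.0/24 via 198.51.100.2 # static route southward
```

NFV builds on exactly this foundation, layering firewalling, IDS, and overlay functions onto the same general-purpose CPU that does the forwarding.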