Network Virtualization

It’s All about Microsegmentation

For the last eighteen months, VMware has been pushing NSX as the third pillar of its software-defined data center (SDDC). NSX has three big selling points that VMware promotes: taking control of the network, automation and orchestration, and microsegmentation. The first two are standard SDDC fare: first, pull the function into software, abstract where necessary, and orchestrate to bring operational advantage; second, break down silos and allow a more agile approach. But the last, microsegmentation, is a good place to focus for a moment.

The term “microsegmentation” is borrowed from the marketing world, where (according to Wikipedia) it refers to a more advanced form of market segmentation that groups a business’s customers into specific segments based on factors such as behavioral predictions. Microsegmentation has an analogous definition in the networking world: an advanced form of segmentation that groups servers based on various factors. Microsegmentation is in many ways a crossover of, or a subset of, both network functions virtualization (NFV) and software-defined networking (SDN), focused on the data center. The aim is to reduce and control east-west traffic in a way that hasn’t been possible before. But what’s the point?
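To make the analogy concrete, here is a minimal sketch (all names and attributes are hypothetical, not from any vendor’s API) of grouping servers into segments based on attributes such as application tier and environment:

```python
# Hypothetical sketch: grouping workloads into microsegments by attributes,
# analogous to how marketing microsegmentation groups customers by behavior.
from collections import defaultdict

workloads = [
    {"name": "web-01", "tier": "web", "env": "prod"},
    {"name": "web-02", "tier": "web", "env": "prod"},
    {"name": "app-01", "tier": "app", "env": "prod"},
    {"name": "db-01",  "tier": "db",  "env": "prod"},
    {"name": "web-03", "tier": "web", "env": "test"},
]

def microsegments(workloads, keys=("tier", "env")):
    """Group servers into segments keyed on the chosen attributes."""
    segments = defaultdict(list)
    for w in workloads:
        segments[tuple(w[k] for k in keys)].append(w["name"])
    return dict(segments)

print(microsegments(workloads))
# Production web servers land in one segment, test web servers in another,
# so policy can be applied per segment rather than per subnet.
```

The point of the sketch is that segment membership is driven by workload attributes, not by where a server happens to sit on the physical network.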

Microsegmentation is, in effect, the ability to virtualize network functions that have traditionally sat at the edge: firewalls, load balancers, intrusion detection. In an ideal “three-tier” world, all of these would exist at multiple points within the network, but due to the cost and the sheer physicality of the devices that implement them, they have traditionally sat at the perimeter. The idea of running these services in VMs is not new: vShield Edge has run that way for a few years, and vendors such as F5 have been creating virtual versions of their products. It has been possible to run software firewalls and load balancers within VMs for at least five years, no matter what virtualization technology is used. But just P2Ving the appliance gets us only so far; the complexity of the underlying network has still held things back. With no scalable way to route outside the hypervisor, traffic still had to jump from the compute hosts to the network core and back to the virtual appliance. This could lead to much more traffic on the LAN, with its own set of costs.
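As a rough illustration of what moving the firewall close to the workload buys, here is a hedged sketch of a distributed-firewall rule check; the rule table and segment names are invented for illustration, not NSX’s actual API. The key idea is that the check runs in the hypervisor, at each workload’s virtual NIC, so east-west traffic is filtered locally rather than hairpinning out to a perimeter appliance:

```python
# Hypothetical sketch of a distributed firewall: each rule allows a specific
# flow between segments, and anything unmatched is denied by default.
RULES = [
    # (source segment, destination segment, port, action)
    ("web", "app", 8080, "allow"),
    ("app", "db",  5432, "allow"),
]

def evaluate(src_segment, dst_segment, port):
    """Return the first matching rule's action; default deny."""
    for s, d, p, action in RULES:
        if (s, d, p) == (src_segment, dst_segment, port):
            return action
    return "deny"

print(evaluate("web", "app", 8080))  # allowed: web tier may call the app tier
print(evaluate("web", "db", 5432))   # denied: web tier may not reach the database directly
```

Because every hypervisor holds the same rule table and enforces it at the vNIC, two VMs on the same host never need to leave the host to be filtered against each other.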

Cisco attacks this problem using ACI, allowing the switch fabric to be programmed to react to specific flows and to place the services required in the path of the flow, in a flexible and scalable way. Cisco combines this with a vendor-neutral set of virtual machines that handle some of the network functions in a flexible way.

Nicira (which, of course, was bought by VMware; its product became NSX) and Midokura took the approach of moving the network functions into the hypervisor, thereby taking advantage of the growing dedicated networking capabilities of modern Intel hardware. CPUs are gaining more and more dedicated switching capability, and Intel’s roadmap has quoted 100 Gb/s of switching per core with no CPU overhead. Whether this gives the NFV approach the boost in power it appears to promise remains to be seen. But it seems safe to say that at least some network functions in the SDDC will be virtualized over the long term.

What are the drawbacks? Microsegmentation focuses on east-west traffic optimization and security, especially fine-grained security. These work well in the virtual world. But what about the rest of the estate? What happens when we need to secure that old SPARC application server, or the database server that is too big to sensibly virtualize? ACI will still help to some extent here, but the other NFV systems hit a sticking point, usually bottlenecking all traffic down a limited number of east-west routes. Finally, north-south traffic usually struggles in the same fashion. Most of the benefits of microsegmentation are in the proximity of the network functions to the workload. Where this isn’t possible, the solutions look less attractive.

So, is it really all about microsegmentation? Well, not yet. There are limitations in every system, and some will feel the pinch of those limitations more than others. But the idea of moving more functions into the software stack, automating them, and moving to ever more fine-grained access permissions is certainly an attractive one and, in many, many cases, a promising one.

Anthony Metcalf
Anthony Metcalf (vantmet) has been in IT for over 10 years, working with UK firms in industries from Engineering to Law, along with service providers. Anthony works in all areas of the data centre, from networking to automation, and has recently been blogging the VCP-NV experience at PlanetVM.net.