Cisco VM-FEX Limitations

The week before VMworld, the 8/25 Virtualization Security Podcast featured Greg Ferro (@etherealmind), CCIE, to discuss Cisco VM-FEX and its impact on virtualization and cloud security. VM-FEX is a method by which the fabric of a UCS top-of-rack switch is extended to the VM, but only if the VM is using VMDirectPath. So does this impact virtualization and cloud security in any way?

The long and short of it is no: VM-FEX does not really change the virtualization and cloud security footprint from what is already known. Why is this? Because all Cisco VM-FEX does is provide a multi-root I/O virtualization device that presents itself as multiple single-root I/O virtualization cards, so that vSphere can take advantage of VMDirectPath from within the VMs directly through to the top-of-rack switch within a Cisco UCS cabinet.

But couldn't I already do that within a Cisco UCS deployment? Yes, you could, but VM-FEX offloads much of the higher-order processing and binds a VMDirectPath port on the VM directly to a switch port on the top-of-rack switch. That part is new: with just the Palo adapter, there is no binding of ports to the top-of-rack switch. This means that, with the help of VMDirectPath and VM-FEX, Cisco's UCS hardware and switches will know which VMs are connected to them at all times, bypassing the virtual switch entirely.

While the podcast does not state that bypassing the virtual switch entirely is possible, vSphere provides the capability, and with Cisco VM-FEX you now have a way to connect a VM directly to the hardware with better overall performance. However, this support is limited to 4 VMDirectPath devices per VM, with up to 8 VMDirectPath devices per host. In essence, you now have a limit of a maximum of 8 VMs per host that can make use of VMDirectPath, and therefore VM-FEX. At least, that is how I interpret the vSphere 5 Configuration Maximums. This is up from the 4 devices available in vSphere 4.x.
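As a quick way to gauge what a host could even offer to VMDirectPath, the vSphere 5 `esxcli` namespace can enumerate the PCI devices an ESXi host sees; VM-FEX presents its dynamic vNICs as PCI devices that can then be marked for passthrough. A minimal sketch, run from the ESXi shell (the passthrough toggle itself is done in the vSphere Client under Configuration, and the output format and vendor strings vary by build and adapter, so the grep pattern below is an assumption to adjust for your hardware):

```shell
# List every PCI device the ESXi host sees; candidates for
# VMDirectPath (including VM-FEX dynamic vNICs) appear here.
esxcli hardware pci list

# Narrow the stanza-formatted output to Cisco VIC entries
# (hypothetical vendor-string match; adjust for your adapter).
esxcli hardware pci list | grep -i "cisco"
```

Remember that however many eligible devices the list shows, the vSphere 5 Configuration Maximums still cap you at 4 VMDirectPath devices per VM and 8 per host.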

Are there more things to consider when using VM-FEX? Yes, there are. You now have to worry about any additional risks associated with the VM-FEX hardware, but also about the number of VMs that can take advantage of the new functionality. The VM-FEX hardware adds yet another layer to the networking stack within the UCS device, a much-needed layer. However, with consolidation ratios exceeding 20 VMs per blade, it would have been nice if more VMs could take advantage of the VM-FEX capabilities for performance reasons.

All in all, a very intriguing concept with limited usage, except for those 8 single-vNIC VMs that need wire-speed performance without the virtual network layer getting involved. Having non-VMDirectPath VMs and VMDirectPath VMs on the same blade may also be a bit confusing to administrators, so ensure you keep good network diagrams that include the virtual network.




9 Comments on "Cisco VM-FEX Limitations"

Hi Ed, I apologize for not being able to respond sooner. One of the key differentiators with VM-FEX is that it does not use VMDirectPath; instead it makes use of the Virtual Ethernet Module (VEM) from the Nexus 1000V, which as you know replaces vSphere's Distributed vSwitch. No licensing is required for the Nexus 1000V, as you are not using its Virtual Supervisor Modules (VSM); instead the Cisco UCS Fabric Interconnects (UCS 6xxx) act as the switch control plane. This allows VM-FEX to fully support vMotion, etc. without the usual limitations of VMDirectPath. The maximum number of vNICs per… Read more »
Hello Andrew, Thank you for the update. The Cisco VM-FEX documentation states that VMDirectPath is required to extend the fabric to the VM. By extending the fabric only to the virtual switch, you are still short of the VM, sort of a last-mile issue I would think. But still, extending the fabric to the vSwitch is a very good thing, so a port on the Nexus is similar to a port on the hardware switch, which may be equivalent to saying it is extended to the VM, but there is still quite a bit between the N1KV switch port and the VM, such as… Read more »

Ah, I see, Cisco is now doing both. What I described they now call VM-FEX "Emulated mode" vs. "PCIe Pass-Through or VMDirectPath mode," the latter seemingly being a high-performance option. Originally VM-FEX (formerly VN-Link) was marketed as a management benefit: manage the network ports connected to both your UCS physical blade ports and any VMs running inside of UCS from the same switch (the UCS 6xxx fabric interconnects), bypassing the additional network-management layer imposed by virtualization. I would expect any spin on the security benefits to be just that, marketing spin.


Hello Andrew,

However, configuring the vSwitch port from one location does not configure the security associated with the vNIC of the VM, specifically the vShield App, Edge, Zones, and Endpoint security that in some cases sits between the VM vNIC and the port on the vSwitch. So vSwitch/pSwitch security settings do not set everything and therefore do not control all aspects of virtual network security. If you use VMDirectPath, then this is the case.

Best regards,
Edward L. Haletky


Nice post, it is good to see some information around VM-FEX. I am confused about the "VM-FEX hardware adds yet another layer to the networking stack within the UCS device" statement. Doesn't it remove a layer? I mean, vNICs already exist at the hypervisor switch. Essentially, the hypervisor switch layer is removed. I see this as a VM directly attached to the physical switch. Just curious about your point of view.

Hello David, VM-FEX is an additional piece of hardware; as such, it is at least a new interface. However, from what I understand, the hypervisor switch layer is not removed: the FEX ends at the Cisco N1KV, so that the ports on the N1KV are ports on the 6100. If it removed a layer, it would terminate at the VM and not require the N1KV or any dvSwitch. This is the concern I have; I am looking at physical (and logical within the hypervisor) layers, and not the logical fabric extension that could be happening. This is also not clear… Read more »
Hi Ed, It appears there is some special sauce with VM-FEX and the VEM that allows you to have more than eight VMs utilizing DirectPath I/O (i.e., VM-FEX High Performance Mode). I was able to get ten VMs on a single host in VM-FEX high-performance mode, and all ten show DirectPath I/O as active. You can see more on my testing and additional information on VM-FEX at my blog. As far as I can tell, ESXi doesn't even really see the vNICs used for VM-FEX. They do not show up under the network interfaces configuration section. Additionally they… Read more »

Hello Joe,

You are correct; VM-FEX will switch from VMDirectPath to the vSwitch during vMotions. There are up to 31 possible VMDirectPath connections. There is more on this subject coming, but there are still limitations: if I have over 31 vNICs, not everything can use VMDirectPath, and there is a limit of 4 per VM. The Nexus 1000V vSwitch is still involved.

— Edward


Hi All,
I have a full suite of VM-FEX training videos at the below link.
They explain all the intricacies and config requirements of VM-FEX in both emulated and VMDirectPath modes.

Check out