The week before VMworld, on 8/25, the Virtualization Security Podcast featured Greg Ferro (@etherealmind), CCIE, to discuss Cisco VM-FEX and its impact on virtualization and cloud security. VM-FEX is a method by which the fabric of a UCS top-of-rack switch is extended to the VM, but only if the VM is using VMDirectPath. So does this impact virtualization and cloud security in any way?

The long and short of it is no, VM-FEX does not really change the virtualization and cloud security footprint from what is already known. Why is this? Because all Cisco VM-FEX does is provide a multi-root IO virtualization device that presents itself as multiple single-root IO virtualization cards, so that vSphere can take advantage of VMDirectPath from within the VMs directly through to the top-of-rack switch within a Cisco UCS cabinet.

But couldn't I already do that within a Cisco UCS deployment? Yes, you could, but VM-FEX offloads much of the higher-order processing and binds a VMDirectPath port on the VM directly to a switch port on the top-of-rack switch. That part is new: with just the Palo adapter, there is no binding of ports to the top-of-rack switch. This means that with the help of VMDirectPath and VM-FEX, Cisco's UCS hardware and switches will know which VMs are connected to them at all times, bypassing the virtual switch entirely.

While bypassing the virtual switch entirely was not stated as possible within the podcast, vSphere provides the capability, and with Cisco VM-FEX you now have a way to connect a VM directly to the hardware with better overall performance. However, this support is limited to 4 VMDirectPath devices per VM, with support for up to 8 VMDirectPath devices per host. In essence, you now have a maximum of 8 VMs that can make use of VMDirectPath and therefore VM-FEX. At least that is how I interpret the vSphere 5 Configuration Maximums; this is up from the 4 devices available in vSphere 4.x.
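The arithmetic behind that interpretation can be sketched as follows. This is purely illustrative; the limits are the figures discussed above (4 VMDirectPath devices per VM, 8 per host) as I read the vSphere 5 Configuration Maximums, not values queried from vSphere itself.

```python
def max_passthrough_vms(devices_per_host=8, devices_per_vm=1):
    """How many VMs can use VMDirectPath on one host if each VM
    consumes devices_per_vm passthrough devices.  Defaults reflect
    the vSphere 5 figures discussed in the text."""
    return devices_per_host // devices_per_vm

# A single-vNIC VM consumes one VMDirectPath device, so at most
# 8 such VMs fit per host:
print(max_passthrough_vms())                  # 8
# A VM using the full 4-device-per-VM allowance shrinks that to 2:
print(max_passthrough_vms(devices_per_vm=4))  # 2
```

The per-host ceiling, not the per-VM one, is what caps the number of VM-FEX-capable VMs on a blade.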

Are there more things to consider when using VM-FEX? Yes, there are: you now have to worry about any additional risks associated with the VM-FEX hardware, as well as the number of VMs that can take advantage of the new functionality. The VM-FEX hardware adds yet another layer to the networking stack within the UCS device, a much-needed layer. However, with consolidation ratios exceeding 20 VMs per blade, it would have been nice if more VMs could take advantage of the VM-FEX capabilities for performance reasons.

All in all, a very intriguing concept with limited usage, except for those 8 single-vNIC VMs that need wire-speed performance without the virtual network layer getting involved. Having non-VMDirectPath VMs and VMDirectPath VMs on the same blade may also confuse administrators, so ensure you keep good network diagrams that include the virtual network.


Edward Haletky (384 Posts)

Edward L. Haletky, aka Texiwill, is the author of VMware vSphere(TM) and Virtual Infrastructure Security: Securing the Virtual Environment as well as VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers, 2nd Edition. Edward owns AstroArch Consulting, Inc., providing virtualization, security, network consulting and development and The Virtualization Practice where he is also an Analyst. Edward is the Moderator and Host of the Virtualization Security Podcast as well as a guru and moderator for the VMware Communities Forums, providing answers to security and configuration questions. Edward is working on new books on Virtualization.


9 comments for “Cisco VM-FEX Limitations”

  1. September 12, 2011 at 12:24 PM

Hi Ed, I apologize for not being able to respond sooner. One of the key differentiators with VM-FEX is that it does not use VMDirectPath; instead it makes use of the Virtual Ethernet Module (VEM) from the Nexus 1000V, which, as you know, replaces vSphere's Distributed vSwitch. No licensing is required for the Nexus 1000V, as you are not using its Virtual Supervisor Modules (VSM); instead the Cisco UCS Fabric Interconnects (UCS 6xxx) act as the switch control plane. This allows VM-FEX to fully support vMotion, etc., without the usual limitations of VMDirectPath.

The maximum number of vNICs per Palo adapter remains the same and depends on the model and number of uplinks to the 2104XP FEX modules in the UCS chassis. In the case of VM-FEX, both the physical ESX host and the VM guests will all consume vNICs carved out of the Palo. Assuming all 4 uplinks are connected and you're using the older M81KR card, this is a maximum of 56 vNICs per Palo card – with the newer 1280 card it's 116 vNICs per Palo card. Lots of room to grow beyond the 20 VM/host average you mentioned.
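The headroom the commenter describes can be sketched as below. The per-card capacities are taken directly from the comment (56 and 116 vNICs with all 4 uplinks connected), and the number of vNICs reserved for the ESX host itself is an illustrative assumption, not a figure from Cisco documentation.

```python
# vNIC capacities per Palo card with all 4 uplinks connected,
# as stated in the comment above (not derived independently).
PALO_VNIC_CAPACITY = {"M81KR": 56, "VIC 1280": 116}

def vm_headroom(card, host_vnics=4, vnics_per_vm=1):
    """VMs supportable after reserving host_vnics for the ESX host.
    host_vnics and vnics_per_vm are hypothetical defaults."""
    return (PALO_VNIC_CAPACITY[card] - host_vnics) // vnics_per_vm

print(vm_headroom("M81KR"))     # 52 -- well above 20 VMs/host
print(vm_headroom("VIC 1280"))  # 112
```

Either card leaves the Palo, rather than the VMDirectPath device maximums, far from being the bottleneck at typical consolidation ratios.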

  2. September 12, 2011 at 12:28 PM

    Hello Andrew,

    Thank you for the update.

The Cisco VM-FEX documentation states that VMDirectPath is required to extend the fabric to the VM. By extending the fabric only to the virtual switch, you are still short of the VM, sort of a last-mile issue, I would think. Still, extending the fabric to the vSwitch is a very good thing: a port on the Nexus becomes similar to a port on the hardware switch, which may be equivalent to saying the fabric is extended to the VM. But there is still quite a bit between the N1KV switch port and the VM, such as VMsafe-Net, which could change the behavior or content of the packets.

    Best regards,
    Edward L. Haletky

  3. September 12, 2011 at 12:39 PM

Ah, I see, Cisco is now doing both. What I described they now call VM-FEX “Emulated mode,” versus “PCIe Pass-Through or VMDirectPath mode,” the latter seemingly being a high-performance option. Originally VM-FEX (formerly VN-Link) was marketed as a management benefit: manage the network ports connected to both your UCS physical blade ports and any VMs running inside of UCS from the same switch (the UCS 6xxx fabric interconnects), bypassing the additional network management layer imposed by virtualization. I would expect any spin on the security benefits to be just that – marketing spin.

  4. September 12, 2011 at 1:01 PM

    Hello Andrew,

However, configuring the vSwitch port from one location does not configure the security associated with the vNIC of the VM, specifically the vShield App, Edge, Zones, and Endpoint security that sits, in some cases, between the VM vNIC and the port on the vSwitch. So vSwitch/pSwitch security settings do not set everything and therefore do not control all aspects of virtual network security. This is especially the case if you use VMDirectPath.

    Best regards,
    Edward L. Haletky

  5. September 12, 2011 at 4:05 PM

Nice post, it is good to see some information around VM-FEX. I am confused about the “VM-FEX hardware adds yet another layer to the networking stack within the UCS device” statement. Doesn't it remove a layer? I mean, vNICs already exist at the hypervisor switch. Essentially, the hypervisor switch layer is removed. I see this as a VM directly attached to the physical switch. Just curious about your point of view.

  6. September 17, 2011 at 8:43 AM

    Hello David,

VM-FEX is an additional piece of hardware; as such, it is at least a new interface. However, from what I understand, the hypervisor switch layer is not removed: the FEX ends at the Cisco N1KV, so that the ports on the N1KV are ports on the 6100. If it removed a layer, it would terminate at the VM and not require the N1KV or any dvSwitch. This is my concern: I am looking at the physical (and logical, within the hypervisor) layers, not the logical fabric extension that could be happening. This is also not clear in any of the documents I have seen.

    Best regards,

  7. October 15, 2011 at 7:06 PM

    Hi Ed,

It appears there is some special sauce with VM-FEX and the VEM that allows you to have more than eight VMs utilizing DirectPath I/O (i.e., VM-FEX High Performance Mode).

I was able to get ten VMs on a single host in VM-FEX High Performance Mode, and all ten show DirectPath I/O as active. You can see more on my testing and additional information on VM-FEX at my blog.

As far as I can tell, ESXi doesn't even really see the vNICs used for VM-FEX. They do not show up under the network interfaces configuration section. Additionally, they do not show up if you look at devices configured for DirectPath I/O, either in the GUI or in PowerCLI.

Now, a full memory reservation is needed for each of the VMs that you would like to have utilize DirectPath I/O, so there is somewhat of a limit on the number of VMs you can support in High Performance Mode. But it seems it's not limited by the maximum number of DirectPath I/O devices supported by ESXi.

    Joe
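The memory-reservation ceiling Joe describes can be roughed out as follows: if each High Performance Mode VM needs a full memory reservation, then host RAM, not the DirectPath I/O device maximum, becomes the practical limit. All of the numbers below are hypothetical examples, not figures from the comment or from VMware documentation.

```python
def hp_mode_vm_limit(host_ram_gb, vm_ram_gb, hypervisor_overhead_gb=4):
    """Rough ceiling on fully-reserved VMs per host: usable RAM
    (host RAM minus an assumed hypervisor overhead) divided by the
    per-VM reservation.  All parameters are illustrative."""
    return (host_ram_gb - hypervisor_overhead_gb) // vm_ram_gb

# e.g. a hypothetical 96 GB blade running 8 GB VMs:
print(hp_mode_vm_limit(96, 8))  # 11
```

Under these assumptions, the RAM ceiling (11 VMs here) already exceeds the 8-device ESXi maximum discussed in the article, which is consistent with Joe observing ten active DirectPath I/O VMs on one host.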

  8. October 16, 2011 at 9:18 AM

    Hello Joe,

You are correct, VM-FEX will switch from VMDirectPath to the vSwitch during vMotions. There are up to 31 possible VMDirectPath connections. There is more on this subject coming, but there are still limitations: if I have over 31 vNICs, not everything can use VMDirectPath, and there is a limit of 4 per VM. The Nexus 1000V vSwitch is still involved.

    — Edward

  9. August 10, 2012 at 4:45 AM

Hi all,
I have a full suite of VM-FEX training videos at the link below.
They explain all the intricacies and config requirements of VM-FEX in both Emulated and VMDirectPath modes.

    Check out ucsguru.com/category/vm-fex/

    Regards
    Colin
