In this GigaOm article, Steve Herrod, the CTO of VMware, explained that software defined data centers are “generation-proof”: they collapse disparate systems into a singularity built atop commodity x86 processors and other gear, and software provides everything that is needed to adapt the data center to new situations and new applications, and to manage everything from storage to switches to security. Although VMware will always work with hardware partners, Herrod said, “If you’re a company building very specialized hardware … you’re probably not going to love this message.”

The Promise of the Software Defined Data Center

If we deconstruct Steve’s message, several very important promises on behalf of the software defined data center (SDDC) pop out:

  1. The only data center hardware architecture you are going to need is a set of commodity x86 processors and their supporting gear. The SDDC will be smart enough to configure the hardware and allocate its resources so that each workload executes properly.
  2. SDDCs are “generation-proof”. That means that they can run not only today’s workloads, but also any workloads that may show up in the future and that may have different characteristics than current workloads. The SDDC will be able to do this because the entire data center can be configured in software (VMware’s software) to meet the needs of each new workload.
  3. Obviously, if the SDDC can run today’s workloads and tomorrow’s workloads, it must be able to run all of today’s workloads.
  4. Finally, “you will not need specialized hardware.” Workloads with widely varying characteristics and needs will all run successfully on one set of hardware, with the SDDC managing everything for the correct result.

What About VDI in Today’s Data Centers?

The last two promises in the list above are interesting when it comes to running VDI workloads in today’s data centers. It turns out that the characteristics of VDI workloads are such that today’s vSphere cannot run them very successfully on commodity hardware. Several very interesting new companies have brought to market hardware offerings that often include specialized software as well:

  • Astute Networks ViSX storage appliances allow VMware administrators to add a tier of flash storage to their environment simply by plugging in an appliance and configuring it in vCenter. This can boost tier 1 and VDI application performance by up to 10x, which is achieved by leveraging both flash storage and improved TCP and iSCSI protocol processing (see the sketch following this list).
  • Tintri VMstore seeks to reinvent storage by integrating performance management into the storage appliance, and by allowing storage to be managed directly by the VMware administrator. For VDI implementations, the relevant files are automatically placed in flash storage, dramatically improving performance and eliminating boot storms.
  • Nutanix Compute Cluster collapses CPU, memory, and storage into a single unit that includes two layers of flash and that can scale out horizontally to build a vSphere cluster. Tiers of local storage in each compute node ensure good performance, and the use of flash in particular helps eliminate VDI boot storms.
  • Pivot3 vSTAC collapses CPU, memory, two tiers of flash storage and local disk into scale out nodes that leverage local disk and flash performance.
  • V3 Appliance combines CPU, memory, and SSD storage into a scale-out appliance that offloads the heavy VDI storage workloads to the local flash in the appliance, while still leveraging the enterprise storage subsystem for the remaining storage needs.
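
The common thread in these offerings is a flash tier that vCenter can see and that the administrator can manage. As a purely hypothetical illustration of what that looks like from the administrator’s side, the sketch below uses the pyvmomi Python bindings for the vSphere API to list the datastores a vCenter knows about and to flag the SSD-backed VMFS volumes; the vCenter name and credentials are placeholders, and nothing here is specific to any of the vendors above.

```python
# Hypothetical sketch: enumerate datastores in vCenter and flag SSD-backed VMFS
# volumes, e.g. after a flash appliance has been presented to the hosts.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()        # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter and credentials
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        # VMFS volumes carry an 'ssd' flag (vSphere 5.x and later) indicating flash backing.
        is_flash = isinstance(ds.info, vim.host.VmfsDatastoreInfo) and ds.info.vmfs.ssd
        print("%-25s %8.1f GB free   flash=%s"
              % (s.name, s.freeSpace / 1024.0**3, bool(is_flash)))
finally:
    Disconnect(si)
```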

VMware has also recognized this issue with its VMware View Rapid Desktop Program, which focuses upon bundles of VMware software with hardware that contains specific features that allow VDI to perform well at scale. The various hardware offerings are profiled in detail in Simon Bramfitt’s post, “Appliance Makers Simplify VDI Adoption.” Suffice it to say that the VMware program and the specialized hardware offerings from AMAX, Cisco, Computex, Intel, Nutanix, Pivot3, Pogo, Presidio, RackTop, and Storcom would not be necessary and would not be commercially successful if VDI were just another workload that could be run on standard back-end hardware. Further evidence exists in the form of monitoring tools with VDI-specific features from vendors like LiquidWare Labs and Xangati. Such tools would not be necessary if VDI did not have unique characteristics that require unique monitoring functionality.

Reconciling the Software Defined Data Center and VDI

So given that today’s embryonic software-defined data center (the one that manages CPU and memory but not yet networking or storage) cannot run VDI on standard hardware, how will this likely evolve in the future? Here are the likely scenarios:

  1. The software-defined data center of the future will, in fact, not be composed of just one set of hardware infrastructure. In order for it to work, it will have to include specialized pools of hardware that can perform certain specialized tasks demanded by certain workloads. If this is how it turns out, then any data center that wants to run VDI at scale is going to have to have a specialized pool of hardware that has the things that VDI uniquely needs (things like tiers of flash and intelligent management of that flash). The SDDC is going to have to be smart enough to automatically place VDI workloads in the resource pools that have the VDI-focused features (a placement sketch follows this list). The problem with this scenario is that it dilutes one of the promises of the SDDC, which is that the software is good enough to allow workloads with widely varying requirements to run on one set of hardware.
  2. Somehow adding software-defined networking and software-defined storage (and therefore making vSphere into a full SDDC solution) will allow vSphere to run VDI on standard commodity hardware. It is hard to see how adding network configuration to vSphere is going to help this. It is equally hard to see how adding control of today’s storage subsystems to vSphere is going to change much. In other words, adding networking and storage management features to vSphere is not going to magically allow vSphere to run well, on such infrastructure, the VDI workloads that do not run well on commodity infrastructure today.
  3. The features of VDI-competent hardware become pervasively included in commodity hardware. The most likely scenario here is that flash memory becomes much more broadly implemented and supported by various hardware vendors, and more robustly managed as a storage layer within vSphere itself. If this is sufficient to allow VDI to run at scale, and the definition of “commodity hardware” comes to mean this new class of servers and storage devices that includes this level of intelligent flash management, then the promise of the SDDC will be kept. But that promise will only be kept if you replace all of the hardware that you own with this new class of hardware.
  4. The last scenario is that this may turn out not to matter, because in the global scheme of things, VDI may turn out not to matter. In a world of smart phones and tablets, with primarily locally installed applications and locally resident data, VDI may turn out to be such a niche environment that the failure of the SDDC to manage it may be inconsequential to the success or failure of the SDDC.
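
The first scenario above presumes a scheduler that knows which resource pools carry the VDI-focused features and steers workloads accordingly. The following is a minimal, purely illustrative Python model of that placement decision; the pool attributes, the workload profile, and the selection rule are invented for the example and do not represent actual vSphere DRS behavior.

```python
# Purely illustrative model of workload-aware placement across heterogeneous
# resource pools. The attributes and rules are hypothetical, not vSphere DRS logic.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ResourcePool:
    name: str
    has_flash_tier: bool    # e.g. SSD-backed datastores that absorb VDI boot storms
    iops_headroom: int      # remaining random IOPS the pool can absorb
    vcpu_headroom: int

@dataclass
class Workload:
    name: str
    kind: str               # "vdi", "web", "batch", ...
    iops_demand: int
    vcpu_demand: int

def place(workload: Workload, pools: List[ResourcePool]) -> Optional[ResourcePool]:
    """Return a pool that satisfies the workload's requirements, or None."""
    candidates = [
        p for p in pools
        if p.iops_headroom >= workload.iops_demand
        and p.vcpu_headroom >= workload.vcpu_demand
        # VDI is steered to pools that have a flash tier; other workloads are not.
        and (p.has_flash_tier or workload.kind != "vdi")
    ]
    # Keep scarce flash pools free for the workloads that actually need them.
    candidates.sort(key=lambda p: p.has_flash_tier and workload.kind != "vdi")
    return candidates[0] if candidates else None

pools = [
    ResourcePool("general", has_flash_tier=False, iops_headroom=20000, vcpu_headroom=256),
    ResourcePool("vdi-flash", has_flash_tier=True, iops_headroom=150000, vcpu_headroom=512),
]
chosen = place(Workload("desktop-pool-1", "vdi", iops_demand=60000, vcpu_demand=200), pools)
print(chosen.name if chosen else "no suitable pool")   # -> vdi-flash
```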

Summary

The software defined data center promised to run all current and future workloads on commodity hardware. However, VDI currently requires unique hardware to perform at scale, a need met by vendors like Astute Networks, Tintri, Nutanix, Pivot3, and V3. VMware is also addressing the unique requirements of VDI adoption with the VMware View Rapid Desktop Program. Therefore, VDI currently constitutes an exception to the every-workload promise made by the SDDC.


Bernd Harzog is the Analyst at The Virtualization Practice for Performance and Capacity Management and IT as a Service (Private Cloud).

Bernd is also the CEO and founder of APM Experts, a company that provides strategic marketing services to vendors in the virtualization performance management and application performance management markets.

Prior to these two companies, Bernd was the CEO of RTO Software, the VP Products at Netuitive, a General Manager at Xcellenet, and Research Director for Systems Software at Gartner Group. Bernd has an MBA in Marketing from the University of Chicago.


3 comments for “The Software Defined Data Center and VDI”

  1. October 2, 2012 at 3:09 PM

    I think the best way to look at this is to accept that unless you are Google, Facebook, or Amazon, your data center will be heterogeneous and the extent of the software defined data center will be limited by the capabilities of the hardware.

    Inside the data center there will be multiple service and availability zones with different capabilities: one of them may be associated with high I/O loading, another with general purpose loads, another with hardware fault tolerance, depending on both the hardware capability and high-level functional needs. Any SDDC management infrastructure must accommodate these separate zones and work with them rather than attempting to impose services on them.

    In many respects, though, the more advanced VDI implementations are further down the road towards SDDC than the majority of virtualized workloads. VDI workloads are better suited to delivery on a software defined data center because of their number (usually many hundreds), their consistency, and their relatively low resource utilization. Moving this type of workload from device to device (within certain boundaries) is very easy to achieve, and many of the aspects of an SDDC, such as network and storage assignment, are already performed by the VDI management layer. The main difference between running VDI on a specialist appliance and on a standard volume server is that the appliance eliminates the uncertainty around system sizing, and so provides a much-needed low-risk path towards implementation.

  2. October 4, 2012 at 6:28 PM

    “In other words, adding networking and storage management features to vSphere is not going to magically allow vSphere to run well, on such infrastructure, the VDI workloads that do not run well on commodity infrastructure today.”

    In order to get to the future state of the SDDC, a new software approach is required that is purpose-built to remove the inefficiency, scalability, and performance challenges inherent in the current native file systems (VMFS or NTFS) and the way they handle storage for VMs.

    If a new file system can be added to these platforms that is flexible and scalable enough to handle any virtualized workload on existing block storage, and that brings per-VM management to vSphere and System Center 2012, then the power of software defined storage can be realized on existing storage architectures. New proprietary hardware and dedicated appliances are not the only way to get optimized VDI performance.

    VDI is especially challenging for existing storage because of its severe IO profile. Virsto was purpose-built to handle IO efficiently, especially writes, while bringing to the hypervisor very space-efficient, high-performance, per-VM snapshots and clones, integrated as a native management option within existing vCenter and VMM workflows, so there is no new software layer to learn or manage.

    When you start with a clean sheet of paper and develop a new file system from scratch, designed to make storage as efficient in virtualized environments as servers are, software defined storage is no longer a thing of the future. Virsto is shipping it today as 100% software.
