In a GigaOm article, Steve Herrod, the CTO of VMware, explained that software defined data centers are "generation-proof": they collapse disparate systems into a single pool built atop commodity x86 processors and other gear. Software provides everything that is needed to adapt the data center to new situations and new applications, and to manage everything from storage to switches to security. Although VMware will always work with hardware partners, Herrod said, "If you're a company building very specialized hardware … you're probably not going to love this message."
The Promise of the Software Defined Data Center
If we deconstruct Steve’s message, several very important promises on behalf of the software defined data center (SDDC) pop out:
- The only data center hardware architecture you are going to need is a set of commodity x86 processors and their supporting gear. The SDDC will be smart enough to configure the hardware and allocate its resources so that each workload executes properly.
- SDDCs are “generation-proof”. That means that they can run not only today’s workloads, but any workloads that may show up in the future and that may have different characteristics than current workloads. The SDDC will be able to do this since you will be able to configure the entire data center in software (VMware’s software) to meet the needs of this new workload.
- Obviously, if the SDDC can run today's workloads and tomorrow's workloads, it must be able to run all of today's workloads.
- Finally, "you will not need specialized hardware." Workloads with widely varying characteristics and needs will all run successfully on one set of hardware, with the SDDC managing everything for the correct result.
What About VDI in Today’s Data Centers?
The last two promises in the list above are interesting when it comes to running VDI workloads in today's data centers. It turns out that the characteristics of VDI workloads are such that today's vSphere cannot run them very successfully on today's commodity hardware. Several very interesting new companies have brought to market new hardware offerings that often include specialized software as well:
- Astute Networks ViSX storage appliances allow VMware administrators to add a tier of flash storage to their environment simply by plugging in an appliance and configuring it in vCenter. This can boost tier 1 and VDI application performance by a factor of up to 10x, achieved by leveraging both flash storage and improved TCP and iSCSI protocol processing.
- Tintri VMstore seeks to reinvent storage by integrating performance management into the storage appliance, and by allowing storage to be managed directly by the VMware administrator. For VDI implementations, the relevant files are automatically placed in flash storage, dramatically improving performance and eliminating boot storms.
- Nutanix Compute Cluster collapses CPU, memory and storage into a single unit that includes two layers of flash and that can scale out horizontally to build a vSphere cluster. Tiers of local storage in each compute node assure good performance and the use of flash especially assists with the elimination of VDI boot storms.
- Pivot3 vSTAC collapses CPU, memory, two tiers of flash storage and local disk into scale out nodes that leverage local disk and flash performance.
- V3 Appliance combines CPU, memory, and SSD storage into a scale-out appliance that offloads the heavy VDI storage workloads to the local flash in the appliance, while still leveraging the enterprise storage subsystem for its remaining storage needs.
VMware has also recognized this issue with its VMware View Rapid Desktop Program, which focuses upon bundles of VMware software with hardware that contains specific features that allow VDI to perform well at scale. The various hardware offerings are profiled in detail in Simon Bramfitt's post, "Appliance Makers Simplify VDI Adoption." Suffice it to say that the VMware program and specialized hardware offerings from AMAX, Cisco, Computex, Intel, Nutanix, Pivot3, Pogo, Presidio, RackTop, and Storcom would not be necessary, and would not be commercially successful, if VDI were just another workload that could be run on standard back-end hardware. Further evidence exists in the form of monitoring tools with VDI-unique features from vendors like LiquidWare Labs and Xangati. Such tools would not be necessary if VDI did not have certain unique characteristics that required unique monitoring functionality.
Reconciling the Software Defined Data Center and VDI
So given that today's embryonic software-defined data center (the one that manages CPU and memory but not yet networking or storage) cannot run VDI on standard hardware, how will this likely evolve in the future? Here are the likely scenarios:
- The software-defined data center of the future will, in fact, not be comprised of just one set of hardware infrastructure. In order for it to work, it will have to include specialized pools of hardware that can perform certain specialized tasks demanded by certain workloads. If this is how it turns out, then any data center that wants to run VDI at scale is going to have to have a specialized pool of hardware with the things that VDI uniquely needs (things like tiers of flash and intelligent management of that flash). The SDDC is going to have to be smart enough to automatically place VDI workloads in the resource pools that have the VDI-focused features. The problem with this scenario is that it dilutes one of the promises of the SDDC, which is that the software is good enough to allow workloads with widely varying requirements to run on one set of hardware.
- Somehow adding software-defined networking and software-defined storage (and therefore making vSphere into a full SDDC solution) will allow vSphere to run VDI on standard commodity hardware. It is hard to see how adding network configuration to vSphere is going to help this. It is equally hard to see how adding control of today's storage subsystems to vSphere is going to change much, either. In other words, adding networking and storage management features to vSphere is not going to magically allow vSphere to run well the VDI workloads that do not run well on commodity infrastructure today.
- The features of VDI-competent hardware become pervasively deployed in commodity hardware. The most likely scenario here is that flash memory becomes much more broadly implemented and supported by various hardware vendors, and more robustly managed as a storage layer within vSphere itself. If this is sufficient to allow VDI to run at scale, and the definition of "commodity hardware" comes to mean this new class of servers and storage devices that include this new level of intelligent flash management, then the promise of the SDDC will be kept. But that promise will only be kept if you replace all of the hardware that you own with this new class of hardware.
- The last scenario is that this may turn out not to matter, because in the global scheme of things, VDI may turn out not to matter. In a world of smart phones and tablets, with primarily locally installed applications and locally resident data, VDI may turn out to be such a niche environment that the failure of the SDDC to manage it may be inconsequential to the success or failure of the SDDC.
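The first scenario above amounts to capability matching: the SDDC would have to know which resource pools carry VDI-focused features (such as a flash tier) and place each workload accordingly. A minimal sketch of that idea follows; all names here are invented for illustration and do not correspond to any real vSphere or DRS API:

```python
# Hypothetical sketch of capability-aware workload placement in an SDDC.
# The class and function names are invented; they illustrate the idea of
# matching a workload's requirements against a hardware pool's features.

from dataclasses import dataclass

@dataclass
class ResourcePool:
    name: str
    features: set   # e.g. {"flash-tier", "local-disk"}
    capacity: int   # free VM slots, greatly simplified

@dataclass
class Workload:
    name: str
    required_features: set

def place(workload, pools):
    """Return the first pool whose features satisfy the workload, or None."""
    for pool in pools:
        if workload.required_features <= pool.features and pool.capacity > 0:
            pool.capacity -= 1
            return pool
    return None  # no suitable pool: this is where the SDDC promise breaks down

pools = [
    ResourcePool("commodity", {"local-disk"}, capacity=100),
    ResourcePool("vdi-pool", {"local-disk", "flash-tier"}, capacity=20),
]

vdi_desktop = Workload("win7-desktop", {"flash-tier"})
batch_job = Workload("nightly-etl", {"local-disk"})

print(place(vdi_desktop, pools).name)  # → vdi-pool
print(place(batch_job, pools).name)    # → commodity
```

The point of the sketch is the dilution it makes visible: as soon as placement depends on pool-specific features, the data center is no longer "one set of hardware" but a collection of specialized pools.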
The software defined data center promised to run all current and future workloads on commodity hardware. However, VDI currently requires unique hardware to perform at scale, a need met by a set of vendors like Astute Networks, Tintri, Nutanix, Pivot3, and V3. VMware is also addressing the unique requirements of VDI adoption with the VMware View Rapid Desktop Program. Therefore, VDI currently constitutes an exception to the "every workload" promise made by the SDDC.