In part one of Cost to Build a New Virtualized Data Center, we discussed the basic software costs for a virtualized data center based on VMware vSphere 6.0, Citrix XenServer 6.5, Microsoft Hyper-V 2012 R2 and 2016, and Red Hat. If you missed that, please click here to review before continuing.
Part 2a of this series concentrated on Hyper-V 2012 R2 and 2016 as well as vSphere 6.0 regarding the addition of a local distributed storage solution: DataCore Virtual SAN in the case of Hyper-V 2012 R2, Storage Spaces Direct with Hyper-V 2016, and VSAN 6.2 with vSphere 6.0. You can review that article here.
This article continues from the second part of the series and finishes the addition of a local distributed storage stack, covering XenServer and RHEV. Once again, our compute unit of choice is the Dell R730xd with two 10-core CPUs and 256 GB of RAM. As stated in the previous post, we need to add some local storage in each node. These compute nodes can, depending on the configuration choices made, take up to twenty-four disk drives. For the purposes of this article, we are assuming that data locality is required for performance and that there is a need for an all-flash array. We chose two 400 GB SLC drives for cache and four 800 GB MLC drives for capacity, giving a total raw capacity per node of 4 TB. There may be further hardware requirements depending on the solution chosen for each hypervisor, but those will be called out in the relevant vendor sections.
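The per-node and cluster raw-capacity figures above can be sanity-checked with a few lines of arithmetic. This is just a sketch; the drive counts and sizes are the assumptions stated in this article:

```python
# Per-node storage bill of materials assumed in this article.
CACHE_DRIVES = 2        # 400 GB SLC drives used as the cache tier
CACHE_SIZE_GB = 400
CAPACITY_DRIVES = 4     # 800 GB MLC drives used as the capacity tier
CAPACITY_SIZE_GB = 800
NODES = 10              # the ten-node compute cluster

raw_per_node_gb = (CACHE_DRIVES * CACHE_SIZE_GB
                   + CAPACITY_DRIVES * CAPACITY_SIZE_GB)

print(raw_per_node_gb / 1000)          # 4.0 TB raw per node
print(NODES * raw_per_node_gb / 1000)  # 40.0 TB raw across the cluster
```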
Data Center Based on Citrix XenServer
With XenServer, we hit our first issue: there is currently no integrated method of presenting local storage as a resilient VM host target. Thanks to XenServer's Linux core, services such as GlusterFS or Ceph could make viable alternatives; however, it must be said that these options are not supported. The only alternative is DataCore's Virtual SAN, although this is not a truly integrated solution in the way that VSAN is on vSphere or Storage Spaces Direct is on Hyper-V 2016. Nevertheless, it meets our stated requirements. Citrix did have a potential solution here in Melio, from its Sanbolic acquisition, but that product was discontinued on January 15, 2016.
GlusterFS or Ceph would be perfect, but due to the lack of support for them when installed locally on XenServer, plus a difficult installation, they will have to be thrown out the window. To be fair, even if they were a supported option, we at The Virtualization Practice feel that getting these to successfully work for the majority of companies would be too much of a science experiment to be cost effective. Further, the performance of the resulting cluster may not be good enough for production workloads.
This leaves the VSA approach as the only viable option for creating a datastore based on local storage. As HPE’s StoreVirtual is not supported when running on XenServer, this leaves DataCore.
Unlike with Hyper-V, under XenServer, DataCore's Virtual SAN cannot be installed in Dom0 on the host servers; it has to be installed in VMs running Windows Server on each XenServer host, with those VMs providing local storage to the ten-node cluster. This adds a slight performance overhead and also reduces the amount of storage available to run your virtual machines: you will need to deduct the footprint of each virtual appliance from your usable capacity.
From the perspective of DataCore, the costs were outlined in the Windows 2012 R2 Hyper-V section, but they are shown below for completeness:
| Product | Description | Number | Unit Cost (USD) | Subtotal (USD) |
|---|---|---|---|---|
| DataCore Virtual SAN | Perpetual server node license for 4 TB | 10 | $2,000 | $20,000 |
| DataCore Service and Support | 3 years service and support | 10 | $1,000 | $10,000 |
This takes the overall cost for a XenServer deployment to $61,382. That said, this solution is nowhere near as performant or flexible as the Hyper-V or vSphere options, due to the use of a VSA to manage the storage traffic.
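As a sketch of how that running total breaks down: the figures below take the DataCore line items from the table above, and infer the base XenServer software cost from part one by subtraction. The derived base figure is an inference, not a quoted price:

```python
# DataCore line items from the table above.
datacore_license = 10 * 2_000   # ten perpetual 4 TB node licenses
datacore_support = 10 * 1_000   # ten x 3 years service and support
storage_total = datacore_license + datacore_support

# Inferred base software cost for XenServer from part one of the series.
xenserver_base = 61_382 - storage_total

print(storage_total)                   # 30000
print(xenserver_base)                  # 31382
print(xenserver_base + storage_total)  # 61382
```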
Data Center Based on RHEV
Red Hat Enterprise Virtualization (RHEV) is Red Hat's KVM-based platform. Here, we have a number of options for providing distributed storage from locally attached disk. The first potential solution we will look at is DRBD. This looks promising: as you can see from the diagram below, it is a kernel-level driver that allows local storage to be presented as a shared volume.
However, DRBD won't give us data locality, as it only achieves active/passive HA; for the Windows folks out there, this parallels a failover cluster. As this fails our performance requirement, it must be discarded.
The next option is GlusterFS. This also looks promising, and it has the benefit of being “owned” by Red Hat, too. On paper, it could fulfill all our requirements. However, due to limitations in Red Hat's version of oVirt (the stack that manages KVM on a Red Hat host), natively installing Gluster on a RHEV host is currently blocked. There is a workaround, but it puts you in an unsupported position with Red Hat. So, once again, we have to discard a solution.
Red Hat is looking at this as a supported option for version 3.6, as there is an RFC in place, but as of yet 3.6 is still in beta, and the RFC has not been actioned.
Therefore, here we are again with the VSA options, these being DataCore’s Virtual SAN, HPE StoreVirtual, and StorMagic SvSAN. Unfortunately, StorMagic is only supported in vSphere and Hyper-V, so we are back to the old favorites: StoreVirtual and DataCore Virtual SAN. Both are viable solutions with benefits and constraints: for example, DataCore can only be deployed in HA pairs, thereby cutting down potential storage capacity by 50%.
The HPE StoreVirtual appliance, by contrast, can be deployed to each node in our ten-node cluster in a RAIN 5 or 6 deployment scenario, thereby increasing the potential usable capacity of the overall storage.
Licensing for StoreVirtual is interesting. Unlike the standard capacity-based license driven by raw device storage, HPE bases its license on what is visible to the hypervisor. For example, 20 TB of raw storage consisting of ten 2 TB drives configured as a RAID 1 mirror will only require a single 10 TB license. This differs from the vast majority of VSA providers, who base their licenses on raw capacity; we think HPE's is a much fairer way to license.
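HPE's visible-capacity licensing can be illustrated with the example above. A sketch using the article's numbers:

```python
# Ten 2 TB drives configured as a RAID 1 mirror.
drives = 10
drive_tb = 2

raw_tb = drives * drive_tb   # 20 TB raw: what most VSA vendors license on
visible_tb = raw_tb // 2     # RAID 1 mirroring halves what the hypervisor sees

print(raw_tb)      # 20
print(visible_tb)  # 10 -> a single 10 TB StoreVirtual license suffices
```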
The raw cost to add an HPE StoreVirtual VSA-based solution to our RHEV-based environment is shown below:
| Product | Description | Number | Unit Cost (USD) | Subtotal (USD) |
|---|---|---|---|---|
| HPE StoreVirtual VSA | Perpetual server node license for 10 TB | 10 | $3,500 | $35,000 |
| Service and support | 3 years service and support (cost included above) | 10 | $0 | $0 |
This brings the running total for RHEV to $82,970. However, you will still need to purchase Windows licenses to run Windows-based virtual machines, which will add to the final cost.
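The RHEV running total can be broken down the same way. The base figure below is inferred by subtracting the StoreVirtual licensing from the $82,970 total, so treat it as an estimate rather than a quoted price:

```python
# HPE StoreVirtual line items from the table above.
storevirtual_licenses = 10 * 3_500  # ten perpetual 10 TB VSA licenses
support = 0                         # 3-year support is bundled into the license cost

# Inferred base software cost for RHEV from part one of the series.
rhev_base = 82_970 - (storevirtual_licenses + support)

print(rhev_base)                                    # 47970
print(rhev_base + storevirtual_licenses + support)  # 82970
```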
With the 2016 release, Hyper-V will be able to create shared storage out of local storage with Storage Spaces Direct. However, this is a 1.0 feature, and its performance and stability are yet to be proven in production. Red Hat may or may not have a working, supported coupling of GlusterFS and RHEV when it releases version 3.6, but that too will be an untested 1.0 feature. XenServer, meanwhile, is becoming increasingly irrelevant in the market. Citrix's decision to open-source the entire product has effectively killed the commercial 6.5 release, and the product is so far behind VMware and Hyper-V that nothing less than a miracle will allow it to recover. We feel that it may become the next cut in Elliott Management's cut-and-burn of the once-eminent Citrix.
Now, once again, this article is entirely focused on cost as a differentiating factor. No consideration is given to performance, any differentials in features, or the like.