Going to vSphere — The Need to Upgrade

I have been preparing my virtual environment for a VMware vSphere upgrade. Specifically, I have been going over my existing hardware with an eye toward running all aspects of vSphere, including VMware Fault Tolerance (FT), NPIV, the Cisco Nexus 1000V, and, well, everything.

But first, a little about my environment. It is not all that big. Currently it has two DL380 G5s, each with dual quad-core E5345 CPUs, 16GB of memory, two 146GB SAS drives, dual-port Intel GigE adapters, and Emulex dual-port FC-HBAs. In addition, there are two DL380 G3s, both with 6GB of memory and six 146GB drives. One of the DL380 G3s is my VMware vCenter Server and backup server attached to a USB DISC Blu-Safe, which I blogged about previously. The other DL380 G3 is a test box for the virtual environment.

So what will going to vSphere entail?

First, the DL380 G3s will not work with VMware vSphere, so they either need to be replaced or retired from service. I already retired one ML370 G2, one DL380 G2, and four DL360 G3s by donating them to Joseph P. Keefe Technical School, which offers Cisco courses and would like to incorporate some aspects of virtualization into them. With the hardware I donated, they have a ready-made VI3 environment. They also gained my old SAN (MSA1000) and Compaq 2/8 EL fabric switches. Retirement of the DL380 G3s will once more mean donating to this worthy local institution.

But now I have a concern about virtualizing my backup server. In order for that to work, I will either need to invest in a USB-over-IP device or wait until vSphere properly supports USB passthru for my new DISC Blu-Safe. More hardware, or wait? That seems to be the biggest question.

The other item that comes up is that my E5345 CPUs will not allow me to run VMware FT; only certain processor families are FT-capable. So now I need to perform one more upgrade, to E5450 CPUs.
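The FT decision above boils down to checking a CPU model against VMware's list of FT-capable processors. Here is a minimal sketch of that check; the family-prefix list is an illustrative assumption, not VMware's actual HCL, so consult VMware's SiteSurvey tool or compatibility guide for the real answer.

```python
# Illustrative sketch only: the prefix list below is an assumption,
# not VMware's official FT compatibility list.
FT_CAPABLE_PREFIXES = ("E54", "X54", "L54", "E55", "X55", "L55")

def ft_capable(cpu_model: str) -> bool:
    """Return True if the Xeon model string matches an assumed FT-capable family."""
    return cpu_model.strip().upper().startswith(FT_CAPABLE_PREFIXES)

print(ft_capable("E5345"))  # 65 nm Clovertown -> False
print(ft_capable("E5450"))  # 45 nm Harpertown -> True
```

Under these assumptions, the sketch shows exactly why my E5345s fail the test while E5450s pass: they sit in different processor families even though both fit the same socket.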

With this upgrade I will also want to use NPIV, which means going with 4Gb FC-HBAs instead of the 2Gb ones I have now. Plus, I really need more networking to give full redundancy as well as run the VMware FT network. This also implies I need more PCI-e slots, as it is easier to find PCI-e devices than PCI-X, so I will also have to replace my PCI-X risers with PCI-e risers.

What is the end result of all this?

  • Replace 2 DL380 G3s with a single DL380 G5
  • Implement USB over IP or wait for vSphere ESX 4 to support passthru USB?
  • Upgrade existing CPUs to E5450s
  • Upgrade existing 2Gb FC-HBAs to 4Gb FC-HBAs
  • Add more network ports by upgrading the existing dual-port adapters to quad-port adapters
  • Replace existing PCI-X Riser cages with PCI-e Riser Cages

Going to vSphere is not going to be inexpensive from a hardware perspective, if you want to run everything. In addition, I may need to wait until functionality that was promised in the .0 release is actually available. Now extrapolate this small environment to one that is 10, 30, or 100 times the size, and the cost of hardware that runs vSphere could require rethinking the entire architecture.
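To make the extrapolation concrete, here is a back-of-envelope sketch of how the per-host upgrade cost scales. Every price below is a placeholder assumption for illustration, not a quote, and the part names mirror the upgrade list above.

```python
# Placeholder per-host upgrade costs (assumptions, not real quotes).
PER_HOST_UPGRADES = {
    "E5450 CPU pair": 800,
    "4Gb FC-HBA": 600,
    "quad-port NIC": 400,
    "PCI-e riser cage": 150,
}

per_host = sum(PER_HOST_UPGRADES.values())

# My two hosts today, then 10x, 30x, and 100x that environment.
for hosts in (2, 20, 60, 200):
    print(f"{hosts:4d} hosts: ${hosts * per_host:,}")
```

Even with modest placeholder numbers, the point stands: an upgrade that is an annoyance at 2 hosts becomes an architecture question at 200.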

Will this translate instead into going to Hyper-V or Citrix XenServer? Perhaps, but at a minimum I would have to upgrade the DL380 G3s as well to run Hyper-V or the latest versions of Citrix XenServer effectively. I am also not sure how well the old PCI-X hardware is supported in those environments. Yet I know it is supported with VMware vSphere.

If I did not want to implement VMware FT, the CPU and network upgrades would NOT be needed.

If I did not want to implement NPIV, the fibre upgrades would not be necessary.

However, even if I went with Hyper-V or Xen, I would still need a 64-bit server like the DL380 G5 instead of the G3s.

This is why I always say: plan, then plan some more, to ensure a good upgrade or implementation of virtualization.
