They say history tends to repeat itself; I am going to take that statement in a different direction and apply it to technology, because virtualization practices and tendencies tend to flip-flop over time. That is a pretty general statement in itself, but I recently saw a video on YouTube, "16 Core Processor: Upgrade from AMD Opteron 6100 Series to Upcoming 'Interlagos'", and it got me thinking about one of the very first questions presented to virtualization architects when planning and designing a new deployment, for as long as I have been working with virtualization technology. To scale up or to scale out: that is the question, and the prevailing philosophy has flip-flopped back and forth as the technology itself has improved and its functionality has increased.
When I first started in virtualization, processors were only single core, and vCenter was not yet an option for managing and controlling the virtual infrastructure. At the start, any server on the hardware compatibility list (HCL) was good enough to get going; then VMware came out with Symmetric Multiprocessing (SMP) virtual machines, with one or two virtual CPUs. This was great news, and it changed the design thought process: the new idea was to get the biggest host server, with as many processors and as much memory as you could get and/or afford.
Technology then advanced with the introduction of multi-core processors, and suddenly you could buy smaller boxes that still had the processing power of the bigger hosts, but in a much smaller and cheaper package. As the technology changed, the idea of scaling out seemed to overtake the idea of scaling up, at least until the next advancement arrived from VMware or the CPU manufacturers, creating a see-saw effect back and forth between the two approaches.
The see-saw has gone back and forth over the years, and if we fast-forward to today, a lot of exciting technologies have been added to the mix. The introduction of blade servers a few years back was one of those key technology moments that helped redefine the future of server computing. Now, blade technology has taken another big step with the release of Cisco's Unified Computing System (UCS). UCS has turned blade technology into the first completely stateless computing platform; it currently holds more memory than any other blade system and gives you the ability to run two quad-core processors in the half-height blades and four quad-core processors in the full-height blade. Intel has invested time and money in the UCS platform and will remain the only processor vendor available in the UCS chassis, but as much as things have flip-flopped on the scale-up versus scale-out question, the competition between AMD and Intel has been an exciting race, with the lead passing back and forth between the two companies. With the video of AMD's sixteen-core processor making its way around the Internet, it is a safe bet that Intel's equivalent, or better, is not far behind.
Where do you think we are in the scale-up versus scale-out question? In my opinion, scale-out is the best way to go. As virtualization has been accepted as the way forward in the data center, and as more mission-critical and beefier servers are virtualized, having 32 or 64 cores available per host becomes more and more important, so that resources are ready for the next advancement that comes into play. Also supporting the scale-out opinion is VMware High Availability (HA), which is worth considering when deciding the number of virtual machines per host. In my years of designing systems, given the choice, I want HA to be able to recover from a host failure in less than five minutes, measured from the time the host goes down until all the virtual machines that were running on it have been restarted and fully booted. When you have too many virtual machines per host, the recovery time during a host failure, and the boot storm that comes with it, tends to be dramatic and extreme.
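To make that HA argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption (cluster size, per-VM boot time, how many VMs the surviving hosts can boot concurrently), not a measurement or a VMware formula; the point is only to show how VM density per host stretches recovery time.

```python
# Rough comparison of host-failure recovery for scale-up vs. scale-out
# designs. All inputs are hypothetical assumptions for illustration.

def recovery_estimate(total_vms, hosts, restart_secs_per_vm, parallel_restarts):
    """Estimate time for HA to restart the VMs from one failed host.

    total_vms           -- VMs in the cluster
    hosts               -- number of hosts in the cluster
    restart_secs_per_vm -- assumed time for one VM to restart and fully boot
    parallel_restarts   -- assumed VMs the surviving hosts can boot at once
    """
    vms_per_host = total_vms / hosts
    # Restarts happen in waves, limited by how many VMs can boot at once.
    waves = -(-vms_per_host // parallel_restarts)  # ceiling division
    return vms_per_host, waves * restart_secs_per_vm

# Same 160-VM cluster, two designs (hypothetical numbers):
for label, hosts in (("scale-up  (2 big hosts)", 2),
                     ("scale-out (8 small hosts)", 8)):
    vms, secs = recovery_estimate(160, hosts,
                                  restart_secs_per_vm=90,
                                  parallel_restarts=16)
    print(f"{label}: {vms:.0f} VMs per host, ~{secs / 60:.1f} min to recover")
```

With these made-up inputs, the scale-up design carries 80 VMs per host and takes roughly 7.5 minutes to recover from a single host failure, while the scale-out design carries 20 VMs per host and recovers in about 3 minutes, inside the five-minute target. The exact numbers will vary wildly by environment; the shape of the trade-off is what matters.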
Those are my thoughts on the scale-up versus scale-out question; now let's hear your ideas, shared with the class.
When I first started with virtualization, the only option you had at the time was single-core processors in the hosts. Scale up or scale out was the hotly debated topic when designing your infrastructure. On one side of the coin, the idea was to scale up: it was best to get a few of the biggest servers you could find and load them up with as much memory and as many processors as you could fit in the box. The end result was some very expensive servers, each able to run a lot of virtual machines for its time. The other side of the coin presented the idea that it was better to scale out, with more, smaller servers making up the cluster. I have worked in both types of environments and attitudes over the years, and personally, I aligned myself with the scale-out philosophy. The simple reason was host failure. When you have sixty to eighty virtual machines per host and lose that host, that is a lot of eggs in one basket, and it took some time to recover. With more, smaller servers, the shock of losing a host was not as severe, because fewer virtual machines were running on any single host and recovery took less time. This was during the time before vCenter, vMotion, HA, and DRS, when it was just you and the VMware ESX hosts.