VMworld 2012: A Contrarian Position on VMware’s New Licensing Model

Given the level of applause that greeted the announcement of VMware’s new pricing model, I know this will open me up to criticism, but was the old VMware licensing model really all that bad? It certainly wasn’t perfect, and there’s an awful lot to like about the new model, but was the old license pricing model so bad that it could only be fixed by ripping it up and replacing it with something very different?

To answer that, a better question to ask might be: how sustainable is this new model? Socket-based licensing doesn’t measure what VMware delivers – the ability to run VMs, or more completely the ability to run workloads, at lower cost, more effectively, more efficiently, and with greater benefit to business operations than bare metal systems (not an entirely accurate depiction of VMware’s major worth, but it will do for this argument). Neither does memory use, for that matter, but as processors get faster and gain more cores, the work done per socket increases. For a given workload, the number of sockets needed will therefore decrease over time, and with it VMware’s per-workload income. That will leave VMware with a shrinking revenue stream, which it will have to address either through continued growth or by increasing the unit price of each license to compensate. VMware reports that over 60% of server workloads are now virtualized, meaning there are only a few more years of growth left in this core part of its business. As cloud services increase in popularity, the number of ‘data center’ vCloud Suite licenses bought will decline and VMware will have to compete with the larger cloud ecosystem. The importance of VMware joining OpenStack can’t be overlooked here.

Consider vRAM as the basis of an alternative licensing model: if nothing else, it offers the benefit of stability. In contrast to sockets, a new generation of server memory doesn’t store more information than the generation before it. It may be a little faster or consume less power, but 1 GB of memory will always hold 1 GB of data – that doesn’t change. A single application might need more memory or processing power from year to year, release to release, as needs change, and so cost more to host, but that is true whether the server is physical or virtual. The flaw with the vRAM model, then, wasn’t that it was based on charging for vRAM, but that it resulted in too many customers paying too much more for the same thing. Put simply, it was wrongly priced. VMware could have introduced vRAM-based pricing without a murmur if it had assessed the impact of the change more closely before going ahead. It was also needlessly complex, looking more like a cable TV bill than a software product. No one ever praises their cable company for clear, easy-to-understand billing, and in the same way a licensing scheme based on virtual memory entitlements, core entitlements, and multiple management products was never going to be popular. Faced with the choice of fixing a bad pricing model or throwing it away completely, VMware took the easy way out and killed vRAM, opening the way for something new.

The new pricing model is breathtaking in its simplicity and a model for the industry today. The question is not “is it good?” but “is it sustainable?” Can VMware stick with this model without having to increase licensing costs over time? That’s hard to say, but I doubt it.
