The use of the cloud is governed not so much by technology as by cost: the cost of on-premises management, support, expertise, and environment vs. the cost of cloud services and outsourced expertise, management, etc. The short-term cost differential must be large enough for the move to remain worthwhile in the long term. There are plenty of cloud calculators out there. Given that Apple, Dropbox, and others have switched clouds or moved to their own data centers, what does this tell us about the future of the cloud?
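That short-term-vs-long-term trade-off can be sketched as a back-of-the-envelope break-even calculation. The figures below are illustrative assumptions, not vendor pricing:

```python
def months_to_break_even(on_prem_capex, on_prem_monthly_opex, cloud_monthly_cost):
    """Return the month at which cumulative on-prem cost drops below
    cumulative cloud cost, or None if cloud stays cheaper for 10 years."""
    for month in range(1, 121):
        on_prem_total = on_prem_capex + on_prem_monthly_opex * month
        cloud_total = cloud_monthly_cost * month
        if on_prem_total < cloud_total:
            return month
    return None

# Illustrative numbers: a $300k hardware refresh plus $10k/month in staff
# and power, vs. $25k/month in cloud service fees.
print(months_to_break_even(300_000, 10_000, 25_000))  # → 21
```

With these made-up numbers the on-premises option pays for itself in under two years; shrink the monthly differential and the break-even point recedes past any sensible planning horizon, which is exactly why the calculators disagree so much.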
SDDC & Hybrid Cloud
Cloud computing has evolved beyond focusing only on how to construct, secure, manage, monitor, and utilize IaaS, PaaS, and SaaS clouds. As the paradigm matures, it is moving from a pure resource management paradigm to a combined data and resource management paradigm.
SDDC is the next evolution in on-site data center technology. It has taken the knowledge gained from the server virtualization revolution and blended it with software-defined storage and networking to create a data center defined and managed by software running on invisible hardware.
Hybrid Cloud covers the technologies and operational processes, both technical and business, for deploying, consuming, and utilizing this paradigm.
Major areas of focus include barriers to adoption; progress on the part of vendors in removing those barriers; where the lines of responsibility are drawn between the cloud vendor and the customer for IaaS, PaaS, SaaS, and hybrid clouds; and the management tools that are essential to deploying and managing the cloud, to securing it, and to ensuring application performance.
In a previous article, I wrote that customers don’t care whether a hyperconverged solution uses a VSA or runs the storage cluster in-kernel. I stand by that assertion. One of the comments pointed out that I had missed an area of discussion: that of the resource requirements of the VSA itself. I still don’t think that customers care, but for completeness, I’ll examine them. The point here is that the VSA that most HCI vendors use to provide shared storage is usually a fairly beefy VM. The resources allocated to the VSA are not available to run workload VMs. This logic says that the VSA-based HCI can run fewer VMs than an in-kernel-based HCI. The problem with this argument is that most of the VSA resources are doing storage cluster work. Moving the same storage cluster into the kernel requires almost the same resources. The big difference with in-kernel resource usage is that there isn’t something you can easily point to as taking up these resources. VSA resource usage is all assigned to the VSA; in-kernel resource usage can’t be accounted to a single object. There is no smoking gun of resource usage.
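The accounting argument above can be made concrete with a toy model. All numbers here are made up for illustration: the point is that the same storage-cluster work consumes roughly the same resources either way, and only the visibility of that usage changes:

```python
# Hypothetical host and storage-cluster figures, chosen only to illustrate
# the accounting difference between VSA and in-kernel HCI designs.
HOST_VCPUS, HOST_RAM_GB = 32, 256
STORAGE_WORK = {"vcpus": 8, "ram_gb": 32}  # assumed cost of storage-cluster work

def usable_for_workloads(in_kernel: bool):
    """Resources left for workload VMs; both models pay the same bill."""
    vcpus = HOST_VCPUS - STORAGE_WORK["vcpus"]
    ram = HOST_RAM_GB - STORAGE_WORK["ram_gb"]
    visible_to = "kernel (unattributed)" if in_kernel else "the VSA VM"
    return vcpus, ram, visible_to

print(usable_for_workloads(in_kernel=False))  # VSA: usage pinned to one object
print(usable_for_workloads(in_kernel=True))   # in-kernel: same capacity, no smoking gun
```

Both calls return the same workload capacity; the only difference is the label on the overhead, which is the "smoking gun" point in the argument above.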
IT operations analytics (ITOA) is the new language that incorporates analytics as a part of IT operations. This is a requirement for today’s environments, as even small labs generate terabytes of data a day: logs from applications, network sensors, security devices and products, automation tools, and more. The list of possible data streams is endless, and it is up to the IT operations folks to make sense of it. This is where analytics steps in. But analytics without knowledge often leads to chasing rabbits down holes, as it can produce a large number of false positives.
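A minimal sketch shows why analytics without knowledge misfires. The log lines and threshold below are invented for illustration; a naive per-source error count flags a backup job whose nightly errors are expected:

```python
from collections import Counter

# Hypothetical log stream; "backup" errors are known-benign in this environment.
log_lines = [
    "app01 ERROR timeout",
    "app01 INFO ok",
    "backup ERROR expected nightly failure",
    "app02 ERROR timeout",
    "backup ERROR expected nightly failure",
]

# Naive analytics: count ERROR lines per source, alert above a threshold.
errors = Counter(line.split()[0] for line in log_lines if "ERROR" in line)
alerts = [src for src, n in errors.items() if n >= 2]
print(alerts)  # → ['backup'] — a false positive without the operational
               # knowledge that nightly backup errors are expected
```

Encoding that operational knowledge (a suppression list, a baseline per source) is exactly the "knowledge" the paragraph above says analytics cannot do without.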
In part one of Cost to Build a New Virtualized Data Center, we discussed the basic software costs for a virtualized data center based on VMware vSphere 6.0, Citrix XenServer 6.5, Microsoft Hyper-V 2012 R2 and 2016, and Red Hat. If you missed that, please click here to review before continuing.
This post will take that original premise and expand it to include storage with a view to moving the entire environment toward a software-defined data center.
Building a private cloud was a high priority for a number of organizations in 2014. This priority carried over into 2015 because it is hard to execute. For many organizations, it has carried over again into 2016. Of course, the definition of a private cloud has changed in that time, too. Some organizations are happy simply to have consistent VMs deployed in response to a helpdesk ticket. Others aspire to run an AWS-like cloud in their own data center. One significant trend is the use of public cloud services to manage on-premises private clouds. The other trend is OpenStack in the enterprise, rather than only in academia and hyperscale, where it started.
Over the last couple of weeks, I have been thinking about the costs of building a new virtualization-based data center. “What?” I hear you say. “Everywhere is virtualized—there is no such thing as a greenfield site anymore!” I would have said that myself, but in the last month I have come across three, one of which is a company worth over a billion pounds.