With the myriad cases of cyber-theft and security breaches that headline the news every day, it’s no wonder that system improvements are taking a back seat to security within most IT organizations. While many vendors highlight new products or features as better, cheaper, and/or faster, those pitches are having limited success compared to the ones that address security.
Welcome to the second part of my conversation on security in our modern times. In my last article, I concluded with a mention of the US government’s court order compelling Apple to develop a solution bypassing the security on the San Bernardino terrorists’ phone.
On March 21, 2016, we lost Andy Grove, a founding father of our industry. Andy was a first-generation Hungarian immigrant who became employee number one at Intel. After earning his PhD at Berkeley, he worked with Robert Noyce and Gordon Moore at Fairchild Semiconductor until Moore and Noyce co-founded Intel; Grove joined them there on the day of Intel’s incorporation.
Recently, we upgraded our cloud environment. This raises the question, “What is wrong with the environment after an upgrade?” As tools improve, we get new warnings, messages, and analytics. This often leads to a decision to ensure that after the upgrade, all monitoring, alerts, and other diagnostics show green across the board. Is this required, desirable, or even warranted? Wouldn’t it make more sense to understand what changed between releases first, before accepting everything wholesale?
The use of the cloud is governed not so much by technology as by cost: the cost of on-premises management, support, expertise, and environment versus the cost of cloud services and outsourced expertise, management, and so on. The short-term cost differential must be large enough that the move still pays off over the long term. There are plenty of cloud cost calculators out there. Given that Apple, Dropbox, and others have switched clouds or moved back to their own data centers, what does this tell us about the future of cloud?
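To make the short-term versus long-term cost trade-off concrete, here is a minimal sketch of the kind of arithmetic a cloud calculator performs. All figures and function names are hypothetical placeholders, not vendor pricing: on-premises spend is modeled as up-front capital plus a monthly run rate, cloud as a pure monthly run rate, and we look for the month where cloud's cumulative cost catches up.

```python
# Hypothetical cost model: compare cumulative on-premises vs. cloud spend
# over a planning horizon to find the break-even month. Figures are
# illustrative placeholders, not real vendor pricing.

def cumulative_cost(upfront, monthly, months):
    """Total spend after `months`: one-time capital plus recurring cost."""
    return upfront + monthly * months

def break_even_month(onprem_upfront, onprem_monthly, cloud_monthly, horizon=60):
    """First month at which cloud's cumulative cost meets or exceeds
    on-prem's. Returns None if cloud stays cheaper for the whole horizon."""
    for m in range(1, horizon + 1):
        onprem = cumulative_cost(onprem_upfront, onprem_monthly, m)
        cloud = cumulative_cost(0, cloud_monthly, m)
        if cloud >= onprem:
            return m
    return None

# Example: $120k of hardware plus $3k/month of staff and power on-prem,
# versus $8k/month of cloud services and outsourced expertise.
print(break_even_month(120_000, 3_000, 8_000))  # prints 24
```

In this toy model, cloud is cheaper for the first two years and more expensive thereafter, which is exactly why the short-term differential has to be weighed against the expected lifetime of the workload.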
In a previous article, I wrote that customers don’t care whether a hyperconverged solution uses a VSA or runs the storage cluster in-kernel. I stand by that assertion. One of the comments pointed out that I had missed an area of discussion: the resource requirements of the VSA itself. I still don’t think that customers care, but for completeness, I’ll examine them here.

The point is that the VSA most HCI vendors use to provide shared storage is usually a fairly beefy VM. The resources allocated to the VSA are not available to run workload VMs, so by this logic, a VSA-based HCI can run fewer VMs than an in-kernel HCI. The problem with this argument is that most of the VSA’s resources are doing storage cluster work; moving the same storage cluster into the kernel requires almost the same resources.

The big difference with in-kernel resource usage is that there isn’t something you can easily point to as consuming those resources. VSA resource usage is all assigned to the VSA; in-kernel resource usage can’t be accounted to a single object. There is no smoking gun of resource usage.
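The per-node arithmetic behind this argument can be sketched in a few lines. The numbers below are hypothetical, not any vendor’s figures: whether the storage cluster’s overhead shows up as a visible VSA appliance or as invisible kernel memory, it comes out of the same pool before workload VMs are placed.

```python
# Illustrative sketch of the VSA vs. in-kernel capacity math.
# All sizes are hypothetical, not real vendor sizing figures.

def usable_vm_slots(node_ram_gb, storage_overhead_gb, ram_per_vm_gb):
    """Workload VMs a node can host after the storage cluster takes
    its share. The overhead is subtracted from the same pool whether
    it is a visible VSA appliance or unaccounted kernel memory."""
    return (node_ram_gb - storage_overhead_gb) // ram_per_vm_gb

# A 256 GB node hosting 8 GB workload VMs, where the storage layer
# needs roughly 32 GB either way:
vsa_slots = usable_vm_slots(256, 32, 8)        # overhead visible as one big VM
in_kernel_slots = usable_vm_slots(256, 30, 8)  # similar cost, just not attributed
print(vsa_slots, in_kernel_slots)  # prints 28 28
```

With comparable overhead on both sides, the usable VM count comes out the same; the difference is only that the VSA’s consumption is attributed to a single, visible object.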