How much private cloud do you really need? A private cloud is all about the IT department getting out of the way of its internal customers, enabling business units and individual developers to provision their own VMs and get on with doing their jobs. But building and operating a private cloud is a complex, and therefore expensive, undertaking. The payoff needs to be large before the investment delivers a real business benefit. Some businesses don’t really need a private cloud platform; often, their business processes will prevent true self-service on a private cloud anyway. For these organizations, there may be simpler ways to achieve the desired business outcomes.
SDDC & Hybrid Cloud
Cloud computing has evolved beyond a sole focus on how to construct, secure, manage, monitor, and utilize IaaS, PaaS, and SaaS clouds. As it matures, the field is moving from a pure resource management paradigm to one of data and resource management.
SDDC is the next evolution of on-site data center technology. It takes the knowledge gained from the server virtualization revolution and blends it with software-defined storage and networking to create a data center that is defined and managed by software, with the underlying hardware abstracted from view.
Hybrid Cloud covers the technologies and operational processes, both technical and business, for deploying, consuming, and utilizing hybrid clouds.
Major areas of focus include barriers to adoption; progress on the part of vendors in removing those barriers; where the lines of responsibility are drawn between the cloud vendor and the customer for IaaS, PaaS, SaaS, and hybrid clouds; and management tools that are essential to deploying and managing the cloud, ensuring its security and the performance of applications.
The WLAN, or wireless LAN, sector is pretty hot at the moment, as user endpoints break free from their previously wired existence. A wireless LAN links devices together over a spread-spectrum or OFDM (orthogonal frequency-division multiplexing) network within a limited area: your home, school, or office building, for example. From their humble beginnings, when they were not very stable, WLANs have become a staple of our always-on lifestyle. We now have connected cities, in which you can walk from one end to the other and always be connected to a Wi-Fi link.
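As a small aside on how Wi-Fi carves up its limited spectrum, the sketch below maps a 2.4 GHz channel number to its center frequency using the IEEE 802.11 channel plan. The helper name is my own; the frequencies themselves follow the standard (channels 1–13 are spaced 5 MHz apart starting at 2412 MHz, with channel 14 as a special case).

```python
# Illustrative helper (not from the article): map a 2.4 GHz Wi-Fi channel
# number to its center frequency, per the IEEE 802.11 channel plan.

def channel_to_mhz(channel: int) -> int:
    """Center frequency in MHz for 2.4 GHz channels 1-14."""
    if channel == 14:            # channel 14 (Japan-only) is offset from the pattern
        return 2484
    if 1 <= channel <= 13:
        return 2407 + 5 * channel  # channel 1 = 2412 MHz, channel 6 = 2437 MHz, ...
    raise ValueError("channel must be 1-14")

if __name__ == "__main__":
    print(channel_to_mhz(6))     # → 2437
```

This 5 MHz spacing is also why adjacent channels overlap: a 20 MHz-wide OFDM signal on channel 6 spills into channels 4 through 8, which is why the classic non-overlapping choices are 1, 6, and 11.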
Recently, a number of marketing campaigns have seemed to invent complexity in order to give products the appearance of some sort of competitive advantage. The invented complexity centers on real-world items that many folks just do not use, or even care about, in order to make products look like something different. We have spoken about in-kernel vs. VSA in the past, but now we are seeing invented complexity within the mainstream storage world.
Like most in the modern IT industry, I have spent the bulk of my working life installing, configuring, and maintaining Microsoft products, ranging from Active Directory and Exchange through Terminal Services and MSSQL Server. Most of these products have had extra layers of third-party software added on top (Citrix MetaFrame, anyone?) or blended in to make them work better. In many cases, they were not best-in-class products, although this has improved over time. Apache far outstrips IIS, and vSphere is still a good way ahead of Hyper-V, feature-wise. The gaps are closing, though, and Microsoft’s product set is maturing. Microsoft’s products have often been the more expensive option: there are numerous UNIX mail servers that outperform Exchange at raw message transport. However, there has always been one killer feature, one tie that has bound all of these systems together, making the Microsoft option the only option.
On March 21, 2016, we lost Andy Grove, a founding father of our industry. Andy was a first-generation Hungarian immigrant who became employee number one at Intel. After earning his PhD at Berkeley, he worked with Robert Noyce and Gordon Moore at Fairchild Semiconductor until Moore and Noyce co-founded Intel; Grove joined them there on the day of Intel’s incorporation.
Recently, we upgraded our cloud environment, which raises the question: what is wrong with the environment after an upgrade? As tools improve, we get new warnings, messages, and analytics. This often leads to a decision to ensure that, after the upgrade, all monitoring, alerts, and other diagnostics show green across the board. But is this required, desirable, or even warranted? Wouldn’t it make more sense to first understand what changed between releases, before blanket acceptance?
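One way to understand what changed between releases, rather than chasing green across the board, is to compare the alerts firing after the upgrade against a pre-upgrade baseline. The sketch below is a minimal illustration of that idea; the function name, alert names, and flat set-of-strings model are my own assumptions, not any particular monitoring product's API.

```python
# Hypothetical triage sketch: separate alerts that predate the upgrade from
# those the new release introduced, so each group can be reviewed on its own
# terms instead of being blanket-accepted or blanket-silenced.

def triage_alerts(baseline: set[str], current: set[str]) -> dict[str, set[str]]:
    """Split current alerts into pre-existing, new-after-upgrade, and cleared."""
    return {
        "pre_existing": baseline & current,      # firing before and after
        "new_after_upgrade": current - baseline,  # new checks or regressions
        "cleared": baseline - current,            # resolved by the upgrade
    }

if __name__ == "__main__":
    # Illustrative alert names only.
    before = {"datastore_latency", "cpu_ready_high"}
    after = {"datastore_latency", "vm_tools_outdated", "snapshot_age"}
    result = triage_alerts(before, after)
    print(sorted(result["new_after_upgrade"]))  # → ['snapshot_age', 'vm_tools_outdated']
```

The point of the split is that "new after upgrade" alerts often reflect new analytics in the tooling rather than new faults in the environment, and deserve investigation before anyone decides whether green across the board is the right goal.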