A few days ago, Stevie Chambers tweeted about the evolution from mainframe to container: “Why is it a surprise that VMs will decline as things miniaturise? Mainframes → Intel → VMs → Containers, etc. Normal, I’d say.” By “Intel” here, I’m going to take Stevie to mean “rackmount servers.” I’m also going to assume that by “decline” he meant “decline in importance, or focus” rather than a decline in the raw number of units sold. It would be easy to argue that fewer rackmount servers have been sold in the last few years than would have been the case without virtualization, due to the consolidation of servers onto fewer, more powerful boxes. It is also arguable that virtualization has brought us options that simply would not exist without it, and that those options have driven a greater volume of sales. Either way, Intel’s profits seem to be doing OK.
How much private cloud do you really need? A private cloud is all about the IT department getting out of the way of its internal customers, enabling business units and individual developers to provision their own VMs and get on with doing their jobs. But building and operating a private cloud is a complex, and therefore expensive, undertaking, so the payoff needs to be large before there is a real business benefit. Some businesses don’t really need a private cloud platform; often, their business processes would prevent genuine self-service on it anyway. For these organizations, there may be simpler ways to achieve the desired business outcomes.
The WLAN, or wireless LAN, sector is pretty hot at the moment, as user endpoints break free from their previously wired existence. A wireless LAN links devices together over a spread-spectrum or OFDM (orthogonal frequency-division multiplexing) network within a limited area: your home, school, or office building, for example. From their humble beginnings, when they were not very stable, WLANs have become a staple of our always-on lifestyle. We now have connected cities in which you can walk from one end to the other and stay connected to Wi-Fi the entire way.
Recently, a number of marketing campaigns have seemed to be inventing complexity in order to give products the appearance of some sort of competitive advantage. The invented complexity plays up real-world features that many folks neither use nor care about, in order to make products look like something different. We have spoken about in-kernel vs. VSA in the past, but now we are seeing invented complexity within the mainstream storage world.
Every day, IT professionals live and breathe applications, yet our operational tools focus on a single container, virtual machine, database, etc. How do these items map to the application in use? Even the monolithic-looking applications of yesterday were actually made up of services, and those services will be reborn as microservices within the applications of tomorrow. How do we make this transition? Is it possible with a container-as-a-service model? Or should we scrap the past and start from scratch?
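To make the mapping problem concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the container inventory, the convention of an “app” label tying each container to its parent application, and the CPU metric field are all invented for illustration. The point is simply that an application-centric operations view has to roll per-container data up into per-application groups, rather than reporting on each container in isolation.

```python
from collections import defaultdict

# Hypothetical inventory: each record describes one running container.
# The "app" label is our assumed convention for tying a container back
# to the application (or group of microservices) it belongs to.
containers = [
    {"name": "web-1",    "labels": {"app": "storefront", "tier": "frontend"}, "cpu_pct": 42},
    {"name": "web-2",    "labels": {"app": "storefront", "tier": "frontend"}, "cpu_pct": 37},
    {"name": "orders-1", "labels": {"app": "storefront", "tier": "orders"},   "cpu_pct": 55},
    {"name": "etl-1",    "labels": {"app": "reporting",  "tier": "batch"},    "cpu_pct": 12},
]

def group_by_application(inventory):
    """Group individual containers by the application they belong to."""
    apps = defaultdict(list)
    for c in inventory:
        app = c["labels"].get("app", "unlabelled")
        apps[app].append(c)
    return apps

def application_summary(inventory):
    """Produce the application-level view an operations tool might show."""
    summary = {}
    for app, members in group_by_application(inventory).items():
        summary[app] = {
            "containers": [c["name"] for c in members],
            "avg_cpu_pct": sum(c["cpu_pct"] for c in members) / len(members),
        }
    return summary

if __name__ == "__main__":
    for app, info in application_summary(containers).items():
        print(f"{app}: {info['containers']} avg CPU {info['avg_cpu_pct']:.1f}%")
```

Nothing here depends on containers specifically; the same roll-up works whether the members are VMs, databases, or functions, which is exactly why the application, not the individual resource, is the more useful unit for operational tooling.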
Microsoft’s turnaround over the two years since Satya Nadella became CEO has been nothing short of phenomenal. During the Ballmer years, Microsoft had become increasingly sidelined and irrelevant, focused on aggressive and negative marketing techniques. Anybody remember the painful Microsoft Mythbusters video featuring then–Microsoft executive David Greschler and Hyper-V product manager Edwin Yuen? Not that you can find it anymore; every reference I have located now just links to the Microsoft Store (it seems even Microsoft is too embarrassed to keep it around).