DockerCon 2017 was about modernizing traditional applications, or MTA: lifting and shifting traditional Windows-based applications into Docker containers, an approach reminiscent of the physical-to-virtual migrations of 2009. For Docker to grow into brownfield data centers, MTA is a must for many organizations looking to Docker to manage everything, but not everything suits the same approach. Could Docker be doing more here, and if so, what could be improved? Containers are about agility, with workloads treated like cattle. Can traditional applications be treated this way? We shall see.
Simon Bramfitt’s recent article on this site, Windows 10: The Last Operating System You’ll Ever Need, offers a view into the next generation of the Windows desktop platform. Of particular interest is the revelation that Windows 10 will most likely see Microsoft move to a continuous release schedule, ending the cycle of 18-month releases that began to stagnate with the success of Windows XP SP2.
In true Microsoft fashion, details on its new servicing regimen have not been terribly forthcoming, but it seems that a three-tiered system will be available. The first tier, “consumer” tempo, offers updates as soon as they become publicly available, putting Microsoft on the same track as Apple and Google. “Long-term” tempo covers what Microsoft refers to as “lockdown for mission-critical environments.” And in between the two, with a four-month deployment window allowed, sits the tempo termed “near-consumer.” This seems similar to the release of Windows 8.1 Update 1, which shipped in April 2014 and gave users a limited time frame in which to install it (later extended, rather reluctantly). Some commentators believed that this was done deliberately to gauge what sort of tolerance enterprises would have for a forced deployment window.
“Consumer” tempo looks to be in step with the way Google develops. Its main software product, Chrome, is at version 37 (at the time of this writing), although not many users are aware of that fact, given how regularly it is updated. Essentially, it seems possible that Microsoft will have large “baseline” cumulative releases that include feature updates as well as security updates, but presumably without making a big fanfare about them. Whether these releases will come in patch form (such as service packs) or as actual full new versions of Windows (Windows 11 and up) remains to be seen. The pace of technology innovation has sped up at the software level, and Microsoft’s existing product lifecycle has been causing the company to become somewhat irrelevant.
Apple has a similar release cycle: OS X (10.0) debuted in 2001, and the latest release, Yosemite, is still only branded OS X 10.10 under the hood. So Microsoft’s bringing the “consumer” tempo into line with two of its main competitors makes a certain amount of sense, especially given that Apple has signed an alliance with IBM, which gives Apple a real chance to start biting into Microsoft’s enterprise share. IBM has long seethed over Microsoft’s betrayal of the OS/2-Windows partnership. Apple has traditionally not pushed into the business arena, but IBM clearly and demonstrably has the ability to take it there. In moving closer to the rapid development models of Apple and Google, at least for some of its enterprise customers, Microsoft may be directly responding to a perceived threat to its most prized assets.
But Microsoft’s plans also leave some details untouched, such as just what enterprises would be letting themselves in for via the “long-term” or “near-consumer” tempos. According to some sources, these will merely allow opt-out of feature updates, but not security updates, a split that Microsoft (apparently) identified back in the NT4 days. To quote:
“Businesses will be able to opt-in to the fast-moving consumer pace, or lock-down mission critical environments to receive only security and critical updates to their systems.”
That seems to indicate clearly that the only lockdown will be for features. So will security updates be mandatory no matter what?
“And for all scenarios, security and critical updates will be delivered on a monthly basis.”
It does indeed seem that the security updates will be enforced. If this is correct, I’d like to see some more information from Microsoft on exactly how this is intended to work, as I can’t imagine hospitals (to pluck one example from possibly thousands) being particularly pleased about this. Indeed, this would almost seem to assume that security updates have never caused a problem…yeah, right!
If this approach is in fact adopted, then it could be a disaster waiting to happen. And it might well convince many that the Windows 7 systems they’ve deployed in their enterprises could be good for another five or six years—leading to a repeat of the Windows XP “end of support” debacle that still rumbles quietly on today. Microsoft may fall into the trap of trying to impose its will on its customers—and those customers that are paying Microsoft a lot of money may elect to vote with their feet and walk away. After all, Windows 8 was an attempt to force a new paradigm onto users, and how did that turn out?
On the other hand, there may be good points to this new servicing model as well. Making updates mandatory, either straight away or inside a deployment window, could well force a lot of vendors to step up—something that very rarely happens today. Imagine how much pain would be avoided if Java updates were released and enforced in the same way (or inflicted, depending on how good the vendors were)!
What is clear is that full clarification about how Windows 10’s model is going to work is needed in order for businesses to make informed decisions around it. SCCM, Intune, WSUS, and MDT are all examples of technologies that enterprises use to provide servicing and deployment for their estates, and any change in how this is delivered will impact them greatly. With regard to the consumer, effectively letting Microsoft manage patches will undoubtedly be a good thing—think for a second about how many problems are caused by unpatched machines in homes across the globe—but for those with systems that need to be running flawlessly in order to keep the wheels of industry turning, there are a lot of questions that need to be answered.
The DaaS (Desktop as a Service) market is maturing, and more great products are being released every day to facilitate DaaS functionality. But just like the foundation of a house affects what you can build on it, Microsoft’s unwillingness to offer VDI licensing for the desktop operating system still presents a major challenge to the stability and growth of this market.
In the world of virtualization storage, it seems all we talk about lately is flash and SSD. There is a good reason for that. Traditionally, storage capacity and storage performance were directly linked. Sure, you could choose different disk capacities, but in general you needed to add capacity in order to add performance, because each disk, each “spindle,” could only support a certain number of I/Os per second, or IOPS. This was governed by the mechanical nature of the drives themselves, which had to wait for the seek arm to move to a different place on disk, wait for the seek arm to stop vibrating from the move, wait for the desired sector to rotate underneath the read head, and so on. There’s only so much of that type of activity that can be done in a second, and in order to do more of it you needed to add more drives. Of course, that has drawbacks: increased power draw, more parts and thus more chances of failure, and increased licensing costs, since many storage vendors charged based on capacity.
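The per-spindle arithmetic above can be sketched in a few lines of Python. The drive figures used here (7,200 RPM, 8.5 ms average seek, a 10,000 IOPS target) are illustrative assumptions, not vendor specifications:

```python
import math

def spindle_iops(avg_seek_ms: float, rpm: int) -> float:
    """Approximate max random IOPS for one spinning disk.

    Each random I/O pays the average seek time plus, on average,
    half a rotation of latency before the target sector arrives
    under the head. IOPS is how many such I/Os fit in one second.
    """
    avg_rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    service_time_s = (avg_seek_ms + avg_rotational_latency_ms) / 1000
    return 1 / service_time_s

def drives_needed(target_iops: float, per_drive_iops: float) -> int:
    """Spindles required to reach a target IOPS purely by adding drives."""
    return math.ceil(target_iops / per_drive_iops)

# Illustrative figures for a 7,200 RPM drive with an 8.5 ms average seek.
per_drive = spindle_iops(avg_seek_ms=8.5, rpm=7200)
print(f"~{per_drive:.0f} IOPS per spindle")   # on the order of 80
print(drives_needed(10_000, per_drive), "drives for 10,000 IOPS")
```

The point of the sketch is the coupling the paragraph describes: the only lever for more IOPS is more spindles, with all the attendant power, failure, and licensing costs.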
Flash memory takes most of what we know about the physics of storage and throws it away. Because there are no moving parts, the act of seeking on a solid state disk is a completely logical one. There are no heads, no sectors, no rotation speeds. It’s all the speed of light and however fast the controller can go. As such, flash memory can do enormous numbers of IOPS, and if implemented well, it decouples storage performance from storage capacity. You save power, you save data center space, you save money in licensing fees, and your workloads run faster.
Desktop security start-up Bromium announced the general availability of vSentry at the Gartner Security and Risk Management Summit in London today. vSentry is the company’s first product based on the Bromium Microvisor, and is designed to protect the enterprise from advanced malware that attacks through poisoned attachments, documents, and websites.
One year after announcing that he and XenSource co-founder Ian Pratt were leaving Citrix to launch Bromium with former Phoenix Technologies CTO Gaurav Banga, Simon Crosby was back at the GigaOM Structure conference in San Francisco today to unveil Bromium’s micro-virtualization technology, together with its plans to transform enterprise endpoint security. Despite the occasional blog post calling into question the security limitations of current desktop virtualization solutions, and despite today’s announcement of the Bromium Microvisor, Bromium has very little to do with desktop virtualization. Desktop virtualization, whether it be VDI, IDV, or anything in between, is a management technology: a means of getting an appropriately specified endpoint configuration in front of the user. Bromium has set itself a bigger challenge, one that is applicable to every endpoint and every operating system – the extension of the precepts of trustworthy computing to mainstream operating systems.