Tag Archives: Intel

Hardware Is Dead, Long Live Hardware

There is a growing movement to abstract hardware completely away, as we have discussed previously. Docker with SocketPlane and other application virtualization technologies are abstracting hardware away from the developer. Or are they? Hardware is not an issue, that is, until it becomes one. Virtualization may require specific versions of hardware, but those are commonplace components. Advanced security requires other bits of hardware, and those are uncommon; many servers do not ship with some of this necessary hardware. Older hardware may not deliver the chipset features needed to do security well. That doesn’t mean security can’t be done, but the overhead is greater. Hardware is dead to some, but not to others, and this dichotomy drives decisions when buying systems for clouds or other virtual environments of any size. The hardware does not matter, until it does!
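To make this concrete, here is a minimal sketch, assuming a Linux host, of how one might check whether a server actually exposes the kinds of features in question: VT-x/AMD-V for virtualization, and AES-NI and a TPM on the security side.

```python
# Minimal probe for virtualization and security hardware (assumes Linux).
import os

def cpu_flags():
    """Return the set of CPU feature flags the kernel reports in /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
# vmx (Intel VT-x) or svm (AMD-V): the commonplace virtualization extensions
print("Hardware virtualization:", bool(flags & {"vmx", "svm"}))
# aes (AES-NI): offloads encryption, reducing the overhead of doing security well
print("AES-NI:", "aes" in flags)
# A TPM, one of the less common security components, appears under /sys/class/tpm
print("TPM present:", os.path.isdir("/sys/class/tpm") and bool(os.listdir("/sys/class/tpm")))
```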

Continue reading Hardware Is Dead, Long Live Hardware

4 Reasons The Calxeda Shutdown Isn’t Surprising

The board of Calxeda, the company trying to bring low-power ARM CPUs to the server market, has voted to cease operations in the wake of a failed round of financing. This is completely unsurprising to me, for a few different reasons.

Virtualization is more suited to the needs of IT

Calxeda’s view of the world competed directly with server virtualization in many ways. Take HP’s Project Moonshot as an example: a chassis with hundreds of small ARM-based servers inside it, each provisioned individually or in groups, but with small amounts of memory and disk. The problem is that this model is complicated, fragile, inflexible, and not standards-based, and at the end of the day organizations want none of those things. Calxeda’s solution may save an enterprise money by consuming less power, but it gives that money back as increased operational expense elsewhere. In contrast, virtualization of larger, more powerful CPUs is more flexible on nearly every level, reduces the amount of hardware an enterprise must manage, and can help contain both capital and operational expenses while solving actual problems.

There are diminishing performance returns in extreme multi-core applications

Metcalfe’s Law was originally stated to convey the increasing value of a network as more nodes join, but it can also be expressed another way: the communication overhead in a network grows as the square of the number of nodes. The same is true of multi-threaded applications, where the interprocess communication, locking, and other administrative work needed to coordinate hundreds of threads ends up consuming more CPU time than the actual computational work. Calxeda’s vision of hundreds of CPU cores in a single system was ambitious, and it needed computer science, and the whole industry, to catch up to it. Enterprises don’t want research projects, so they chose fewer, faster cores and got their work done.
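A quick back-of-the-envelope sketch makes the scaling problem concrete: if every one of n threads must coordinate with every other, the number of pairwise channels is n(n-1)/2, which grows quadratically.

```python
# Pairwise coordination channels among n cooperating threads: n choose 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 16, 64, 256):
    print(f"{n:4d} threads -> {channels(n):6d} potential coordination channels")
# 4 -> 6, 16 -> 120, 64 -> 2016, 256 -> 32640: a 64x increase in threads
# yields a ~5,400x increase in channels, and servicing them is pure overhead.
```

Real applications serialize on locks rather than literally opening every channel, but the quadratic growth is the heart of the argument.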

A limited enterprise market for non-x64 architectures

ARM isn’t x86/x64. There are increasing numbers of ARM-based Linux distributions, thanks largely to the immense popularity of hobbyist ARM boards like the Raspberry Pi and the BeagleBoard, but none are commercially supported, and commercial support is a prerequisite for enterprises. On the Windows side there is Windows RT, which runs on 32-bit ARM CPUs, but it is generally regarded as lacking features and underpowered compared to Atom-powered x86 devices that run full installations of Windows 8. Windows RT isn’t a server OS, either, and there is very little third-party software for it, given the complexity of developing for the platform and the lack of return on a developer’s time and money. Why put up with all the complexity and limitations of a different architecture when you can get a low-power x86-compatible Atom CPU and a real version of Windows?

A limited market for 32-bit CPUs

On the server front, which is what Calxeda was targeting, enterprises have been consuming 64-bit architectures since the release of AMD’s Opteron CPUs in 2003. Ten years later, the idea of using 32-bit CPUs seems incredibly backward. Even embedded systems want more than 4 GB of RAM, the maximum a 32-bit CPU can address. On the mobile front, where ARM has had the most impact, Dan Lyons has a recent article about how Apple’s 64-bit A7 chip has mobile CPU vendors in a panic: now, in order to compete with Apple, a handset maker wants a 64-bit chipset. Calxeda had a 64-bit CPU in the works, but it was too far out to be useful in either market.
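The 4 GB figure is simple arithmetic, setting aside PAE-style workarounds: a flat 32-bit pointer can distinguish only 2^32 bytes.

```python
# Addressable memory implied by pointer width: 2**bits bytes.
for bits in (32, 64):
    gib = 2 ** bits / 2 ** 30
    print(f"{bits}-bit: {gib:,.0f} GiB addressable")
# 32-bit: 4 GiB; 64-bit: 17,179,869,184 GiB (16 EiB), far beyond any physical DIMM
```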

I’ve never really seen the point of the “more, smaller machines” movement, and I interpret the end of Calxeda as evidence supporting my position. I’m sure there are specialized cases out there where these architectures make sense, but the extreme limitations of the platform are just too much in the x64-dominated world of IT. In the end, Calxeda focused too tightly on specific problems, and in doing so ignored both the larger problems of the enterprise and the changes in the computing landscape that ultimately made it irrelevant.

A Look at the HP Moonshot 1500

Last week HP announced their “second generation” HP Moonshot 1500 enclosure and Intel Atom S1260-based ProLiant Moonshot servers, a high-density computing solution targeted at hyperscale computing workloads. They’re billing it as the first “software defined server” and claiming that it can save 89 percent of the energy, 80 percent of the space, and 77 percent of the cost of their DL380 servers.
Continue reading A Look at the HP Moonshot 1500

Client Hypervisors: Intelligent Desktop Virtualization too clever for its own good?

In 2011, we asked if Client Hypervisors would drive the Next Generation Desktop. Yet other desktop virtualization industry experts, such as Ron Oglesby, decided the technology was a dead man walking, writing off Type 1 Client Hypervisors.

Fight? Fight? Fight?

While VMware moved away from client hypervisors, even they had to agree that an end-user compute device strategy must encompass non-VDI. Their Mirage technology can be considered desktop virtualization, but it is not a client hypervisor. Client hypervisor vendors such as Citrix (which subsumed Virtual Computer’s NxTop), MokaFive, Parallels, and Virtual Bridges have been joined by Zirtu. Organisations like WorldView look to innovate on desktop virtualization through containers rather than full virtualization.

Tablets. Touch-screen-capable laptops. Hybrid devices with detachable screens. The netbook might be dead, or it could just be resting. The presence of tablets has undeniably shaken the netbook market, but businesses still need powerful, capable laptops.

Bring Your Own Pencil aside, there is still a need to manage “stuff”: there are still large and small organisations that need to manage the delivery of IT, including the end device. The question remains: how are devices, and the all-important data and applications on them, managed? Hosted and session-based desktops have their place, but requirements for offline-capable devices will remain. Is Intelligent Desktop Virtualization the same as client hypervisors?

Continue reading Client Hypervisors: Intelligent Desktop Virtualization too clever for its own good?

Bromium vSentry a Next Generation Hypervisor to End Malware Woes?

Desktop security start-up Bromium announced the general availability of vSentry at the Gartner Security and Risk Management Summit in London today. It is the company’s first product based on the Bromium Microvisor, and it is designed to protect against advanced malware that attacks the enterprise through poisoned attachments, documents, and websites.

Continue reading Bromium vSentry a Next Generation Hypervisor to End Malware Woes?

Bromium unveils micro-virtualization trustworthy security vision

One year after announcing that he and XenSource co-founder Ian Pratt were leaving Citrix to launch Bromium with former Phoenix Technologies CTO Gaurav Banga, Simon Crosby was back at the GigaOM Structure conference in San Francisco today to unveil Bromium’s micro-virtualization technology, together with its plans to transform enterprise endpoint security. Despite the occasional blog post calling into question the security limitations of current desktop virtualization solutions, and despite today’s announcement of the Bromium Microvisor, Bromium has very little to do with desktop virtualization. Desktop virtualization, whether it be VDI, IDV, or anything in between, is a management technology: a means of getting an appropriately specified endpoint configuration in front of the user. Bromium has set itself a bigger challenge, one that is applicable to every endpoint and every operating system: the extension of the precepts of trustworthy computing to mainstream operating systems.

Continue reading Bromium unveils micro-virtualization trustworthy security vision