Articles Tagged with Intel

CloudComputing

Cloud for All

Building and operating a private cloud is a complex undertaking. Most cloud platforms are designed to play well with thousands of physical servers. This is great for public cloud providers and extremely large enterprise organizations. However, smaller organizations that need a cloud built from tens of physical servers can find these platforms challenging. I’ve written about the possibility that some of these customers might get what they want without a cloud platform. But what if a cloud platform were easy to deploy? If you could deploy an OpenStack cloud in one day, would that help? This is one target of the Intel Cloud for All program.

Read More

DataCenterVirtualization

From Mainframes to Containers

A few days ago, Stevie Chambers tweeted about the evolution from mainframe to container: “Why is it a surprise that VMs will decline as things miniaturise? Mainframes → Intel → VMs → Containers, etc. Normal, I’d say.” By “Intel” here, I’m going to take Stevie to mean “rackmount servers.” I’m also going to assume that by “decline” he meant “decline in importance, or focus” rather than a decline in the raw number of units sold. It would be easy to argue that fewer rackmount servers have been sold in the last few years than would have been the case without virtualization, due to the consolidation of servers onto fewer, more powerful boxes. It’s also arguable that virtualization has brought us options that would simply be unavailable without it and has led to a greater volume of sales. Either way, Intel’s profits seem to be doing OK.

Read More

DataCenterVirtualization

Goodbye to a Founding Father: Andy Grove, 1936–2016

DataCenterVirtualization

[Image: Andy Grove, 1936–2016]

On March 21, 2016, we lost Andy Grove, a founding father of our industry. Andy was a first-generation Hungarian immigrant who became employee number one at Intel. After earning his PhD at Berkeley, he worked with Robert Noyce and Gordon Moore at Fairchild Semiconductor until Moore and Noyce co-founded Intel; Grove joined them there on the day of Intel’s incorporation.

Read More

DataCenterVirtualization

Hardware Is Dead, Long Live Hardware

There is a growing movement to abstract hardware completely away, as we have discussed previously. Docker with SocketPlane and other application virtualization technologies are abstracting hardware away from the developer. Or are they? The hardware is not an issue, that is, until it becomes one. Virtualization may require specific hardware features, but these are commonplace components. Advanced security requires other bits of hardware, and those are uncommon; many servers do not ship with some of this necessary hardware. Older hardware may not deliver the chipset features needed to do security well. This doesn’t mean it can’t be done, but the overhead is greater. Hardware is dead to some, but not to others. This dichotomy drives decisions when buying systems for clouds or other virtual environments of any size. The hardware does not matter, until it does!
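To make the “necessary hardware” point concrete, here is a minimal, Linux-only Python sketch (my illustration, not anything from the article) that inspects /proc/cpuinfo for a few feature flags that virtualization and security stacks commonly rely on: Intel VT-x (vmx), AES-NI (aes), and the Safer Mode Extensions used by Intel TXT (smx). A server missing these is the kind of “older hardware” the paragraph describes.

```python
# Linux-only sketch: read the feature flags of the first CPU listed in
# /proc/cpuinfo and report whether a few virtualization/security-related
# flags are present. The flag names (vmx, aes, smx) are real kernel flag
# names; the selection here is illustrative, not exhaustive.

WANTED = {
    "vmx": "Intel VT-x virtualization extensions",
    "aes": "AES-NI encryption instructions",
    "smx": "Safer Mode Extensions (Intel TXT)",
}

def cpu_flags(path="/proc/cpuinfo"):
    """Return the feature-flag set reported for the first CPU listed."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    for flag, description in WANTED.items():
        status = "present" if flag in flags else "MISSING"
        print(f"{flag:>4} ({description}): {status}")
```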

Read More

DataCenterVirtualization

4 Reasons the Calxeda Shutdown Isn’t Surprising

[Image: HP Moonshot System]

The board of Calxeda, the company trying to bring low-power ARM CPUs to the server market, has voted to cease operations in the wake of a failed round of financing. This is completely unsurprising to me, for a few different reasons.

Virtualization is more suited to the needs of IT

Calxeda’s view of the world competed directly with server virtualization in many ways. Take HP’s Project Moonshot as an example: a chassis with hundreds of small ARM-based servers inside it, each provisioned individually or in groups, but with small amounts of memory and disk. The problem is that this sort of model is complicated, fragile, inflexible, and not standards-based, and at the end of the day, organizations want none of those things. Calxeda’s solution may save an enterprise money by consuming less power, but it spends that money on increased OpEx elsewhere. In contrast, virtualization of larger, more powerful CPUs is more flexible on nearly every level, reduces the amount of hardware an enterprise must manage, and can help contain both capital and operational expenses while solving actual problems.

There are diminishing performance returns in extreme multi-core applications

Metcalfe’s Law was originally stated to convey the increasing value of a network as more nodes join, but it can also be expressed another way: the communications overhead in a network grows as the square of the number of nodes. The same is true in multi-threaded applications, where the interprocess communication, locking, and other administrative work required to coordinate hundreds of threads ends up consuming more CPU time than the actual computational work; a rough model of this effect is sketched below. Calxeda’s vision of hundreds of CPU cores in a single system was ambitious, and it needed computer science and the whole industry to catch up to it. Enterprises don’t want research projects, so they chose fewer, faster cores and got their work done.
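Here is a small Python sketch of that scaling argument (my own illustrative model, not anything from Calxeda or the article). It charges every pair of cooperating threads a fixed coordination cost; because the number of pairs grows as n(n−1)/2, effective throughput peaks and then collapses as cores are added. The overhead constant is invented purely for the demonstration.

```python
# Illustrative model: n cores each contribute 1 unit of work per unit time,
# but every pair of threads costs a fixed amount of coordination overhead
# (locking, IPC, cache traffic). The 0.002 constant is made up for the demo.

def effective_throughput(n_cores: int, pair_overhead: float = 0.002) -> float:
    """Useful work per unit time after subtracting pairwise coordination.

    The number of thread pairs is n*(n-1)/2, so overhead grows roughly
    with the square of the core count, the Metcalfe-style scaling
    described above.
    """
    pairs = n_cores * (n_cores - 1) / 2
    return max(0.0, n_cores - pair_overhead * pairs)

if __name__ == "__main__":
    for n in (4, 16, 64, 256, 1024):
        print(f"{n:4d} cores -> effective throughput {effective_throughput(n):7.1f}")
```

With these toy numbers, throughput grows sublinearly from 64 to 256 cores and collapses to zero by 1,024: past a point, adding cores buys coordination work rather than computation.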

A limited enterprise market for non-x64 architectures

ARM isn’t x86/x64. While there are increasing numbers of ARM-based Linux distributions, mostly thanks to the immense popularity of hobbyist ARM boards like the Raspberry Pi and the BeagleBoard, none are commercially supported, which is a prerequisite for enterprises. On the Windows side there is Windows RT, which runs on 32-bit ARM CPUs, but it is generally regarded as lacking features and underpowered compared to Atom-powered x86 devices that run full installations of Windows 8. Windows RT isn’t a server OS, either, and there is very little third-party software for it, due to the complexity of developing for the platform and the lack of ROI on a developer’s time and money. Why put up with all the complexity and limitations of a different architecture when you can get a low-power x86-compatible Atom CPU and a real version of Windows?

A limited market for 32-bit CPUs

On the server front, which is what Calxeda was targeting, enterprises have been consuming 64-bit architectures since the release of AMD’s Opteron CPUs in 2003. Ten years later, the idea of using 32-bit CPUs seems incredibly backward. Even embedded systems want more than 4 GB of RAM, which is the most a 32-bit CPU can address. On the mobile front, where ARM has had the most impact, Dan Lyons has a recent article about how Apple’s 64-bit A7 chip has mobile CPU vendors in a panic. Now, in order to compete with Apple, a handset maker wants a 64-bit chipset. Calxeda had a 64-bit CPU in the works, but it was too far out to be useful in either market.
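The 4 GB ceiling mentioned above is simple pointer arithmetic; here is a minimal Python sketch of the calculation (mine, for illustration, with nothing Calxeda-specific about it):

```python
# A flat 32-bit pointer can name 2**32 distinct byte addresses; a 64-bit
# pointer can name 2**64. Dividing by 2**30 converts bytes to GiB.

def max_addressable_gib(pointer_bits: int) -> float:
    """Maximum memory, in GiB, addressable by a pointer of this width."""
    return 2 ** pointer_bits / 2 ** 30

print(f"32-bit: {max_addressable_gib(32):,.0f} GiB")   # 4 GiB
print(f"64-bit: {max_addressable_gib(64):,.0f} GiB")   # 17,179,869,184 GiB (16 EiB)
```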

I’ve never really seen the point behind the “more smaller machines” movement, and I’m interpreting the end of Calxeda as evidence supporting my position. I’m sure there are specialized cases out there where these architectures make sense, but the extreme limitations of the platform are just too much in the x64-dominated world of IT. In the end, Calxeda focused too tightly on specific problems, and in doing so ignored both the larger problems of the enterprise and the changes in the computing landscape that ultimately made the company irrelevant.

DataCenterVirtualization

A Look at the HP Moonshot 1500

Last week, HP announced their “second generation” HP Moonshot 1500 enclosure and Intel Atom S1260-based ProLiant Moonshot systems, a high-density computing solution targeted at hyperscale computing workloads. They’re billing it as the first “software defined server” and claiming that it can save 89 percent of the energy, 80 percent of the space, and 77 percent of the cost of their DL380 servers.

Read More