A Look at the HP Moonshot 1500

Last week HP announced their “second generation” HP Moonshot 1500 enclosure and Intel Atom S1260-based ProLiant Moonshot systems, a high-density computing solution targeted at hyperscale computing workloads. They’re billing it as the first “software defined server” and claiming that it saves 89 percent of the energy, 80 percent of the space, and 77 percent of the cost compared with their DL380 servers.

The ProLiant Moonshot servers aren’t like any other ProLiant server. They’re cartridges, essentially mini-blades that fit into the larger chassis. Each contains exactly one Intel Atom S1260 dual-core 2.0 GHz CPU with Hyper-Threading and VT-x, 8 GB of DDR3-1333 RAM, dual Broadcom 1 Gbps NICs, and a single SATA drive, either traditional spinning disk or SSD. Up to 45 of these cartridges fit into the chassis.

The management controller, iLO, divides the cartridges into three completely separate management zones, each governed by its own iLO controller interface. Two zones hold 18 cartridges and one holds 9, which seems like an unwelcome and arbitrary constraint for a cloud provider to have to design a management and provisioning solution around. You reach the individual servers through the iLO management module, via serial connection, serial-over-LAN IPMI, or SSH; from there you can do some basic configuration and setup and instruct the servers to boot. You’ll definitely need PXE-based booting for an architecture like this, as well as a configuration management tool like those from Puppet Labs to aid with deployment, though the customers this is marketed to will already have those kinds of tools in place. The only operating systems supported at this time are Linux distributions, specifically the latest releases of Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server, though HP has said future revisions will likely support Microsoft Windows Server, Hyper-V, and VMware ESXi.
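To make the PXE requirement concrete, here is a sketch of what an unattended-install boot entry for these nodes could look like. This is purely illustrative, not from HP’s documentation: the kernel and initrd paths, the preseed URL, and the serial-console settings are all hypothetical, and assume a standard PXELINUX setup serving an Ubuntu installer over the nodes’ serial consoles.

```
# /var/lib/tftpboot/pxelinux.cfg/default -- hypothetical example
DEFAULT ubuntu-auto
SERIAL 0 9600

LABEL ubuntu-auto
  KERNEL ubuntu-installer/amd64/linux
  APPEND initrd=ubuntu-installer/amd64/initrd.gz \
         auto=true priority=critical \
         url=http://deploy.example.com/preseed.cfg \
         console=ttyS0,9600
```

Once the base OS is on disk, a tool like Puppet would take over the rest of the node’s configuration.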

The chassis has three different fabrics: Ethernet, storage, and cluster. The Ethernet fabric connects to internal network switches, which HP considers equivalent to top-of-rack switches; these are fully redundant and configurable. With this release there doesn’t appear to be much information about the storage and cluster fabrics, though one of the HP marketing videos refers to a future with different types of nodes:

  • “direct hosting”, which appears to be what is shipping now and includes both compute and storage on the same blade,
  • “storage”, which would likely have more than just a single drive available, and
  • “multi-node”, which appears to not have any storage, just more than one CPU.

Technical documentation also indicates that there are four storage lanes/paths on the storage fabric, two of which are dedicated to shared storage on the chassis. While the chassis doesn’t currently have any shared storage, that would be a natural integration point to make this more of a full-featured converged infrastructure unit. Along those lines, the tops of the cartridges have a link indicator to show that a particular node is connected to other nodes. You can see this in the photos, and it’ll be a welcome feature when it ships, because the Moonshot systems can be dynamically reconfigured based on customer requirements. This is likely what they mean when they speak of “software defined servers.” Linked nodes would also help remove the lone disk drive as a single point of failure, and by making more memory available per operating system image they would open the door to larger applications, or even Linux-based virtualization and cloud platforms like OpenStack.

The power, space, and cost savings calculations are somewhat nebulous. HP claims that one 47U rack of Moonshot servers (the chassis is 4.3U high, so a non-standard rack is the default) replaces eight racks of DL380 dual-socket systems. A single rack column of Moonshot is 10 chassis, 450 nodes, 900 cores, and 3600 GB of RAM. Since Intel Xeon CPUs have more real cores than Intel Atom CPUs, a rack of DL380s has significantly more CPU time available to it, as well as much more RAM, at the cost of power consumption and possibly acquisition price. In fact, Intel’s own Atom S1200 release briefing shows that, at rack scale, their Xeon E3-1265L v2 delivers 10x the performance per node for only a 3x increase in power consumption (as measured with the now-defunct SPECweb benchmark). The Intel benchmark and the HP claims aren’t apples-to-apples, but they do suggest that HP is assuming fairly low utilization of the traditional rack servers. Since most enterprises have heard of, and might even be using, some form of virtualization to drive utilization up, that is probably a very poor assumption. HP has partnered with CPU vendors other than Intel, and over time we’ll hopefully see more notable offerings than the mediocre Intel Atom. Atom is fine if you need Intel instruction-set compatibility, but if you can recompile your own software, the Calxeda EnergyCore ARM offerings are significantly faster and more efficient.
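The rack-scale arithmetic above is easy to sanity-check. The following sketch uses only figures quoted in this article (2 cores and 8 GB per Atom S1260 cartridge, 45 cartridges per chassis, 10 chassis per rack, and Intel’s 10x-performance-at-3x-power briefing figure); the derived performance-per-watt ratio is my own back-of-the-envelope reading of that briefing, not a published number.

```python
# Sanity-check of the Moonshot rack-density figures quoted in the article.
CHASSIS_PER_RACK = 10   # 4.3U chassis in a 47U rack
NODES_PER_CHASSIS = 45  # cartridges per chassis
CORES_PER_NODE = 2      # Atom S1260 is dual-core
RAM_GB_PER_NODE = 8

nodes = CHASSIS_PER_RACK * NODES_PER_CHASSIS  # 450
cores = nodes * CORES_PER_NODE                # 900
ram_gb = nodes * RAM_GB_PER_NODE              # 3600

# Intel's briefing: Xeon E3-1265L v2 at ~10x the per-node performance
# for ~3x the power implies roughly a 3.3x performance-per-watt edge.
xeon_perf_per_watt_advantage = 10 / 3

print(nodes, cores, ram_gb, round(xeon_perf_per_watt_advantage, 1))
# 450 900 3600 3.3
```

The numbers line up with HP’s stated rack totals, which is exactly why the 89/80/77 percent savings claims only hold if the DL380s being replaced sit mostly idle.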

So should you buy one (or one hundred) of these things? As with most questions in IT, the answer depends on the problems you’re trying to solve. With Moonshot, HP has shown that despite all of their organizational problems, there’s still some life in their R&D units, and that those units are thinking about real-world problems. Unfortunately, the real world for enterprises is virtualization and cloud, and I don’t see Moonshot disrupting that trend in any significant way. Right now, Moonshot is also too limited to be of much use to cloud providers, who are likewise focused on virtualization. It’s a framework that shows a lot of promise, though, and with some of the improvements mentioned, with more operating system and CPU choices, and with more emphasis on it being a low-power converged infrastructure play, I can definitely see it reaching more of the big data and hyperscale targets HP is after.