OpenCompute – Facebook drives Data Center and Cloud evolution

Traditionally, internet companies like Google have considered their custom server and data center designs proprietary knowledge that creates significant value. Last week, however, Facebook (which had previously bought commodity servers and rented data center space) opened up a whole new area of Open Source technology by publishing the full specifications of both its new custom server and its new data center as “Open Source”.

Facebook’s designs aim to reduce capital costs by removing unnecessary components from the server and the data center, and by simplifying manufacture and construction. They also seek to reduce running costs by increasing the efficiency of power usage. Although the initiative has been “Greenwashed”, the reductions in power consumption seem primarily motivated by saving cost, not saving the planet.

Overall, Facebook claims that its new data centers are 38 per cent more efficient than its existing leased data centers, while costing about 20 per cent less. Published data (such as it exists) indicates that Facebook is at or ahead of rivals and peers such as Microsoft and Google.
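Data center power efficiency is commonly quantified as Power Usage Effectiveness (PUE): total facility power divided by the power actually delivered to IT equipment, with 1.0 as the theoretical ideal. A quick sketch of the arithmetic, using illustrative numbers that are assumptions for this example rather than Facebook’s published figures:

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# 1.0 would mean every watt goes to servers; the excess is cooling, power
# distribution losses, lighting, etc. Figures below are illustrative only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio for a data center."""
    return total_facility_kw / it_equipment_kw

# A typical leased facility of the era might run near PUE 1.5...
legacy = pue(total_facility_kw=1500.0, it_equipment_kw=1000.0)      # 1.5
# ...while a heavily optimised facility approaches 1.1.
optimised = pue(total_facility_kw=1100.0, it_equipment_kw=1000.0)   # 1.1

# Non-IT overhead per kW of useful compute drops from 0.5 kW to 0.1 kW.
overhead_reduction = ((legacy - 1.0) - (optimised - 1.0)) / (legacy - 1.0)
print(f"legacy PUE={legacy:.2f}, optimised PUE={optimised:.2f}, "
      f"overhead cut by {overhead_reduction:.0%}")
```

With these assumed numbers, a move from PUE 1.5 to 1.1 cuts the non-IT overhead by 80 per cent, which is why PUE improvements translate so directly into running-cost savings.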

Server design

As far as Facebook’s server is concerned, there is no quantum leap. Facebook has published Intel and AMD variants of a dual-CPU motherboard. It hasn’t moved to massively multi-core, ARM, or GPU architectures. The changes are more about what has been left out. There is no video card. There are two CPUs, neither of which has a fan, just an oversized heatsink that has to be accommodated in a 1.5U chassis (Facebook designed its own racks to accommodate the non-standard height). There is no front-panel bezel. There are almost no screws holding the server together. There is a single power supply, with an AC input and a backup DC input. There are, however, up to six SATA drives, to which power is routed via the motherboard from a single connector on the PSU.

Datacenter design

Many of the data center design considerations are about bringing higher-voltage power as close as possible to the server to minimise transmission losses. There is some discussion of this at OpenCompute, and it is probably best to go there for the detail. The upshot, however, is that heat generation is so low that the traditional data center air conditioning has been replaced by an occasional-use water evaporation system. The data center is located in Oregon, and the design may not work quite so well in other locations.

An open source initiative to build a diverse supply chain

Facebook clearly wasn’t getting what it wanted from its existing data center or server suppliers, so it built its own data center and went directly to a volume manufacturer in Taiwan, Quanta, to build the servers. However, the decision to Open Source the technology signals Facebook’s intent not to remain in the business of developing hardware and data centers for the long haul. Having absorbed some of the R&D costs and taken the initial risk of building the servers and the data center, Facebook is now working to drive its requirements back into a more conventional supply chain, including Dell, HP and Rackspace. It has also brought Skype into the initiative to demonstrate significant additional market potential.

OpenCompute designs are released under a new set of Open Source agreements. These are structured specifically to deal with designs and patents rather than source code, but they are similar in effect to the more familiar BSD or Apache licenses for software in that they are permissive, subject only to attribution. In contrast with BSD and Apache, however, the patent grant is dealt with very explicitly. There is a mechanism to avoid repudiation under Chapter 11 or other forms of reconstruction, a fairly broad patent-retaliation clause, and a general anti-circumvention clause. The intent seems to be to allow innovation within the published specification, but to ensure multiple providers of the technology by inhibiting implementers from suing each other over their implementations of the published specification.

Facebook is clearly seeking multiple tier-1 third-party providers for both servers and data centers built to these designs, turning the Open Source specifications into a form of de-facto standard. If it succeeds in driving this server and data center architecture into its supply chain, the resulting volumes will reduce costs further, creating a potential virtuous circle in which lower cost makes the designs more ubiquitous in the general marketplace, reducing cost further still. Of course, if you are a consumer of data center technology, this is only a “virtuous” circle if the OpenCompute de-facto standard design actually matches the requirements of your software architecture; otherwise you may find yourself paying more for “non-standard” data centers or servers.

Implications for IAAS Software Architectures

The design decision in the OpenCompute data center with the most impact on software architecture is the use of local rather than shared storage.

Coincidentally, Dell and Rackspace (two of the manufacturers now working with Facebook on OpenCompute) have increased their commitment to another Open Source initiative called OpenStack, which provides an Open Source IAAS platform for creating, tearing down and communicating with virtual servers, and for providing persistent storage to them. For OpenStack, Dell is supplying standard Intel-based PowerEdge-C servers, not anything based on OpenCompute, but the OpenStack platform would work well within the OpenCompute data center. The key factor is the use of either block storage, which is local to the operating system image, or object storage, which is distributed and remote. The same is true of Eucalyptus (and Amazon). You are free to run NFS, SMB or whatever between the operating system images if you want to, but you cannot rely on the efficiency of a shared storage layer underneath the hypervisor.
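The distinction between the two storage models can be sketched with a toy example: block storage is addressed by block number and visible only to the server that owns the disk, while object storage is addressed by name and replicated across many servers. The classes, replica count and placement scheme below are illustrative assumptions for this sketch, not OpenStack’s actual APIs:

```python
# Toy contrast between local block storage and distributed object storage.
# Illustrative only - not a real OpenStack (or Eucalyptus/Amazon) interface.
import hashlib

class LocalBlockStore:
    """Fixed-size blocks on one server's own disk; no other server sees them."""
    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks: dict[int, bytes] = {}

    def write(self, block_no: int, data: bytes) -> None:
        # A real block device has a fixed block size; truncate to model that.
        self.blocks[block_no] = data[: self.block_size]

    def read(self, block_no: int) -> bytes:
        return self.blocks[block_no]

class DistributedObjectStore:
    """Whole objects addressed by name and replicated onto several nodes."""
    def __init__(self, nodes: list[str], replicas: int = 3):
        self.nodes = nodes
        self.replicas = replicas
        self.data: dict[str, dict[str, bytes]] = {n: {} for n in nodes}

    def _placement(self, key: str) -> list[str]:
        # Hash the object name to pick a starting node, then take the
        # next replica-count nodes around the ring.
        start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)]
                for i in range(self.replicas)]

    def put(self, key: str, value: bytes) -> list[str]:
        placed = self._placement(key)
        for node in placed:       # write every replica
            self.data[node][key] = value
        return placed

    def get(self, key: str) -> bytes:
        # Any replica will do; a production store would retry on failure.
        return self.data[self._placement(key)[0]][key]
```

The point of the sketch is that the object store survives the loss of any single server because durability lives in the software layer, which is exactly why hardware like OpenCompute’s can get away with plain local SATA drives and no shared SAN.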

In contrast, if you look at the reference architectures provided by, say, Red Hat for IAAS cloud, the whole stack is underpinned by shared SAN storage.

One other key point is that there is nothing special in the way of storage – just six SATA drives inside each server.

The initiative and its specifications are to be found at the OpenCompute site.