One of the big challenges of cloud-scale data center operation is determining what to do with the waste heat. In a typical data center, cooling systems account for roughly forty percent of capital equipment costs, and thirty percent of the energy consumed in a facility goes into cooling. Data center operators are forever looking for new ways to reduce the overhead that cooling imposes. Facebook chose to site its first data center outside the US in Luleå, Sweden, a location chosen as much for its low-cost electricity, derived from 100% renewable sources, as for its subarctic climate, which allows the data center to use outside air cooling all year round. In Belgium, Google has taken a less direct approach. It too uses free outside air cooling for much of the year, but on days when the outside air temperature exceeds Google’s maximum threshold (Google maintains its data centers at temperatures somewhat above 80°F), it sidesteps the issue by transferring computing load to other data centers.
Another approach to data center heat management is to put the excess heat to good use. The problem with this approach is that waste heat generated by data centers is low grade and frequently in the wrong place. You would never be able to boil a kettle for a cup of tea by placing it in the exhaust from a server rack, but you can use it to heat homes and offices. In Paris, TelecityGroup is using waste heat from its new Condorcet data center to heat its on-site Climate Change Arboretum, where scientists will recreate the climatic conditions expected to prevail in France in 2050. In London’s Docklands, waste heat from the $162 million Telehouse West 130,000 square foot colocation facility will be used in a district heat network, which is expected to produce up to nine megawatts of power for the local Docklands community. Amazon is adopting a similar approach in its new downtown Seattle campus, where data center cooling water can be routed to rooftop cooling towers in summer or piped across the road and plumbed into the new buildings’ heating systems in winter. Amazon estimates that the project will yield electrical savings of 80 million kWh over twenty-five years: not a huge amount, but certainly enough to offset the cost of implementation and a welcome reduction in greenhouse gas emissions. Of course, this approach only works where the data center is located close enough to an office to take advantage of it, and it can be prohibitively expensive to retrofit into existing residential developments. However, while you can’t get people to live in the data center, you can sometimes get the data center into the home.
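To put Amazon's figure in perspective, a quick back-of-envelope calculation shows what 80 million kWh over twenty-five years amounts to per year (the per-kWh rate below is an illustrative assumption, not a figure from Amazon):

```python
# Sanity check on Amazon's claimed savings: 80 million kWh over 25 years.
TOTAL_KWH = 80_000_000
YEARS = 25

annual_kwh = TOTAL_KWH / YEARS  # 3.2 million kWh/year
print(f"Annual savings: {annual_kwh:,.0f} kWh")

# Assuming a commercial electricity rate of $0.07/kWh (an assumption
# for illustration only), the dollar value is indeed modest:
RATE_USD_PER_KWH = 0.07
print(f"Approx. annual value: ${annual_kwh * RATE_USD_PER_KWH:,.0f}")
```

At 3.2 million kWh per year, the savings are real but small relative to a campus-scale electricity bill, which is consistent with Amazon's framing of the project as cost-neutral rather than lucrative.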
In 2011, Microsoft Research proposed the use of “Data Furnaces” in the paper “The Data Furnace: Heating Up With Cloud Computing.” According to the proposal, self-contained micro-data centers on the order of 40 to 100 CPUs would be used as direct replacements for home hot-air heaters. Microsoft’s proposed business model assumed that some of the capital cost of the Data Furnaces would be borne by consumers purchasing them to replace existing home heating systems, with the service provider subsidizing the cost to make it comparable to that of a conventional home furnace and covering the operating costs. The paper suggested that certain types of workloads, including web crawling, content indexing, and the processing of large scientific data sets such as genome sequences, might be well suited to this kind of opportunistic processing.
This is not just a thought exercise. Building on the Microsoft Research Data Furnace concept, German startup Cloud&Heat Technologies, formerly known as AoTerra, is deploying self-contained server cabinets that can accommodate up to fourteen servers and can provide heat and hot water for a 2,100 square foot home. “We knew that there’s a tremendous need for new server capacity,” said Dr. Jens Struckmeier, of the Lothar Collatz Center for Computing in Science at the University of Hamburg, who co-founded the company. “Despite all efforts of making them more energy efficient by putting efficient cooling systems in data centers, they still use too much energy. You can increase the efficiency by using the heat and not wasting it into the environment.” Cloud&Heat has also deployed a system in a Dresden apartment building, Struckmeier says. “With 20 heaters, we’re providing all of the warm water demand for 56 apartments, plus some of the heat. A local energy supplier provides peak demand in the winter with a district heating system.” The cost of installation is about the same as that for a standard heating system but comes with the benefit of free heat and hot water. With Germany heavily dependent on fossil fuels for electricity production (19% from hard coal and 26% from lignite), the system offers significant benefits in curbing greenhouse gas emissions. A single heater can save up to six tons of carbon dioxide. Despite the cost of having to travel to each server cabinet for any repairs, the company says it still ends up being far cheaper than a conventional data center environment. Cloud&Heat is currently charging €0.389/hr. ($0.43/hr.) for an 8 vCPU Linux instance with 32 GB memory, 960 GB storage, and 7 TB network traffic, which compares very favorably with $0.592/hr. for a c4.2xlarge instance hosted in Amazon’s Frankfurt data center—accepting, of course, that a Cloud&Heat vCPU might not deliver the same performance as an Amazon vCPU.
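Working through the quoted prices makes the size of the discount concrete (figures are the hourly rates cited above; the performance-per-vCPU caveat still applies):

```python
# Compare the quoted hourly instance prices from the article.
CLOUD_AND_HEAT_USD_HR = 0.43   # 8 vCPU, 32 GB RAM Linux instance
AWS_C4_2XLARGE_USD_HR = 0.592  # c4.2xlarge (8 vCPU) in Frankfurt

saving = 1 - CLOUD_AND_HEAT_USD_HR / AWS_C4_2XLARGE_USD_HR
print(f"Cloud&Heat is ~{saving:.0%} cheaper per hour")  # ~27%

# Over a month of continuous use (~730 hours), the gap adds up:
hours = 730
delta = (AWS_C4_2XLARGE_USD_HR - CLOUD_AND_HEAT_USD_HR) * hours
print(f"Monthly difference: ${delta:.2f}")  # $118.26
```

A roughly 27% hourly discount is meaningful for sustained workloads, though the comparison only holds if the Cloud&Heat vCPUs deliver comparable throughput.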
A similar approach to residential cloud heating is being pioneered by two other European startups: Paris-based Qarnot Computing and Nerdalize, from the Netherlands. Rather than opting for a single central furnace, these companies offer smaller wall-mounted systems. From a homeowner’s perspective, as heaters go, both the Nerdalize heater and the Qarnot Q.RAD are comparatively large, much larger than an equivalent electric convection heater. The term “Data Furnace” is a bit of a misnomer: compared to resistive heating elements, microprocessors make poor heat sources; running a microprocessor at anything much over 75°C usually results in trouble, and their location in the home necessitates fanless designs for silent operation. The 1 kW Nerdalize heater is designed for always-on operation. To prevent overheating in the summer months, it must be mounted on an outside wall so that it can vent any unwanted heat out of the building. The 500 W Qarnot Q.RAD is thermostatically controlled, allowing the homeowner to regulate heat output by throttling compute throughput, and it can be turned off when not required for heating. The limited output of these devices means they are best thought of as supplementary heating systems, although for owners of newer, more energy-efficient homes it is possible to use them as primary heating.
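The Q.RAD's thermostatic coupling of heat output to compute throughput can be sketched as a simple proportional duty-cycle controller. This is a hypothetical illustration of the general technique, not Qarnot's actual control logic:

```python
# Hypothetical sketch of a thermostat that regulates room temperature
# by throttling compute load (and hence heat output), in the spirit of
# Qarnot's Q.RAD. Not based on any real firmware.

def duty_cycle(room_temp_c: float, setpoint_c: float,
               band_c: float = 2.0) -> float:
    """Fraction of full compute load to run, from 0.0 to 1.0.

    Full load when the room is `band_c` or more below the setpoint,
    zero load at or above the setpoint, linear in between.
    """
    error = setpoint_c - room_temp_c
    return max(0.0, min(1.0, error / band_c))

# A cold room runs flat out; a warm room idles.
print(duty_cycle(17.0, 21.0))  # 1.0  (cold: full compute)
print(duty_cycle(20.0, 21.0))  # 0.5  (near setpoint: half load)
print(duty_cycle(22.0, 21.0))  # 0.0  (warm: idle)
```

The scheduler would then dispatch batch work in proportion to the duty cycle, which is why such devices suit interruptible workloads like rendering rather than latency-sensitive services.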
These are obviously early days for the distributed cloud as a home heating system (Nerdalize only started field trials in March), but it is already clear that this approach can provide a low-cost alternative to hosting servers in expensive data centers. Cloud providers are becoming increasingly aware of the need to be seen to be green, and it is here that this technology has the greatest potential. It’s one thing to minimize excess energy consumption by using outside air to cool a data center; it’s something else again to put that waste heat to work. The challenge is finding the right workload, the right customer, and the right home. Cloud&Heat is the most like a conventional cloud provider, offering Linux and Windows PaaS as well as block and object storage services. Nerdalize and Qarnot currently fall short of being “true” cloud providers, inasmuch as they lack the on-demand, self-service elements normally considered primary characteristics of cloud services. They also focus on supporting customers with niche workloads that better align with the limited performance available from each node. Nerdalize, which at present only supports Docker, offers its computing capacity to clients in industry and academia, where uses can include medical research, video transcoding, complex engineering models, and several forms of scientific computing. Qarnot offers public support for Blender and Python workloads and can also support workloads based on R, GROMACS, Quantum ESPRESSO, NWChem, AutoDock, OpenFOAM, and NAMD. Scaling beyond these niche workloads, however, is going to be a challenge. There is still a marked reluctance in many businesses to trust their data to cloud services; taking the next step down this road and trusting business data to fully distributed clouds is a prospect that very few organizations would be willing to entertain today.
Perhaps the biggest opportunity lies with the mega data center owners, such as Apple, Facebook, Google, and Microsoft, which have huge internal workloads that they can more readily transfer to distributed data centers without raising customer concerns. Microsoft and Google in particular do not need to host their web-crawling services in centralized data centers. This is a workload that can be performed anywhere with minimal loss of efficiency, and Google even has a ready source of potential Data Furnace sites in the many apartment blocks connected via its 1 Gb/s Google Fiber service.
Until the end of the nineteenth century, it was common practice to keep livestock in the home during the winter months, not to keep livestock safe, but to heat the home. It’s by no means certain, but if these services take off we could start to see servers replace the cow as a far cleaner supplementary heating system in the very near future.