Is raising data center temperature like a game of “you blinked first”, only with your job on the line?
While no global standard exists for data center temperature recommendations, many refer to the white paper from ASHRAE Technical Committee 9.9 (TC 9.9) for Mission Critical Facilities, Technology Spaces and Electronic Equipment. As many know, the committee published a 2011 update titled 2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance. (Link to Whitepaper) With this document, ASHRAE’s TC 9.9 raised the recommended high-end temperature from 25°C (77°F) to 27°C (80.6°F) for Class 1 data centers (the most tightly controlled class). More importantly, the allowable high end was set at a warm 32°C (89.6°F), perfect for growing succulents like cacti.
And yet, recent posts on IT professional social media sites have produced questions like, “What gloves are recommended for data centers to help protect from cold temperatures?” So it appears not everyone is following ASHRAE’s guidelines. At the same time, many IT professional media discussions are about energy savings. And if I remember living through the 1973 OPEC oil embargo correctly, raising home air conditioning temperatures during the summer and lowering home heating temperatures during the winter saves energy and money. The U.S. Department of Energy’s website estimates a 1% energy savings for each degree the AC temperature is raised. Some sites claim 2%, 3% and even 4% savings, but even 1% of a data center’s energy budget is very significant.
What are data centers really doing? In a July 15, 2013 piece posted on the Computerworld U.K. website titled It’s getting warmer in some data centers, author Patrick Thibodeau notes that, “The U.S. General Services Administration, as part of data center consolidation and efficiency efforts, has recommended raising data center temperatures from 72 degrees Fahrenheit (22.2°C) to as high as 80 degrees (26.7°C). Based on industry best practices, the GSA said it can save 4% to 5% in energy costs for every one degree increase in the server inlet temperature.” (Link to Article) A 5% energy savings is something that makes IT managers really salivate.
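To see why those per-degree figures get attention, here is a back-of-envelope sketch of what they imply for an annual cooling bill. The baseline cost and the eight-degree raise (72°F to 80°F, per the GSA recommendation above) are illustrative assumptions, and the per-degree rates (1% from the DOE estimate, 4% to 5% from the GSA figure) are treated as compounding, which is one plausible reading of "savings per degree", not a measured result.

```python
def projected_cost(baseline_cost, degrees_raised, savings_per_degree):
    """Hypothetical annual cooling cost after raising the setpoint,
    compounding the per-degree-Fahrenheit savings rate."""
    return baseline_cost * (1 - savings_per_degree) ** degrees_raised

# Assumed annual cooling-energy spend for a mid-size facility (USD).
baseline = 1_000_000

for rate in (0.01, 0.04, 0.05):
    cost = projected_cost(baseline, degrees_raised=8, savings_per_degree=rate)
    print(f"{rate:.0%} per degree over 8°F: save ${baseline - cost:,.0f}")
```

Even at the conservative DOE rate, the sketch suggests savings in the tens of thousands of dollars per year for a million-dollar cooling budget, which is why the GSA's 4% to 5% figure is so striking.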
eBay’s newest data center in Phoenix, Ariz. employs open-air cooling technology to reduce energy used for cooling as a percentage of total site power consumption. (Link to Image)
So where is the industry? The article continues that in the 2013 Uptime Institute survey, which included 1,000 data centers globally, almost 50% were operating between 71°F (21.6°C) and 75°F (23.9°C). The Uptime Institute noted that the survey did not show much change from the previous year. Incredibly, 37% of data centers were operating at a frigid 65°F (18.3°C) to 70°F (21.1°C). Some good news was that data centers operating at less than 65°F (18.3°C) had decreased from 15% to 6% of those surveyed. This is a self-selected survey, so the data has to be viewed somewhat cautiously since some data center personnel may elect not to participate, but it is sobering nonetheless.
So what’s the problem? Server and other electronic equipment suppliers have participated fully in the TC 9.9 guidelines; they are certain that their equipment will operate within specification at the higher temperatures. Their warranties reflect this. And yet, other issues exist.
One may be the issue of poorly controlled buildings. Older, poorly insulated facilities with dated, less efficient HVAC equipment may be forced to lower the temperature to withstand elevated summer temperatures, especially if they have significant air leakage. Indeed, in the Boston area the month of July 2013 was on average 4°F (2.2°C) hotter than normal, a load that will tax even newer cooling systems. Finally, the elevated temperatures may apply only to the newer equipment in any given data center. Many data centers house a mix of equipment in which some of the newest, state-of-the-art servers share space with vintage electronics that need the cooler temperatures to operate without problems. And changing out equipment to allow a site to raise the temperature will mean assessing all electronic systems, including building facilities.
So the industry has a dilemma: save energy and operating cost by raising data center temperatures, which could require building, HVAC, and electronic equipment upgrades, or continue to pay higher operating costs. The flip side is the price to retrofit buildings, systems and electronic equipment; a cost that would be paid by “Facilities” or “Operations”, not “IT”.
Image from Slate.com piece about Google’s data center
(Link to Image)
Data center professionals are no different from those in other industries in that making change is hard, and it can come with risks. And changes to operating protocols are not made lightly when many data centers base their business strategy on reliability guarantees to their customers. Who among us is willing to stake their professional reputation, and possibly their job, on a major undertaking that contains variables that may be out of our control? So a studied approach is called for. But in the end, the cost of energy will inevitably increase, and the need to deploy more powerful servers will be irresistible. When that time comes, raising temperature limits will be examined closely as part of an overall business strategy. In the meantime, data center personnel may want to check out a recent Slate website post titled “The Internet Wears Shorts”, wherein the author describes Google technicians who work in summer clothes. The thrust is that Google has achieved significant energy efficiency, partially by running their data centers at “a balmy 80°F” (26.7°C).
Author: Dave Ruede is VP Marketing at Boston-based Temperature@lert (www.temperaturealert.com), a leading developer and provider of low-cost, high-performance temperature monitoring products. Professional interests include environmental and energy issues as they relate to data centers, clean rooms and electronics. Contact: firstname.lastname@example.org