
Cooling the Green Data Center


Electrical Construction and Maintenance

Robert McFarlane, Shen Milsom & Wilke


Sun, 2012-04-01 12:00

Data centers are critical to our daily lives, but they continue to get bigger and more power-hungry. Now we want
them to be green as well. The good news is that energy-efficient design can create a better environment for the
computing equipment and reduce energy costs; data centers don't have to be energy wasters, despite the fact that
they require substantial amounts of power. Specialized power and cooling equipment is available that gets the
job done, but special techniques are required to get the most out of it.

Staying Cool

Legacy techniques, if properly applied, cool approximately 5kW per rack. The problem is that many cabinets today
are drawing 15kW to 40kW, which may soon rise to 60kW. It's easy to see why overheating is such an issue. It's
also obvious that these densities can't be addressed with conventional air conditioners.
The old practice of ringing the room with computer room air conditioners (CRACs) doesn't work across the
board. Placing CRACs at right angles causes uneven air flow and creates under-floor vortices that reduce
pressure and cooling effectiveness while consuming unnecessary energy. However, CRACs still have a place.
They should be placed perpendicular to cabinet rows and aligned with the hot aisles, unless specific methods
have been employed to isolate and control the return air. That may seem counter-intuitive, but when no other
means of air control is employed, this placement minimizes the amount of cold air that a CRAC can pull back
into itself.
A boatload of new tools is available on the market to handle every situation efficiently, including the ability to
cool really high-density equipment for high-performance computing. The key is to know what the various
systems are and when to use them. There are two basic principles for energy efficiency that apply whether
cooling is done from below a raised floor or overhead:

1. Keep hot and cold air from mixing.
2. Maximize the temperature of the air returning to the air conditioners.

Mixing results in warmer air being delivered to the servers and cooler air returning to the air conditioners. You
can avoid mixing by controlling return air, which results in more efficient equipment cooling, higher cooling
capacity from the air-conditioner coils, and less concern about humidity and condensation. Return air can be
channeled with overhead ducts, via a ceiling plenum, by containing either the hot or cold aisles to keep air
from mixing, or by a combination of these.
The highest efficiency is achieved with close-coupled, or source-of-heat, cooling. This means cooling units are
installed right next to the computing equipment. This greatly reduces the fan energy needed to push air under
the floor or through ducts, delivers the air right where it's needed in the right quantity at the correct
temperature, and pulls the hot air back in before it has a chance to go anywhere else.
Close-coupled cooling can be done with in-row or overhead cooling units, with rear door pre-coolers, or a
combination of approaches. Some of these devices circulate chilled water or condenser water through the
cooling units; some use refrigerant. However, all are superior to conventional cooling, if deployed
appropriately. Beyond these methods are systems that circulate coolant directly to the processors or those that
immerse the servers in cooling fluid.
There is no longer any reason to keep the data center igloo-cold. Substantially more energy is saved by operating
in accordance with the Thermal Guidelines for Data Processing Environments published by ASHRAE TC 9.9.
This document allows equipment intake temperatures up to 27°C (80.6°F) on a continual basis, and even higher
for a few days if necessary, without voiding warranties or significantly increasing fan speeds.
The 27°C limit was chosen because above this temperature fans speed up dramatically, resulting in significant
energy waste. (Doubling fan speed draws eight times the energy.) The increased power draw of thousands of
server fans could quickly offset the savings from higher-temperature operation. Good practice today is to control
air delivery to around 75°F, much better than the 55°F air that was the standard for decades.
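
To see why the fan-speed penalty is so steep, recall that fan power follows the cube of shaft speed (the fan affinity laws). The short Python sketch below works that relationship out for a hypothetical server fan; the 0.02kW rating is an illustrative assumption, not a figure for any particular server.

```python
def fan_power_kw(rated_kw: float, speed_fraction: float) -> float:
    """Fan affinity law: power draw scales with the cube of shaft speed."""
    return rated_kw * speed_fraction ** 3

# Hypothetical server fan rated at 0.02kW (20W) at full speed.
print(fan_power_kw(0.02, 0.5))   # 0.0025 kW at half speed
print(fan_power_kw(0.02, 1.0))   # 0.02 kW at full speed -- 8 times the half-speed draw
```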
This higher-temperature operation is suitable for legacy and new computers. It enables more hours of free
cooling, the use of outside air instead of mechanical refrigeration to remove heat, either through water-side
economizers (using the outside air to remove heat from circulating water) or through air-to-air heat exchangers
(air-side free cooling).

Water in the Data Center


Virtually all high-density systems use water somewhere in the cooling chain. It may be run directly to the
cabinets or hardware, or circulated through control units, which then distribute refrigerant to the cooling
devices. Water is far more efficient than air in removing heat. As temperatures continue to rise, the use of liquid
in the data center will become more prevalent. Although this prospect gives most IT people angina, it should not
be a concern so long as the piping is designed and installed correctly, and leak detection is adequately employed.
Pipe leaks are actually very rare.
It is likely that many data centers will eventually have equipment that requires water circulation, if they don't
already. Designing piping with extra connections in strategic locations means the data center is ready for rear-door
coolers, water-cooled cabinets, directly cooled servers, or whatever new form of cooling comes along.

Heat Originates with Power


While cooling may offer the greatest opportunities for energy savings, nearly all the heat in a data center is the
result of power consumption, so power reduction must also be considered. Data center power systems are
mostly concerned with reliability, which still trumps efficiency in most cases, even though that trade-off is no
longer necessary. The first power issue relates back to cooling design.
Motors use a great deal of electricity, particularly those on fans, pumps, and chillers. There are two fundamental
solutions available: variable-frequency drives (VFDs) on fan, pump, and chiller motors; and electronically
commutated (EC) fan motors. VFDs adjust motor speed to match cooling need as determined by sensor
information, so devices don't run at full speed when demand is lower. Through the use of VFD control, it is often
possible to run every part of a redundant system at lower speed under normal conditions, and actually use less
energy than the older approach of activating only the primary units and leaving the redundant ones shut down
until needed. This method also ensures that redundant units are always operational, and that there is no time lag
for redundancy to take over when a primary unit fails. EC motors (which are becoming common on
air-conditioner plug fans) can also reduce energy consumption by as much as 30% over conventional motors
because of their more efficient design.
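
The same cube law explains why spreading the load across every unit of a redundant cooling system can beat running only the primary units flat out. The sketch below compares the two operating modes for a hypothetical bank of cooling units; the 7.5kW fan rating and the four-unit airflow requirement are illustrative assumptions.

```python
def group_fan_power_kw(units_running: int, rated_kw_per_unit: float,
                       speed_fraction: float) -> float:
    """Total fan power for a group of identical units, with cube-law speed scaling."""
    return units_running * rated_kw_per_unit * speed_fraction ** 3

RATED_KW = 7.5        # hypothetical fan power per cooling unit at full speed
UNITS_NEEDED = 4      # hypothetical: the load needs four units' worth of airflow

# Older approach: four primary units at full speed, the redundant fifth shut down.
primary_only = group_fan_power_kw(UNITS_NEEDED, RATED_KW, 1.0)      # 30.0 kW

# VFD approach: all five units (redundant one included) run at 4/5 speed,
# delivering the same total airflow because airflow scales linearly with speed.
all_units = group_fan_power_kw(UNITS_NEEDED + 1, RATED_KW, 4 / 5)   # about 19.2 kW

print(primary_only, all_units)
```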
A major and historic energy waster is an oversized uninterruptible power supply (UPS). Most UPS units in use
today are of the double-conversion design, which means incoming alternating current (AC) power is rectified to
direct current (DC), which charges the batteries and is then converted back to AC through an inverter. Power is
lost as heat through each step in the change process, and further losses occur through every transformer in the
power chain.
Although UPSs have been designed more efficiently in recent years, even the best units are rated at only 95% to
97% efficiency at full load, which means they waste 3% to 5% of the power delivered to them. (That's as much as
50kW on a 1MW system, or 1,200kWh per day.) If UPSs run in the lower part of their capacity range (on the order
of 30% of rated load), their efficiency can drop dramatically, into the 80% range or below. This
problem is particularly acute when running redundant systems, because a 2N installation requires that each
half (UPSs, PDUs, etc.) be loaded to no more than 50% of capacity so that either can assume full load when
needed. But if UPS systems are grossly oversized to begin with (usually done because of poor load estimates or to
accommodate theoretical long-term future growth), it is easy for actual usage (particularly on a redundant
system) to drop down as far as 15% to 20%, which results in very low efficiency and enormous energy waste and
cost.
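
The arithmetic behind those figures fits in a few lines. In the Python sketch below, the full-load case reproduces the 1,200kWh-per-day number quoted above; the 85% efficiency assumed for the lightly loaded 2N case is an illustrative figure, not manufacturer data.

```python
def ups_loss_kwh_per_day(input_kw: float, efficiency: float) -> float:
    """Daily energy lost as heat in a UPS, given the power delivered to it."""
    return input_kw * (1 - efficiency) * 24

# Full-load case from the text: 1MW delivered to a 95%-efficient UPS.
print(ups_loss_kwh_per_day(1000, 0.95))      # 1200.0 kWh per day

# Oversized 2N case: two 1MW UPSs, each carrying only about 200kW (20% load),
# with efficiency assumed to sag to roughly 85% at that load point.
per_side = ups_loss_kwh_per_day(200, 0.85)   # 720.0 kWh per day, per side
print(2 * per_side)                          # 1440.0 kWh per day to serve just 400kW of IT load
```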
However, there are good solutions to this dilemma:
1. Right-size the UPS using systems that enable incremental growth.
2. Use newer intelligent line-interactive UPSs that can reach 98% to 99% average efficiency.
There are also systems being run on high-voltage DC to avoid double conversion completely, although those are
still somewhat controversial in many circles.
Regardless of UPS type and configuration, the computing hardware should be run on higher voltage. At 208V,
the power supplies run more efficiently. In addition, fewer conductors are used, and current draw is reduced,
which translates into the use of less copper. The only drawback is more challenging phase balancing, because each
load appears on two of the three phase wires. Incorporating good monitoring can help solve that problem, as well
as achieve more efficient operation and realize maximum capacity and efficiency from the UPS.
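
The current-reduction argument is straightforward arithmetic. The sketch below compares line current for the same load served at 120V and at 208V (the phase-to-phase voltage of a 120/208V three-phase system, which is why each load lands on two phase wires); the 5kW circuit size and unity power factor are illustrative assumptions.

```python
def line_current_amps(load_watts: float, voltage: float,
                      power_factor: float = 1.0) -> float:
    """Single-phase line current: I = P / (V x PF)."""
    return load_watts / (voltage * power_factor)

# Hypothetical 5kW rack circuit, unity power factor assumed.
print(round(line_current_amps(5000, 120), 1))   # about 41.7 A at 120V
print(round(line_current_amps(5000, 208), 1))   # about 24.0 A at 208V -- smaller conductors, less copper
```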
McFarlane is a principal of Shen Milsom & Wilke LLC, heads the data center design and consulting practice,
and is considered a leading authority in the design of the physical space and its power and cooling. Based in
New York, he can be reached at rmcfarlane@smwllc.com.

Source URL: http://ecmweb.com/ops-amp-maintenance/cooling-green-data-center
