http://ecmweb.com/print/ops-amp-maintenance/cooling-green-data-center
Data centers are critical to our daily lives, but they continue to get bigger and more power-hungry. Now we want
them to be green as well. The good news is that despite the substantial amounts of power they require, data
centers don't have to be energy wasters: energy-efficient design can create a better environment for the
computing equipment while reducing energy costs. Specialized power and cooling equipment is available that gets
the job done, but special techniques are required to get the most out of it.
Staying Cool
Legacy techniques, if properly applied, cool approximately 5kW per rack. The problem is that many cabinets today
are drawing 15kW to 40kW, which may soon rise to 60kW. It's easy to see why overheating is such an issue. It's
also obvious that these densities can't be addressed with conventional air conditioners.
The old practice of ringing the room with computer room air conditioners (CRACs) doesn't work across the
board. Placing CRACs at right angles causes uneven air flow and creates under-floor vortices that reduce
pressure and cooling effectiveness while consuming unnecessary energy. However, CRACs still have a place.
They should be placed perpendicular to cabinet rows and aligned with the hot aisles, unless specific methods
have been employed to isolate and control the return air. That may seem counterintuitive, but when no other
means of air control is employed, this placement minimizes the amount of cold air that a CRAC can pull back
into itself.
A boatload of new tools is available on the market to handle every situation efficiently, including the ability to
cool really high-density equipment for high-performance computing. The key is to know what the various
systems are and when to use them. There are two basic principles for energy efficiency that apply whether
cooling is done from below a raised floor or overhead:
result of power consumption, so power reduction must also be considered. Data center power systems are
mostly concerned with reliability, which still trumps efficiency in most cases, even though the two goals are no
longer mutually exclusive. The first power issue relates back to cooling design.
Motors use a great deal of electricity, particularly those on fans, pumps, and chillers. There are two fundamental
solutions available: variable-frequency drives (VFDs) on fan, pump, and chiller motors; and electronically
commutated (EC) fan motors. VFDs adjust motor speed to match cooling need as determined by sensor
information, so devices don't run at full speed when demand is lower. Through the use of VFD control, it is often
possible to run every part of a redundant system at lower speed under normal conditions and actually use less
energy than the older approach of activating only the primary units and leaving the redundant ones shut down
until needed. This method also ensures that redundant units are always operational, and that there is no time lag
for redundancy to take over when a primary unit fails. EC motors (which are becoming common on
air-conditioner plug fans) can also reduce energy consumption by as much as 30% over conventional motors
because of their more efficient design.
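The energy advantage of running both the primary and redundant units at partial speed follows from the fan affinity laws. A minimal sketch, assuming ideal affinity behavior (airflow proportional to speed, shaft power proportional to the cube of speed) and identical fans; real motors and drives add losses not modeled here:

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Shaft power as a fraction of full-speed power (affinity law: P is proportional to N cubed)."""
    return speed_fraction ** 3

# One fan at 100% speed: full airflow, 100% power.
single_fan_power = fan_power_fraction(1.0)

# Run both the primary and the redundant fan at 50% speed:
# same total airflow (0.5 + 0.5), but each draws only 0.5**3 = 12.5% power.
paired_fan_power = 2 * fan_power_fraction(0.5)

print(f"Two fans at half speed use {paired_fan_power / single_fan_power:.0%} "
      "of the energy of one fan at full speed")
```

Under these idealized assumptions, the redundant pair moves the same air for a quarter of the fan power, which is why VFD control of all units can beat running primaries alone at full speed.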
A major and historic energy waster is an oversized uninterruptible power supply (UPS). Most UPS units in use
today are of the double-conversion design, which means incoming alternating current (AC) power is rectified to
direct current (DC), which charges the batteries and is then converted back to AC through an inverter. Power is
lost as heat through each step in the change process, and further losses occur through every transformer in the
power chain.
Although UPSs have been designed more efficiently in recent years, even the best units are rated at only 95% to
97% efficiency at full load, which means they waste 3% to 5% of the power delivered to them. (That's as much as
50kW on a 1MW system, or 1,200kWh per day.) If UPSs run in the lower part of their capacity range (on the
order of 30% of rated load), their efficiency can drop dramatically, to the 80% level or below. This problem is
particularly acute when running redundant systems, because a 2N installation requires that each half (UPSs,
PDUs, etc.) be loaded to no more than 50% of capacity so that either can assume the full load when needed. But
if UPS systems are grossly oversized to begin with (usually because of poor load estimates or to accommodate
theoretical long-term future growth), it is easy for actual usage (particularly on a redundant system) to drop as
low as 15% to 20% of capacity, which results in very low efficiency and enormous energy waste and cost.
However, there are good solutions to this dilemma:
1. Right size the UPS using systems that enable incremental growth.
2. Use newer intelligent line-interactive UPSs that can reach 98% to 99% average efficiency.
There are also systems being run on high-voltage DC to avoid double conversion completely, although those are
still somewhat controversial in many circles.
Regardless of UPS type and configuration, the computing hardware should be run on higher voltage. At 208V,
the power supplies run more efficiently. In addition, fewer conductors are used and current draw is reduced,
which translates into the use of less copper. The only drawback is more challenging phase balancing, because
each load appears on two of the three phase wires. Incorporating good monitoring can help solve that problem
as well as achieve more efficient operation and realize maximum capacity and efficiency from the UPS.
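The reduced current draw at 208V follows directly from I = P / V. A quick illustration with a hypothetical 5kW load (the figures are the author's own example, not from the article), ignoring power factor:

```python
def current_amps(power_w: float, voltage_v: float) -> float:
    """Current drawn by a load at a given supply voltage, assuming unity power factor."""
    return power_w / voltage_v

load_w = 5000  # hypothetical 5kW of IT load
print(f"At 120V: {current_amps(load_w, 120):.1f} A")  # prints "At 120V: 41.7 A"
print(f"At 208V: {current_amps(load_w, 208):.1f} A")  # prints "At 208V: 24.0 A"
```

Lower current for the same power is what permits smaller conductors and less copper, at the cost of the phase-balancing effort described above.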
McFarlane is a principal of Shen Milsom & Wilke LLC, heads the data center design and consulting practice,
and is considered a leading authority in the design of the physical space and its power and cooling. Based in
New York, he can be reached at rmcfarlane@smwllc.com.