
DATA CENTER OPTIMIZATION: BEWARE OF THE POWER DENSITY PARADOX 1

Data Center Optimization:
Beware of the Power Density Paradox

Avoid the trap—balance high density deployments with available power, cooling and space

by Karl Robohm and Steven Gunderson



INTRODUCTION

Driven by the necessity to deliver high-performance and 24/7 enterprise-wide services to customers and employees, many corporate data centers are “bursting at the seams” trying to support the explosive growth in servers, storage and network equipment. As a result, many organizations are running out of power, cooling or physical space in their data centers.

Too often the decisions made to resolve these facility problems are based on vendor and analyst hype that does not take into account your specific environment. Your legacy systems, physical infrastructure and long-term strategy all must be considered. Individual tactics such as high density blade servers and storage, data center containers, modular power systems, virtualization and cloud computing all offer some potential relief to limits on power, cooling and space—but only when implemented appropriately.

To increase capacity in a fixed space while at the same time trying to reduce power consumption per server,
many IT shops are turning to smaller, denser servers and storage systems. They hope that by packing more gear
into a smaller space they will delay or avoid the need for costly data center relocation, or the construction of an
entire new data center. And this certainly makes sense, at least up to a point.

AVOID THE TRAP. However, you need to plan carefully or you’ll fall victim to the “power-density paradox.” The paradox is that by using more dense equipment (with its need for additional power, cooling and backup), you will eventually reach an inversion point where your total need for data center space increases, rather than falls. This translates into greater capital and operational costs, rather than the reductions you’d been hoping for.

Your challenge is to balance the density of servers and other equipment in your data center with the availability of power, cooling and space so you truly gain operating efficiencies and lower net costs.

This white paper describes the power-density paradox, and how to avoid its traps by balancing facilities
capabilities with core IT needs. Such an approach can optimize the life span of your current facility or help you
create a new data center optimized to meet the needs of a modern, high-density environment. Ignoring the
impact of the power-density paradox will only:

• Increase overall data center operating expense (power and cooling)

• Increase capital expenditures for patchwork fixes (which also increase power
and maintenance expense); and

• Raise the danger of application downtime, thus putting your entire organization at risk.

Roots of the Power-Density Paradox


The power-density paradox stems from the great advances in server and storage technology over the last
decade. Because many data centers have not changed to accommodate the new, far more densely populated
environment, more organizations are facing its consequences—or soon will.

Consider how much application server technology has evolved since most data centers were designed 15 years ago. In the mid-1990s, floor-mounted minicomputers like the IBM AS/400 (still available today as the System i) were just beginning to be replaced with rack-mounted servers taking “only” 3-5U of rack space. That’s essentially a 14-fold increase in server density. By 2000 those rack-mounted servers had shrunk to 1U “pizza box” designs, only to be replaced beginning in 2002 by blade servers. These devices put multiple server motherboards (each with its own processor, memory, I/O connections and sometimes even a disk drive) on a single blade.

The net change is an astounding 84X increase in density of servers per rack, with the total compute power
increasing by several orders of magnitude. With every increase in the number of processors in a given space,
substantially more power is required to run and cool them. Just managing the thermodynamics becomes a
limitation to growth and reliability. The increased density also brings with it greater risk to the business when a
rack overheats and systems shut down or fail.
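The 14-fold and 84X figures can be sanity-checked with a little arithmetic. Here is a minimal sketch; the 42U rack height and the 14-blade, 7U chassis are illustrative assumptions typical of equipment of that era, not figures stated above:

```python
# Worked check of the density progression described above, assuming a
# standard 42U rack, 3U rack servers, and a 7U blade chassis holding
# 14 blades (counts vary by vendor; these are illustrative assumptions).

RACK_UNITS = 42                      # a full-height rack

servers_3u = RACK_UNITS // 3         # 14 servers where one minicomputer stood
servers_1u = RACK_UNITS // 1         # 42 "pizza box" servers per rack
blades = (RACK_UNITS // 7) * 14      # 6 chassis x 14 blades = 84 per rack

print(servers_3u, servers_1u, blades)  # 14 42 84
```

Under these assumptions the arithmetic matches the text: 14 rack servers in the footprint of one minicomputer, and 84 blades per rack.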

POWER UP. The denser the server environment, the more electricity is required to power and cool the space.
For example, it takes 60 to 100 watts of power per square foot to operate legacy minicomputers or full racks of
3-5U servers. The same space filled with smaller, 1U servers would require at least 200 watts per square foot,
and the latest blade servers require as much as 400 watts per square foot.
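The watts-per-square-foot figures above translate directly into facility load. A rough sketch, using an assumed 2,000 square foot raised-floor area for illustration:

```python
# Rough IT-load estimate from the watts-per-square-foot figures cited above.
# The 2,000 sq ft raised-floor area is an illustrative assumption.

DENSITY_W_PER_SQFT = {
    "legacy 3-5U racks": 100,   # top of the 60-100 W/sq ft range
    "1U pizza boxes":    200,
    "blade servers":     400,
}

def it_load_kw(floor_sqft: float, watts_per_sqft: float) -> float:
    """Total IT power draw, in kW, for a raised-floor area at a given density."""
    return floor_sqft * watts_per_sqft / 1000.0

for kind, density in DENSITY_W_PER_SQFT.items():
    print(f"{kind}: {it_load_kw(2000, density):.0f} kW")
```

Filling the same floor with blades quadruples the electrical load—from 200 kW to 800 kW in this example—and, as the next section explains, every one of those watts must also be removed as heat.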

COOL DOWN. Each additional watt of power consumed by the computing environment must be offset with an equivalent amount of cooling. Denser data centers also require more air-moving capacity to deliver the colder air and remove warm air from the space efficiently.

Then there is the fact that many data centers were never designed to remove the amount of heat put off by these devices. For example, the ceilings may not be high enough to let warm air rise from the servers; there may not be enough room under the raised floors to allow cool air to circulate sufficiently; or the CRAC (computer room air conditioning) units may not be properly placed to effectively cool all the racks and servers.

GIVE ME SPACE. This need for more power and cooling therefore drives a need for more support space,
creating the power-density paradox. Because higher-density servers require more power per square foot than
lower-density servers, they also need more support equipment such as air conditioning, uninterruptible power
supplies and backup generators. Because this equipment is usually housed in “support space” away from the
raised floor, IT and facilities managers often pay too little attention to the need for such increased support space
and its effect on overall data center costs and efficiency.

As a result of these dependencies, you cannot simply take an existing facility and begin adding more dense
servers and storage without addressing how these systems will be cooled. Without adding more CRAC units
and air flow, you will reach a point where your only option is to rework your floor plan around the higher
density environment. Without power and cooling upgrades, you may end up with very little net gain of
compute capacity in your existing facility by going to higher density devices.

[Figure: Floor Plan Options with 100 Watts/Sq. Ft. Capacity — layouts using 5kw, 10kw and 20kw racks. SOURCE: TDS 2009]

This diagram illustrates different layout options for a 2,000 square foot facility as rack density increases from 5kw to 20kw (based on 200kw total power and cooling capacity). While the actual layouts may vary, you cannot deploy more than 200kw of IT systems without upgrading the electrical and cooling infrastructure.

As the power density (watts required per square foot) increases in the raised floor portion of the data center, so
does the space needed for additional support equipment. In fact, at a density of about 400 watts per square
foot, plan to allocate around 6 times the usable data center space for internal cooling and backup power. To cost
effectively increase the power density capacity for an established facility, there must be adequate support space
to house the additional power/cooling infrastructure, sufficient raised floor space to handle the increased
airflow demands and of course sufficient power for the systems and support gear to operate. If any of these are
inadequate, the data center will not scale.
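The scaling described above can be sketched numerically. Only the 400 watts per square foot ratio (roughly six times the usable space devoted to support) is stated in the text; the lower ratios below are illustrative assumptions chosen to show the shape of the curve:

```python
# Hedged sketch of how support space scales with power density. Only the
# 400 W/sq ft figure (~6x usable space for support) comes from the text;
# the lower ratios are illustrative assumptions.

SUPPORT_RATIO = {50: 0.5, 100: 1.0, 200: 2.5, 400: 6.0}  # support sq ft per usable sq ft

def total_footprint(usable_sqft: float, watts_per_sqft: int) -> float:
    """Usable raised floor plus the power/cooling support space it drives."""
    return usable_sqft * (1 + SUPPORT_RATIO[watts_per_sqft])

print(total_footprint(2000, 400))  # 14000.0 sq ft total for 2,000 usable
```

At 400 watts per square foot, 2,000 square feet of usable raised floor drives a total footprint of roughly 14,000 square feet—the heart of the paradox.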

Data Center Space Allocation at Different Power Densities

[Chart: square feet of space required (up to 5,000), at each power density, divided into usable space, CRACs, UPS & electrical, and generator space. SOURCE: TDS 2009]

  Watts/Sq.Ft.    50W         100W        200W      400W
  Raised Floor    6-12 in.    12-18 in.   3 ft.     4+ ft.

This chart shows the relationship of support space to usable space at various levels of power density based on a facility with 2,000 square feet of raised floor. This example is limited to the support space required for the additional CRAC units, UPS (and related electrical) and backup generators, but does not include the additional space required for upgraded power cables, airflow, upgraded chillers and additional fuel storage required for the upgraded generator capacity.

So unless you have been upgrading your power, cooling and air flow along the way, chances are good that any data center more than a few years old is incapable of readily and economically supporting environments over 200 watts per square foot. To boost the capacity of your existing facility without raising costs, or to properly design a new data center, you must consider the power-density paradox.

The Need to Optimize IT Facilities


The problems associated with the power-density paradox are often made worse by poor data center design and
management practices. These may have been acceptable in an era of lower-density equipment but become
intolerable as densities rise.

For example, equipment racks may have been laid out haphazardly over the years as various business units
required new server capacity. Because the new servers and racks were added without an overall plan for data
center airflow and cooling, they often cause hot spots, inefficient cooling and increased power consumption.

In addition, older data centers may have only 12” to 18” of space under their raised floors for cooling (likely shared with network and power cabling), rather than the 36” or more required for modern, high density data centers. Reconfiguring the racks for hot aisle/cold aisle air flow will improve efficiency somewhat, but provide no real increase in overall capacity, since the cold air flow is limited by the shallow raised floor. In the end, the organization is still limited by the fundamental capabilities of the data center infrastructure.

WORKING AT CROSS PURPOSES. Another poor practice that’s all too common is using the factory default settings for data center infrastructure equipment, even if those settings are not appropriate for your deployment. A good example of this can be found with the CRAC units. By default, CRAC systems run independently of one another, and each unit will try to maintain the relative humidity and temperature around it in a specific range. This results in the CRAC units fighting against one another, with one unit cooling and another heating, one humidifying and another dehumidifying the same air. Besides wasting electricity, this causes extra wear and tear on the units, which leads to unnecessary (and expensive) downtime, maintenance and hardware replacement.

The humidification systems within the CRAC units can themselves be another source of waste. Like pans boiling water on the stove, these systems rely on steam generation. Unfortunately, steam has the obvious side effect of raising the data center’s air temperature, making CRAC units work even harder (and use more power) to cool the data center environment. The result is a vicious circle that adds unnecessary heat that you’ve then got to cool, increasing your spend on power. Again, this also causes extra wear, tear and maintenance for the CRAC units.

There are simple solutions to some of these problems. One is to install a single control unit for all the CRAC
units so they don’t work against one another. If you are at the actual limits of your cooling capacity, you can
also buy some time by installing ultrasonic humidification (which actually helps cool the environment) and
turning off built-in humidification systems. This will create humidity more efficiently without raising the air
temperature through a steam process.

Critical Risks of Ignoring the Paradox


There are several big risks for those who choose to disregard the power-density paradox.
RISK 1: WASTED POWER AND COOLING COSTS. The first and most obvious risk of implementing high
density servers without accounting for the power-density paradox is dramatic and unnecessary increases in
power and cooling costs, as well as in maintenance of equipment such as CRAC units. If you are already at the
limits of available power, this extra demand will also reduce the net power available for IT computers, storage
and networking systems.

RISK 2: UNANTICIPATED EQUIPMENT EXPENDITURES. The second risk is unanticipated capital and
operating expenses for band-aid type solutions which attempt to keep a sub-optimal facility in operation. The
use of specialized air handling equipment such as fan tiles or portable/standalone CRACs are warning signs
that a data center may be approaching the end of its useful life. Even if these systems extend the life of the data
center, they add to your power consumption and maintenance expense while introducing additional points of
potential failure.

RISK 3: DOWNTIME. Servers deployed in a high density environment are at much greater risk from unexpected downtime than those in lower-density environments. Even if the UPS continues to provide operational power to the devices during a loss of utility power, the facility will lose cooling and airflow until the generator kicks in and the cooling system recycles. In some cases we have seen data center temperatures rise from 70 degrees to over 100 in just 4 minutes before the cooling systems came back online. This accelerated heating is caused by the hot exhaust of high density systems and the relatively small bank of reserve colder air in the space to offset it.

The downtime caused by such a cooling failure could be minutes, if the servers detect a temperature spike and shut down to prevent damage; or it could be much longer in the event the excess heat actually damages hardware.

Depending on the architecture used to assure application software resiliency, this could quickly lead to
application downtime, reduced overall performance/throughput, and financial risk to the business.

When designing a new data center, ignoring the power-density paradox can lead to poor choices that waste money. Trying to minimize the amount of raised floor space by installing high-density servers may require more expensive power, cooling and backup systems. In many cases, this more than negates the space-saving advantages of high-density systems. When optimizing the critical resources of a data center, it is often less expensive to purchase more real estate at $50 per square foot than to add the $1,000 - $2,000 per square foot in MEP (mechanical, electrical and plumbing) infrastructure required to support the higher power density.
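The gap between those two per-square-foot figures is stark. A simple illustration, using a hypothetical 1,000 square foot expansion (the area is an assumption; the unit costs are the ones cited above):

```python
# Illustrative comparison using the per-square-foot figures above:
# buying more floor space vs. densifying the same space with added
# MEP infrastructure. The 1,000 sq ft area is a hypothetical example.

REAL_ESTATE_PER_SQFT = 50          # cited cost of additional real estate
MEP_PER_SQFT = (1_000, 2_000)      # cited range for added MEP infrastructure

AREA_SQFT = 1_000

extra_space_cost = AREA_SQFT * REAL_ESTATE_PER_SQFT
mep_low, mep_high = (AREA_SQFT * cost for cost in MEP_PER_SQFT)

print(extra_space_cost)   # 50000   -- buy more space
print(mep_low, mep_high)  # 1000000 2000000 -- densify instead
```

Even at the low end of the MEP range, densifying costs twenty times as much per square foot as simply buying more space, where more space is available.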

FACTORS TO WEIGH. Getting the maximum computing capability at the lowest total cost requires weighing a
variety of factors, ranging from real estate and power costs to the proper configuration of CRAC units and the
right choice of humidification equipment. Advances in processor technology have moved so quickly that
designers cannot simply add more servers, storage and network gear into the same space without regard for the
power, the thermodynamics of cooling this hardware and the support equipment required by high density
servers. To maximize application uptime and minimize costs and risks, data center designers and facilities
managers must understand the power-density paradox and act accordingly.

Boston Area Data Center Recovers 2,800 MWh of Power per Year

By anticipating the impact of the power-density paradox, a 16,000 square foot, Tier 3 class data center in Boston was able to reduce its annual power requirements by 2,800 megawatt-hours. This was achieved by three infrastructure upgrade projects working in tandem to increase power and cooling efficiency—and resulted in savings of $420,000 per year. (Of course, as power costs go up, future savings will be even greater.)

These efficiency improvements were implemented quickly and efficiently. Taking into account the power company
rebates, these three projects had a combined payback of 8 months, with the high efficiency motors and pumps
having a payback of only 3 months. In addition to these cost savings, the use of ultrasonic humidification also
recovered 15% additional cooling capacity.

  Project                    High Efficiency     Ultrasonic         High Efficiency
                             Motors & Pumps      Humidification     Transformers

  Turnkey Project Cost       $180,000            $325,000           $184,000
  Utility Company Rebates    $135,000            $190,000           $82,000
  Net Cost                   $45,000             $135,000           $102,000
  Annual Power Reduction     1,250 MWh           1,050 MWh          525 MWh
  Annual Power Savings*      $185,000            $156,000           $79,000

* Savings based on electric power cost of $0.15 per kWh

While the amount of infrastructure needed for most corporate data centers is less than this example, these same
approaches to conserve power and related expenses can pay off for facilities as small as 1,000 square feet.
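The payback periods quoted above follow directly from the case-study figures. A short sketch reproducing the arithmetic:

```python
# Reproducing the payback arithmetic from the case-study table above.

projects = {
    # name: (turnkey cost, utility rebate, annual power savings), all in $
    "motors_pumps": (180_000, 135_000, 185_000),
    "ultrasonic":   (325_000, 190_000, 156_000),
    "transformers": (184_000,  82_000,  79_000),
}

def payback_months(cost: int, rebate: int, annual_savings: int) -> float:
    """Months to recover the net (post-rebate) cost from annual power savings."""
    return (cost - rebate) / annual_savings * 12

total_net     = sum(c - r for c, r, _ in projects.values())
total_savings = sum(s for _, _, s in projects.values())

print(round(payback_months(*projects["motors_pumps"])))  # 3 months
print(round(total_net / total_savings * 12))             # 8 months combined
```

The high efficiency motors and pumps pay back in about 3 months, and the three projects combined ($282,000 net cost against $420,000 in annual savings) in about 8 months, matching the figures in the text.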

The Solution: Optimize and Balance


The power-density paradox is a result of the physics of squeezing higher-density systems into smaller and
smaller spaces. The fact is, nobody can outsmart the paradox, and you ignore it at your own peril. A better
approach is to understand the impact it will have on your organization and develop a forward-looking plan
based on a cross-disciplinary perspective.

Cross-disciplinary means that any assessment involving your data center’s computing power, electrical requirements and cooling facilities should come from a diverse group of business, technical and IT managers. The team should include IT, operations and facilities so they can all understand the effects of their choices on the overall data center environment.

NO MORE SILOS. In practice, many corporate departments operate in isolated islands, or “silos,” that make
decisions somewhat autonomously. This is fine for most day-to-day activities. But data centers by their nature
are not autonomous. While IT equipment is purchased, installed and often maintained by IT, power and
cooling are usually the responsibility of the facilities staff, which often doesn’t understand the power and
cooling needs of modern, high-density servers. Involving all the affected parties helps keep everyone focused on
the organization’s objectives to reduce costs, make the most of current assets and avoid unnecessary capital
expenditures in today’s recessionary environment.

Outside Experts Can Help


Regardless of whether you are trying to extend the life of a current facility or building a high density data center, this is not the time for on-the-job training. The power density paradox makes seemingly simple decisions more complex than they appear on the surface. Involving expert outside assistance can help in several ways:

• An independent, third-party perspective can balance the needs and challenges facing the IT,
facilities and finance groups.

• Specialized expertise in high-density data center design and operations can save time and
money while providing a flexible path to longer term needs.

• Knowledge and experience with best practices tools, processes and efficient technologies to
support modern, high density data centers may fill gaps in your staff’s experience or
availability.

• Expertise in modern HVAC and MEP infrastructure alternatives can extend the life of current
facilities or be leveraged in the design of a new, high density facility.

• Experience qualifying for and obtaining utility rebates can subsidize efficiency improvements in the
data center, lowering capital costs and reducing operating expenses.

Ideally, this expert resource will offer both a business and a technical perspective, provide unbiased advice
untainted by vendor or analyst hype and can recommend and help implement the best technology, facility and
operational options for you.

WHAT YOU LEARN. Through such an assessment, you may learn you can delay building or relocating to a new data center. You may instead find the optimum approach is to mix some high-density systems with some lower-density systems, or to simply not use all the vertical space in each rack. You may find that you need to simply reprogram your CRAC units for your environment, consolidate the controls or move to ultrasonic humidification systems, which can provide tremendous savings. For example, in a 40,000 square foot data center in Massachusetts, the use of ultrasonic humidification instead of steam resulted in savings of $900,000 per year in utility costs, along with $52,000 a year in reduced maintenance costs.
Some of the investments in new facility infrastructure, such as ultrasonic humidification,
can be recovered immediately through power company rebates. Depending on the size of your efficiency
improvement, such rebates could cover 40 percent to 50 percent of the equipment and installation costs. You
will, however, need to audit your power usage before and after the upgrades to prove the improvements you
made led to the reduced power consumption.

Another recommendation may be to allocate more space than you need right now, and build the data center
out to support your requirements in a modular fashion. This can result in lower long-term costs than trying to
squeeze as many servers as possible into the smallest area.

Whatever the specific actions that result from such an assessment, you’ll be making data center design and
operation decisions based on a true understanding of data center operations. You won’t be trapped into
unnecessary spending or risk by the power-density paradox.

CONCLUSION

Organizations facing severe cost, real estate and power constraints should not rush blindly into the use of ultra
high-density servers and storage systems to save space and money in their data center. If this is done without
proper planning, and a holistic analysis of business needs and the data center environment, the use of such
equipment can actually increase costs and business risks.

Conducting an overall assessment of the data center environment, perhaps with the help of qualified outside
specialists, can produce dramatic short-term cost savings and delay or even eliminate the need for costly data
center construction or relocations.

About the Authors


Karl Robohm leads the TDS Data Center practice. Karl has designed, built and managed over a half a
million square feet of high density and high efficiency (green) data centers for commercial and carrier-
grade clients.

Steven Gunderson co-founded and operated two commercial data center companies, Enclave Properties
and CO Space, prior to his current leadership role at TDS.

About Transitional Data Services (TDS)


Since 2002, Transitional Data Services has provided data center consulting, systems development and
technical operations services that help clients boost the performance of their technology operations and
business processes. TDS provides independent assessments, recommendations and improvements for IT
including data center designs, relocations, operational support, ERP, web and mobile applications. Our
recommendations cross departmental and technology silos to achieve the best ROI for our clients. Since
we do not operate as a vendor or VAR, we are not tied to a specific product portfolio and are not biased
by the latest trends or the highest commissions.

TDS clients include successful organizations of all sizes and focus including John Hancock, Monster.com,
Boston Red Sox, Cedars Sinai and Liberty Mutual. For more information, please visit
www.transitionaldata.com.

© 2009 Transitional Data Services. All rights reserved. 877-973-3377

www.transitionaldata.com
