
Energy Efficient Data Centers

Daniel Costello
IT@Intel
Global Facility Services DC Engineering

The NYS Forum's May Executive Committee Meeting


Building an Energy Smart IT Environment
Legal Notices

This presentation is for informational purposes only. INTEL MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.
BunnyPeople, Celeron, Celeron Inside, Centrino, Centrino logo, Core Inside, Dialogic, FlashFile, i960, InstantIP, Intel, Intel logo, Intel386,
Intel486, Intel740, IntelDX2, IntelDX4, IntelSX2, Intel Core, Intel Inside, Intel Inside logo, Intel. Leap ahead., Intel. Leap ahead. logo, Intel
NetBurst, Intel NetMerge, Intel NetStructure, Intel SingleDriver, Intel SpeedStep, Intel StrataFlash, Intel Viiv, Intel vPro, Intel XScale, IPLink,
Itanium, Itanium Inside, MCS, MMX, Oplus, OverDrive, PDCharm, Pentium, Pentium Inside, skoool, Sound Mark, The Journey Inside, VTune,
Xeon, and Xeon Inside are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2007, Intel Corporation. All rights reserved.
Last Updated: Aug 28, 2006
Objective

• High density computing lowers TCO and increases energy efficiency
  – TCO is the driving force
  – Adoption will not happen unless there is a financial incentive
• Demonstrate how Intel is improving energy efficiency


New Demand Drivers
Continue to Challenge our IT Group
Demand drivers range from linear to exponential: integration of digital and analog circuits, increasing platform features and multi-core, increases due to platform validation, and process technology.

Corporate Acquisitions
70 since 1996, each with a data center

Growth1               1996     2006      Increase
Design Engineers      8,300    20,000    140%
Global Design Sites   8        64        700%
Design Data Centers   8        75        937%
Compute Servers       1,062    65,000    6,000%

1. This growth is specific to the Intel silicon design engineering environment and does not include overall corporate IT demand (e.g., ~25,000 servers in 2005 to support silicon design).
Data Center Assets
Age of Data Centers: 62% of DCs are more than 10 years old
Plans to Build a New Data Center: more than 1/3 forecast new DC construction
% of Applications in each Tier: 72% are non-mission-critical applications1

Applications drive the need for DC capacity (not hardware)

1. Source: Data Center Operations Council research. Tier 4 applications have more demanding service levels.
Data Center Consolidation: What is Intel's Strategy?

• "Right sizing" model1 to rebalance the number, locations, and use of the data centers
• Our cost analysis favors large global and regional hubs (Eastern Europe and Asia are bandwidth constrained)
• Choice of location is driven by TCO (a rough cost-split sketch follows the footnote below):
  – Construction/expansion costs (10 percent of total cost of ownership [TCO])
  – Local utility (power/cooling) costs and economizer usage (25 percent of TCO)
  – WAN maturity and bandwidth costs / operations headcount costs (10 percent of TCO)
  – IT hardware (55 percent of TCO; sales tax adds to this)
• Dependent on virtualization strategy
• Consolidation is opportunistic to maximize ROI

1. This model is for illustration purposes only and does not represent actual Intel data center locations.
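
A minimal Python sketch of the cost split implied by these TCO percentages; the total dollar figure is a hypothetical placeholder, not an Intel number.

# Minimal sketch: split a data center's TCO by the percentages on this slide.
# The total_tco figure below is a hypothetical placeholder, not an Intel number.

TCO_SPLIT = {
    "construction/expansion": 0.10,
    "local utility (power/cooling)": 0.25,
    "WAN bandwidth + operations headcount": 0.10,
    "IT hardware (plus sales tax)": 0.55,
}

def tco_breakdown(total_tco: float) -> dict:
    """Return the dollar share of each TCO component."""
    return {component: total_tco * share for component, share in TCO_SPLIT.items()}

if __name__ == "__main__":
    for component, cost in tco_breakdown(100_000_000).items():  # hypothetical $100M TCO
        print(f"{component:40s} ${cost:,.0f}")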
Why Density versus Space
• Savings
  – The same megawatts of power are required to run the same number of servers whether dense or spread out
    • Spreading out requires longer conductor runs
  – The same or more tons of cooling are required for "X" number of servers
    • Spreading out increases the space to be cooled
    • Drives up chiller plant size
    • Additional fan capacity is required
  – Spreading out cabinets adds cost in building square feet and raised metal floor (RMF) space
  – Density decreases the heating, ventilation, and air conditioning (HVAC) to uninterruptible power supply (UPS) output ratio by increasing the efficiency of the cooling systems
  – A high density DC is up to 25% more energy efficient than a low density DC
• Additional Scope
  – 42° F thermal storage for uninterruptible cooling system (UCS)
  – Fully automated data center and integrated control systems
  – Sound attenuation – NC60
Large Data Center Construction Economies of Scale
(Modular Approach1)
Chart summary: the industry average cost for a Tier II/III data center is USD 11,000 – 20,000 per kilowatt; Intel has a goal cost per module; after the fifth module, returns diminish.

• Design hub data centers to expand modularly, adding modules as demand warrants (a capacity sketch follows the notes below)
• New Intel data centers are targeting 500 watts per square foot and 15 kilowatts per rack
• It is critical to optimize data center design for thermal management2
  – Airflow, cooling, cabling, rack configurations, and so on

NOTE: The first two modules are based on actual costs to build a high performance data center at Intel. Modules 3 through 7 are projections. All
timeframes, dates, and projections are subject to change.
1. Intel® IT currently defines a given module as 6,000 square feet of data center floor space.
2. Source: Intel white paper June 2006 “Increasing Data Center Density While Driving Down Power and Cooling Costs”
www.intel.com/business/bss/infrastructure/enterprise/power_thermal.pdf
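
A back-of-the-envelope Python sketch, assuming the module size from footnote 1 and the 500 watts per square foot target above, comparing one module's capacity against the industry-average cost range from the chart; no actual Intel cost figure is used.

# Back-of-the-envelope sketch: capacity of one data center module and what it
# would cost at the industry-average range quoted on this slide. Intel's goal
# cost per module is not assumed here.

MODULE_SQFT = 6_000          # Intel IT module definition (footnote 1)
TARGET_WPSF = 500            # watts per square foot target
INDUSTRY_USD_PER_KW = (11_000, 20_000)   # Tier II/III industry average

module_kw = MODULE_SQFT * TARGET_WPSF / 1_000   # 3,000 kW = 3 MW per module
low, high = (module_kw * c for c in INDUSTRY_USD_PER_KW)

print(f"Module IT capacity: {module_kw:,.0f} kW")
print(f"Industry-average build cost for that capacity: ${low/1e6:.0f}M to ${high/1e6:.0f}M")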
Data Center Processor Efficiency Increases

2002: 1,000 sq. ft., 128 kW, 512 servers in 25 server racks, 3.7 teraflops
Today: 30 sq. ft., 21 kW, 53 blades in 1 server rack, 3.7 teraflops
2012: ?

A greater than 6x energy efficiency increase (arithmetic check below)

Power will continue to be the limiting factor
(increased performance per Watt and per square foot)

1. The above testing results are based on the throughput performance of Intel design engineering applications relative to each new processor and platform technology generation.
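
A quick arithmetic check of the greater-than-6x claim, using the 2002 and present-day figures above (same 3.7 teraflops, 128 kW versus 21 kW):

# Quick arithmetic check of the ">6x energy efficiency increase" claim on this
# slide: the same 3.7 teraflops delivered in 21 kW today versus 128 kW in 2002.

TFLOPS = 3.7
kw_2002, kw_today = 128.0, 21.0

eff_2002 = TFLOPS / kw_2002      # teraflops per kW in 2002
eff_today = TFLOPS / kw_today    # teraflops per kW today

print(f"2002:  {eff_2002:.3f} TFLOPS/kW")
print(f"Today: {eff_today:.3f} TFLOPS/kW")
print(f"Improvement: {eff_today / eff_2002:.1f}x")   # ~6.1x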
Accelerated Server Refresh

Data center power and heat reduction
• Intel® Xeon® processor 5300 series and forthcoming multi-core designs utilize the power efficient mobile architecture
• Reduces power consumption by 40 percent (65 watts versus 110 watts)1
• Blade servers offer an additional 25 percent power efficiency (direct current backplane)2; the combined effect is sketched after the footnotes below
• Offsets data center costs in cooling and floor space per given workload
• The above are key factors in equipment selection

New features we will deploy in the data centers
• Demand-based switching (DBS) and Data Center Power Thermal (DPT) advanced management dynamically tailor power to workloads when peak performance is not needed and allow management of power at the rack level
• Virtualization technology (virtualization at the hardware level) combined with our distributed job scheduler will provide on-demand OS provisioning

1. Source: Intel based on SPECint_rate_base2000* and thermal design power. Relative to 2H’05 single-core Intel® Xeon®
processor (“Irwindale”).
2. Based on internal Intel testing Q2 2006 using equivalent systems in a rack configuration versus a blade configuration.
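
A minimal sketch of the combined savings implied above; it simply applies the 25 percent blade gain on top of the 65 W versus 110 W processor reduction, which is an illustrative simplification rather than Intel's measurement methodology.

# Minimal sketch of the combined per-server power reduction implied by this
# slide: 65 W vs. 110 W processors (~40% less), plus ~25% better power
# efficiency from the blade form factor. Illustrative arithmetic only.

OLD_CPU_W, NEW_CPU_W = 110.0, 65.0
BLADE_EFFICIENCY_GAIN = 0.25

cpu_reduction = 1 - NEW_CPU_W / OLD_CPU_W                     # ~0.41 (the "40 percent")
combined_fraction = (NEW_CPU_W / OLD_CPU_W) * (1 - BLADE_EFFICIENCY_GAIN)

print(f"Processor power reduction: {cpu_reduction:.0%}")
print(f"Remaining power with blades: {combined_fraction:.0%} of the original")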
Key Metrics - ACAE

• In the past, air conditioning (A/C) systems have been poorly utilized in general purpose and manufacturing computing data centers.
  – Packaged computer room air conditioning (CRAC) units are capable of 20° F to 28° F Delta-T coil conditions (Temp In minus Temp Out)
  – A/C system Delta-Ts have been measured as low as 4° F
• Air Conditioning Airflow Efficiency (ACAE) is defined as "the amount of heat that can be removed per standard cubic foot per minute (SCFM) of cooling air" (watts of heat per SCFM); a sketch relating ACAE to Delta-T follows this list.
• In our studies, we have evaluated the advantages of increasing ACAE:
  – Reduced initial cost and noise levels (less A/C equipment)
  – Reduced operating costs (less fan horsepower)
  – Better overall cooling efficiency (less kW per ton of refrigeration)
  – Reduced facility support area per kilowatt of IT equipment
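
A minimal sketch relating ACAE to air-side Delta-T, assuming the standard sensible-heat rule of thumb for air (Q in BTU/hr ≈ 1.08 × CFM × Delta-T in °F); the constants are textbook HVAC approximations, not values taken from this deck.

# Minimal sketch relating ACAE (watts of heat removed per SCFM) to air-side
# Delta-T, using the standard sensible-heat rule of thumb for air:
#   Q [BTU/hr] ≈ 1.08 × CFM × Delta-T [°F]   (sea-level air; an assumption here)
# Converting BTU/hr to watts (divide by 3.412) gives ACAE ≈ 0.316 × Delta-T.

BTU_PER_HR_PER_WATT = 3.412

def acae_from_delta_t(delta_t_f: float) -> float:
    """Approximate ACAE in W/SCFM for a given air-side Delta-T in °F."""
    return 1.08 * delta_t_f / BTU_PER_HR_PER_WATT

for dt in (4, 20, 28, 43):   # Delta-T values quoted in this deck
    print(f"Delta-T {dt:>2}°F  ->  ACAE ≈ {acae_from_delta_t(dt):4.1f} W/SCFM")
# 4°F (poorly utilized CRAC) gives ~1.3 W/SCFM; 43°F (isolated hot aisle) ~13.6 W/SCFM.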
Key Metrics – PUE/DCE

IT equipment power is defined as the effective power used by the equipment that manages, processes, stores, or routes data within the raised floor space.
Facility power is defined as all other power to the data center required to light, cool, manage, secure, and power it (including losses in the electrical distribution system).
PUE is total power (IT plus facility) divided by IT equipment power, and DCE is its reciprocal. The industry average is estimated at a PUE of 2, or a DCE of 50% (see the sketch below).
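
A minimal sketch of these two metrics; the kilowatt split in the example is hypothetical and chosen to match the quoted industry average.

# Minimal sketch of the PUE / DCE metrics named on this slide. PUE is total
# data center power divided by IT equipment power; DCE (data center efficiency)
# is the reciprocal, expressed as a percentage. Example numbers are hypothetical.

def pue(it_kw: float, facility_kw: float) -> float:
    """Power Usage Effectiveness: total power / IT equipment power."""
    return (it_kw + facility_kw) / it_kw

def dce(it_kw: float, facility_kw: float) -> float:
    """Data Center Efficiency: IT equipment power / total power."""
    return it_kw / (it_kw + facility_kw)

it, facility = 1_000.0, 1_000.0   # hypothetical kW split matching the industry average
print(f"PUE = {pue(it, facility):.2f}")      # 2.00
print(f"DCE = {dce(it, facility):.0%}")      # 50%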
Intel Data Center Cooling Development

• Servers and storage across the board are growing in kW power consumption and corresponding heat output, while still increasing performance per watt.
• Example of an older data center with unmanaged airflow:
  – 67 WPSF; 2 kW-4 kW cabinets; ACAE at 4.7 W/CFM; bypass air at ≥35 percent
• By installing blanking panels in cabinets, removing cable arms, blocking all cable openings in the RMF, and placing perforated floor tiles only in the cold aisles, we improved airflow management to get:
  – 135 WPSF; 4 kW-8 kW cabinets; ACAE at 5.6 W/CFM; bypass air at 25 percent
• Next, we worked with multiple vendors to isolate supply and return air (eliminating vena contracta) using chimney cabinets or hot aisle enclosures, completed a CFD model, upgraded floor tiles to grates, and cleared all utilities below the RMF out of the air stream.
  – 247 WPSF; 8 kW-14 kW cabinets; ACAE at 7.5 W/CFM; bypass air at 10 percent
• The latest data centers were purposely built to host high-density systems: a two-story building, a one-story building with a 36" RMF and no utilities below the RMF, and even one without a RMF.
  – ~500 WPSF; 13 kW-17 kW average per cabinet; ACAE at up to 10.8 W/CFM; bypass air at <10 percent; CRAC units removed from the raised floor

A sketch of the airflow implied by each stage follows the abbreviation key below.

WPSF=watts per square foot; kW=kilowatt; W=watt; ACAE=air conditioning airflow efficiency; W/CFM=watts per cubic feet per minute; CFD=computational fluid dynamics; RMF=raised metal floor
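
A minimal sketch estimating the cooling airflow one cabinet needs at each stage above, dividing the upper end of each quoted cabinet load by the quoted ACAE; this is illustrative arithmetic, not Intel measurement data.

# Minimal sketch: approximate cooling airflow a cabinet needs at each stage of
# the airflow-management progression above, using the reported ACAE values.
#   required CFM ≈ cabinet heat [W] / ACAE [W/CFM]
# Cabinet loads use the upper end of each range quoted on the slide.

stages = [
    ("unmanaged airflow",            4_000,  4.7),
    ("blanking panels / tiles",      8_000,  5.6),
    ("chimney / hot aisle",         14_000,  7.5),
    ("purpose-built high density",  17_000, 10.8),
]

for name, cabinet_watts, acae_w_per_cfm in stages:
    cfm = cabinet_watts / acae_w_per_cfm
    print(f"{name:28s} {cabinet_watts/1000:4.0f} kW cabinet -> ~{cfm:,.0f} CFM")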
Chimney Cabinet

Two-Story Vertical Flowthrough High-Performance Data Center
Diagram labels: non-ducted hot air return space above the ceiling; 240 server cabinets; 600 ampere busway; plenum areas with cooling coils; electrical area
Hot Aisle Panel Closure System

Captions: slider and infill panel system; accommodates various heights; Plascore filler panel enclosures; back-to-back cabinets; storefront doors at end


CFD Model output: kW per rack

Cooling air leaking through the floor is used for cooling and as bypass air to temper the system delta-T to 43° F.

1,250 watts per square foot (WPSF), 30 kilowatts (kW) per cabinet, air conditioning airflow efficiency (ACAE) of 13.7 watts per cubic foot per minute (W/CFM)
Decoupled Wet Side Economizer System
Intel Data Center Energy Efficiency

HVAC performance index (%) = kW_HVAC / kW_UPS output (a short sketch follows the citation below)

kW=kilowatt; UPS=uninterruptible power supply
1 “Data Centers and Energy Use - Let’s Look at the Data.” ACEEE 2003 Paper #162. William Tschudi and Tengfang Xu, Lawrence Berkeley National Laboratory;
Priya Steedharan, Rumsey Engineers, Inc.; David Coup, NYSERDA; Paul Roggensack, California Energy Commission.
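
A minimal sketch of this index; the power figures are hypothetical.

# Minimal sketch of the HVAC performance index defined above: mechanical
# (HVAC) power divided by UPS output power, as a percentage; lower is better.
# The numbers below are hypothetical, for illustration only.

def hvac_performance_index(hvac_kw: float, ups_output_kw: float) -> float:
    """HVAC performance index as a percentage of UPS output power."""
    return 100.0 * hvac_kw / ups_output_kw

# e.g., 600 kW of fans/chillers/pumps supporting 1,500 kW of UPS output
print(f"HVAC performance index: {hvac_performance_index(600, 1_500):.0f}%")   # 40%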
Industry Moving to 45nm: Benefits

Reduction in source-drain leakage power

Compared to 65 nm technology, 45nm technology will provide:
• ~2x improvement in transistor density, for either smaller chip size or increased transistor count
• >20% improvement in transistor switching speed, or >5x reduction in source-drain leakage power
• >10x reduction in gate oxide leakage power
• ~30% reduction in transistor switching power

Providing the foundation for improved performance/watt


Tie it all Together
• Applications move to being virtual (remote, not dedicated)
• Enables consolidation of data centers to fewer instances
• Select data center hubs in cost-optimized, energy efficient locations
• Refresh servers along with data center construction
• High-density data centers are more energy efficient and cost less per kW than lower density ones
  – Greater than 6x compute-to-energy efficiency gain since 2002
  – Intel is running airflow data centers at ~500 watts per square foot (WPSF); CFD modeling shows we can increase airflow data centers to 1,250 WPSF (30 kW per rack)
  – Use economizers to maximize free cooling and lower energy costs
  – Increase supply temperature to increase free cooling and lower energy costs (55°F-95°F); data centers are currently at 72°F supply air
• Performance per Watt is required for both the platform and the DC
• $ is the driving force; adoption will not happen unless there is a financial incentive (<TCO)
• An extended invitation to visit Intel's Data Center Summit
