1 Introduction
2 Summary
3 Overview of metrics
4 Data centre energy efficiency
5 Estimation of fixed and proportional overheads
6 Metrics when selecting data centres and equipment
7 Monitoring and measurement
8 Impact of external temperature on data centre efficiency
9 Glossary
10 Acknowledgements
11 References
1 Introduction
Whilst there is considerable coverage of IT energy use, and despite the work of the European and other government agencies, there is still little hard information on the total size, power consumption or efficiency of the data centre market. In the absence of this information it is difficult to defend the data centre industry, predict growth or set effective metrics and targets. To deal with these issues, properly understand the scale of the problem and deliver improvements, it is essential that an initial set of measurements and metrics is agreed upon and that data collection commences on a large scale.
Opposing these constraints are demands from the business consumers of data centre services. The underlying growth in demand for IT services continues, and many businesses are now also looking toward IT systems to reduce their environmental impact in other areas, e.g. logistics systems for road transport or telecommuting. As businesses have become more dependent upon IT services, the requirements for availability and continuity of those services have increased, multiplying the equipment requirements. This is a particular issue in sectors where regulators cover IT systems. A failure to understand the relationship between the falling capital cost of IT equipment and the rising cost of housing and powering it in the data centre is also creating capacity and financial problems for many operators1.
1.2 Metrics
In this context of rising energy cost, energy security concerns, environmental pressure and business demand, data centre operators will soon be targeted, measured, grouped or labelled by the efficiency of their facilities. Many work streams are currently underway, both in the European Union and worldwide, to develop and apply efficiency metrics; this paper specifically investigates the metrics suggested for the EU Data Centre Code of Conduct.
The scope of the metrics discussed in this paper is restricted to the data centre mechanical and electrical infrastructure. These metrics do not reflect the efficiency with which IT services, the end product, are delivered to users. That is a clear end goal for the metrics development work streams, and the capability to form part of a holistic set of system metrics is a core consideration.
1.2.1 Methodology
The DCSG believes that for the industry to make real progress any data centre efficiency
metric will need to be part of a measurement methodology designed to calculate a
reasonable and fair approximation of the total environmental and financial cost of the
service provision from the data centre.
a http://www.spec.org/power_ssj2008/
b http://www.elsparefonden.org
2 Summary
There has been considerable progress in the market in a relatively short time in identifying data centre efficiency reporting metrics. A general consensus is developing that the devices directly involved in delivering the useful work of the facility, that is the IT equipment (servers, storage, networking and appliances), are the energy targets, and that any other consumption of energy, such as power conditioning and distribution or cooling, is overhead.
Operators should implement energy measurement for their facilities. This should, at the very minimum, be independent metering of the utility power to the facility.
There are many sources of variability to the energy use and efficiency of a data centre
including the IT workload and external temperature. These can create quite significant
differences in the measured efficiency of the facility at different times of day or year. For
this reason it is important to measure the total energy used by the data centre and the
total energy delivered to the IT equipment as a long term average as well as recording
the individual data points to obtain useful information4.
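The distinction between point-in-time and long-term figures can be sketched in a few lines of Python. The interval readings below are hypothetical; the point is that the long-term DCiE should be computed from total energies rather than by averaging the instantaneous ratios.

```python
# Hypothetical hourly interval readings: (facility kWh, IT kWh).
readings = [
    (180.0, 90.0),   # night: low IT load, fixed overhead dominates
    (210.0, 120.0),
    (260.0, 160.0),  # daytime peak
    (240.0, 140.0),
]

# Instantaneous DCiE varies considerably with load...
point_dcie = [it / total for total, it in readings]

# ...so the long-term figure is computed from total energies,
# not by averaging the individual ratios.
total_energy = sum(total for total, _ in readings)
it_energy = sum(it for _, it in readings)
long_term_dcie = it_energy / total_energy

print(min(point_dcie), max(point_dcie), long_term_dcie)
```

Recording the individual data points alongside the totals preserves the load and time-of-day variation that the long-term average hides.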
2.2.1 DCiE
As DCiE does not contain any data to separate fixed and variable costs, it cannot give any useful information about the marginal energy or cost of new or reduced IT electrical load. As a combined point measure, the DCiE of the data centre will change under any significant change in load.
Goal | DCiE | DCSG F&P
Provide a clear, preferably intuitive understanding of the measure | Y | -
Provide a clear, preferably intuitive direction of improvement | Y | -
Describe a clearly defined part of the energy to useful work function of the IT services | Y | Y
Be persistent, i.e. the metrics should be designed to be stable and extensible as the scope of efficiency measurement increases, rather than confusing the market with rapid replacement | Y | Y
Demonstrate the improvements available in a modern design of facility | Y | Y
Demonstrate the improvements available through upgrade of existing facilities using more efficient M&E systems | Y | Y
Provide a clear, preferably intuitive understanding of the impacts of changes | - | Y
Be reversible, i.e. it should be possible to determine the energy use at the electrical input to the data centre for any specified device or group of devices within the data centre | - | Y
Be capable of supporting what if analysis for IT and data centre operators in determining the energy improvement and ROI for improvements and changes to either the facility or the IT equipment it houses | - | Y
Table 2-1 Goals of metrics
As shown in the table neither the DCiE nor DCSG fixed and proportional metrics address
the full set of goals individually but complementary use of the two methods of analysis
meets all of the identified goals.
Scenario | DCiE | Fixed & proportional overheads | DCSG model
Assess changes to IT power provisioning processes | Very little information, difficult to predict the impact of changes | Good indicator of the impact of changes | Strong indicator of the impact of changes
Assess cost or energy benefit of Virtualisation | Weak indicator, ROI likely to be overestimated by unknown margin | Good indicator, effective ROI prediction | Strong indicator, effective ROI prediction
Assess M&E changes to existing data centre | Little information, difficult to predict the impact of changes | Poor indicator of the impact of changes | Strong indicator of the impact of changes
As before this table shows that a combination of metrics and methods is required to
effectively support operators and improve the efficiency with which IT services are
delivered.
3 Overview of metrics
There is considerable opportunity for improvements in ICT, and specifically in data centre energy efficiency. To realise these potential improvements it is important to provide not only reporting measurements and targets but also analysis metrics and tools that assist operators in understanding their facility and the impacts of their choices.
In this section we discuss the scope of current metrics, the development path to holistic metrics and some of the issues with the DCiE metric, specifically why it is a reporting metric and should not be used to make decisions about a data centre or to build a business case for any data centre or IT changes. Once reporting metrics such as DCiE have identified an efficiency issue, we require metrics with analysis capabilities to enable operators to make the practical decisions required in selecting a new, or improving the efficiency of an existing, data centre.
In the above table, if we only consider the ton miles per gallon metric it is clear that we should all buy 38 tonne articulated lorries to do our shopping. The issue is that we have discarded the information that tells us which vehicle is appropriate for our use, and the metrics are now likely to lead us to the wrong conclusion.
Parameter | Before | After | Units
Design DCiE of facility | 0.6 | 0.6 | -
Fixed utility load of facility | 100 | 100 | kW
Rated IT load of facility | 500 | 500 | kW
Fixed loss multiplier | 0.2 | 0.2 | W/W
Proportional loss multiplier | 1.5 | 1.5 | W/W
Rated utility load of facility | 850 | 850 | kW
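As a quick cross-check of the table values, the rated utility load follows from the two multipliers. This sketch assumes the fixed loss multiplier is applied to the rated IT load, which is consistent with the table's 100 kW fixed utility figure.

```python
# Parameters from the table above.
rated_it_load_kw = 500.0
fixed_loss_multiplier = 0.2         # W of fixed loss per W of rated IT load (assumed basis)
proportional_loss_multiplier = 1.5  # utility W drawn per IT W under load

# Fixed utility load, consistent with the table's 100 kW figure.
fixed_utility_load_kw = fixed_loss_multiplier * rated_it_load_kw

# Rated utility load with the facility at full rated IT load.
rated_utility_load_kw = (fixed_utility_load_kw
                         + proportional_loss_multiplier * rated_it_load_kw)

print(fixed_utility_load_kw, rated_utility_load_kw)  # approx. 100 and 850 kW
```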
Figure 4-1 IT power delivery path and losses in the data centre
Figure 4-1 shows a simplified representation of the power delivery and loss path in a data
centre. Utility power enters the building on the left and passes through the power delivery
chain to the IT equipment on the right. Each stage in the delivery chain has inherent losses,
shown by the red arrows as well as the specific overheads shown as their own paths.
The actual implementation of a data centre is considerably more complex than this diagram, and details such as whether the CRAC units are fed from the UPS may vary. The diagram is intended to give the reader a general understanding of how power flows through the facility.
The fixed losses are particularly significant in this type of design as the 2N+1 resilience
doubles the impact of the fixed losses of each component, although this is partially offset by
the reductions in square law losses achieved by running the equipment below rated loadd.
c Note that this is a simplistic model for CRAC and chiller plant but provides a useful approximation at this point. A more detailed model is used later in this paper.
d Note that as the fixed losses of data centre M&E equipment improve, the reductions in square law and other losses in 2N and 2N+1 facilities start to offset the increased fixed losses, reducing the overheads inherent in 2N type infrastructure resilience.
The graph shows the increasing power requirements at each stage of the delivery chain. Note that due to the fixed loads in the delivery chain, such as UPS battery maintenance power and CRAC fan power, the data centre would draw a significant proportion of its peak power even if all of the IT equipment were turned off. For instance, the figure above shows that at zero IT electrical load the data centre would draw around 670kW.
As shown in the table above and the graphs below, both the DCiE (Figure 4-3) and the PUE (Figure 4-4) are non-linear functions and are significantly influenced by the IT electrical load in the data centre. This presents an issue with the use of the DCiE and PUE metrics, as the IT electrical load at which they should be measured for each facility is not defined.
If the DCiE or PUE is to be measured and reported by data centre operators there is an incentive to measure at maximum IT electrical load to optimise the result, whilst if IT or utility electrical load is to be reported, as has also been suggested, the incentive would be to measure at the lowest power point, where the DCiE is at its worst. This conflict in measurement objectives and approaches would require the definition of some sort of standard load profile, which is unlikely to effectively represent the range of data centres and their utilisation.
Facility PowerZero  The power drawn at the utility feed at zero IT electrical load
Facility PowerFull  The power drawn at the utility feed at full IT electrical load
Rated IT Load       The rated IT electrical load of the facility

Fixed Overhead = Facility PowerZero / Rated IT Load

Proportional Overhead = (Facility PowerFull - Facility PowerZero) / Rated IT Load

Fixed overhead has no units as the component units are Watts / Watts.
Once these two values are determined for the facility the two loss components can be
plotted together, in the case of the data centre example from 4.3 these are;
Fixed Overhead = 0.65
Proportional Overhead = 1.41
Note that these two values sum to the PUE at full IT electrical load of 2.06.
Figure 4-5 Data centre power transfer as fixed and proportional losses
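The relationship between these quantities can be sketched as follows. The input powers are illustrative values chosen to reproduce the worked overheads above (0.65 fixed, 1.41 proportional, summing to the full-load PUE of 2.06), not measurements from a real facility.

```python
def overheads(power_zero_kw, power_full_kw, rated_it_load_kw):
    """Fixed and proportional overheads as defined above (dimensionless, W/W)."""
    fixed = power_zero_kw / rated_it_load_kw
    proportional = (power_full_kw - power_zero_kw) / rated_it_load_kw
    return fixed, proportional

# Illustrative inputs chosen to reproduce the worked values in the text.
fixed, proportional = overheads(power_zero_kw=650.0,
                                power_full_kw=2060.0,
                                rated_it_load_kw=1000.0)

# The two components sum to the PUE at full IT electrical load.
pue_at_full_load = fixed + proportional
print(fixed, proportional, pue_at_full_load)  # approx. 0.65, 1.41, 2.06
```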
This server exhibits a fairly high minimum power, though there are many significant and useful efforts already underway within the industry to reduce this and to produce devices that exhibit a far more linear relationship between IT workload and PSU power draw, as described in 4.5.
These two values can then be summed for any workload value to determine the utility
electrical load required to power and cool the server in the data centre.
Figure 4-6 Server utility power draw by fixed and proportional overheads
Figure 4-6 above shows the power drawn by the server defined in Table 4-3 across the range of applied IT workload, for the example data centre with a fixed overhead of 0.6 and a proportional overhead of 1.4. The mauve area shows the server's power draw ranging from 200W to 350W at the plug, whilst the yellow area shows the proportional losses of the data centre applied to the server's power draw curve. Including the proportional losses, the server power now varies from 284W at idle to 497W at full load.
We can determine the fixed utility load of the server by allocating a proportion of the facility fixed power draw based upon the proportion of the facility rated IT electrical load that is provisioned to this servere. This server uses a standard, hot swap power supply module rated at 700W, even though its peak draw as configured is only 350W. With this nameplate power of 700W, the purple-blue area shows that even when the server is turned off, the provisioned power multiplied by the fixed overhead gives a draw of 441W at the utility feed; this is substantially more than the peak draw of the server.
To determine the overall utility power for this server we add the fixed and proportional loads together. The server's total utility draw at the idle PSU draw of 200W is 737W, whilst at the full load PSU input of 350W the utility draw is 949W, as shown by the blue line.
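A minimal sketch of this allocation, using the rounded overheads quoted for the example facility (0.6 fixed, 1.4 proportional). Note that the paper's own figures of 441W, 737W and 949W come from its more detailed model (see note e), so the rounded outputs here differ slightly.

```python
def server_utility_draw_w(psu_draw_w, provisioned_w,
                          fixed_overhead=0.6, proportional_overhead=1.4):
    # The fixed overhead is charged against the power *provisioned* to the
    # server (its 700W nameplate PSU here), even when the server is off;
    # the proportional overhead scales with what the server actually draws.
    return fixed_overhead * provisioned_w + proportional_overhead * psu_draw_w

print(server_utility_draw_w(0, 700))    # server off: provisioned fixed share only
print(server_utility_draw_w(200, 700))  # idle PSU draw
print(server_utility_draw_w(350, 700))  # full load PSU draw
```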
e Note that in the DCSG data centre model the overall utilisation of the facility is also taken into account and that the fixed overheads of IT devices in a partially utilised data centre can be much larger.
Figure 4-7 Server power draw by fixed and proportional overheads vs. PUE
Unfortunately, as shown in Figure 4-7, the fixed and proportional overhead approach produces a significantly different result from simply multiplying the server's power draw by the power usage effectiveness (PUE) of the data centre. This is shown for both the design PUE (the red line) and the achieved PUE at the operating IT electrical load (the orange line). The PUE and DCiE are not able to account for the difference between the provisioned power and the actual power drawn, or for the fixed power floor of the data centre, and significantly underestimate the power drawn. Further, the PUE and DCiE can be misleading and may seriously overestimate the power savings at the utility feed. This will lead to overestimated ROI for low power IT equipment, which could damage operator confidence in these technologies as described in 3.3.3.
f For traditional chiller technologies; facilities with variable speed pumps, air or water side economisers may achieve optimum performance below 100% rated IT electrical load.
g The DCSG data centre simulator provides both methods of cost allocation.
Figure 4-8 Data centre fixed and proportional losses under modular provisioning
The graph in Figure 4-8 shows the data centre from section 4.3 with the same inefficient M&E equipment, but this time the M&E equipment provider uses modular provisioning of the PDU, UPS, CRAC and chiller systems in 200kW steps of rated IT electrical load. This provides substantial efficiency improvements in the early years of the facility operation, where the facility is at low utilisation, as well as reducing initial capital costs and improving flexibility. Whilst more complex than just the two values of fixed and proportional overhead, this graph is easy to visualise once it is understood that the fixed overheads increase in steps as the M&E infrastructure is provisioned.
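The stepped fixed overhead described above can be sketched as follows. The per-module fixed draw of 65 kW is an assumed value for illustration, not a figure from the paper.

```python
import math

MODULE_STEP_KW = 200.0      # provisioning increment of rated IT load
FIXED_KW_PER_MODULE = 65.0  # assumed fixed draw added per provisioned module

def provisioned_fixed_load_kw(it_load_kw):
    """Fixed overhead rises in steps as M&E modules are brought online."""
    modules = max(1, math.ceil(it_load_kw / MODULE_STEP_KW))
    return modules * FIXED_KW_PER_MODULE

for load_kw in (50, 150, 250, 450, 850):
    print(load_kw, provisioned_fixed_load_kw(load_kw))
```

Each 200kW tranche of IT load brings another module's fixed draw online, producing the stepped curve rather than the single flat floor of the monolithic design.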
The graph in Figure 4-9 shows the same modular provisioning approach in terms of the
facility New DCiE against the Base DCiE under monolithic provisioning as shown in
Figure 4-3. The DCiE function now varies significantly through the life of the facility with
distinct saw-tooth steps. Although the DCiE is significantly improved across much of the
IT load range, if this facility were to be measured purely on DCiE the results may be
confusing to the operator as well as difficult to explain to business management. This is
not a set of results that a facility operator would intuitively expect to see.
h 200kW IT load steps; the actual increments are larger for most devices due to the losses further down the chain.
To further illustrate the fixed and proportional overhead analysis for the modular
provisioning data centre Figure 4-10 shows the fixed overhead (Watts drawn / Watt
provisioned) and the proportional overhead (utility Watts drawn / IT Watts drawn) for the
facility at each of the 5 provisioning steps.
Note that in this facility with modular provisioning of the same M&E equipment, the
proportional overhead is very nearly constant whilst the fixed overhead is slightly higher
at the lower provisioned capacities. This is what one would intuitively expect to happen
within such a facility as there is no change in the nature of the equipment creating the
proportional losses and as the capacity increases the overheads of the 2N+1 resilience
reduce in proportion to the overall load.
i Again, 200kW IT load steps.
Figure 4-11 Fixed and proportional losses powering down unused CRACs
Whilst this does not produce as effective a reduction as modular provisioning of the
whole infrastructure, note that each 48kW step in CRAC fixed overhead produces more
than 65kW reduction in overall utility electrical load due to the reduction in proportional
overheads elsewhere in the infrastructure.
The graph in Figure 4-12 shows the same data, represented as the DCiE, again against the Base DCiE under monolithic provisioning as shown in Figure 4-3. The improvement is smaller here than in the modular provisioning approach, but this action is available in an existing facility at no capital cost. Note that whilst the curve is smoother and more predictable, the DCiE still varies significantly through the IT load range.
Figure 4-13 shows the facility overheads for this scenario. As expected, the proportional overheads are nearly constant whilst the fixed overhead falls substantially as more of the installed M&E equipment is utilised by IT equipment load and the fixed power draw becomes a smaller fraction of the overall power draw.
Many vendors, such as APC and Chloride, already produce modular M&E equipment which
can be installed in small increments to meet demands. Unfortunately once a unit of capacity
is provisioned the fixed overhead associated with that capacity is applied and instead of the
sawtooth efficiency curve of Figure 4-9 the facility is better described by the family of DCiE
curves shown in Figure 4-14.
We suggest that a natural extension of modular M&E equipment is equipment that self-tunes to the current electrical and thermal loads by turning off modular components when the IT load drops. This would also allow more flexible installation and more effective management of modularly provisioned solutions, as additional modules could be installed ahead of requirement and automatically provision themselves when the load breaks the set threshold.
As in the previous examples, fixed and proportional overheads will highlight these issues more effectively and in a more intuitive manner, driving understanding of the issues and the available mitigation strategies.
This is illustrated in Figure 4-15 for the set of facilities, all with the same design DCiE of 0.5
described in Table 4-4.
The graph, Figure 5-1 of the measured load data in Table 5-1 shows noticeable scatter
away from a straight line due to the random error used to simulate measurement errors
and other variations.
j See section 8 for further analysis of the impacts of external temperature.
Figure 5-2 Scatter plot of IT and utility load measurements with regression line
In the Microsoft Excel spreadsheet provided to DCSG members9 this regression analysis is performed with the LINEST() function, providing the following estimated values for the fixed and proportional overhead as well as the upper and lower 95% confidence boundsk;
k The upper and lower confidence bounds are determined by using the standard error and degrees of freedom outputs of the LINEST function as inputs to a two-tailed t-test.
These values are shown in the graph in Figure 5-3; the error bars show the upper and lower 95% confidence bounds for the estimation.
From these regression analysis values we can estimate the fixed and proportional
overheads for the facility to be;
Fixed Overhead = Facility PowerZero / Rated IT Load = 558,578 / 1,000,000 = 0.56

Proportional Overhead = (Facility PowerFull - Facility PowerZero) / Rated IT Load = 1.67
These values are a reasonable first approximation of the real values for this facility;
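For readers without the spreadsheet, the regression can be sketched in Python. The data below are simulated from assumed "true" overheads of 0.56 and 1.67 with added noise, and the plain ordinary least squares fit is the same calculation Excel's LINEST() performs (without the confidence bound outputs).

```python
import random

random.seed(42)
rated_it_load_w = 1_000_000
true_fixed, true_proportional = 0.56, 1.67  # assumed "real" facility overheads

# Simulated measurements: utility load = fixed floor + proportional * IT load,
# with random noise standing in for measurement error and other variation.
samples = []
for _ in range(30):
    it_w = random.uniform(0.3, 0.9) * rated_it_load_w
    utility_w = (true_fixed * rated_it_load_w
                 + true_proportional * it_w
                 + random.gauss(0, 20_000))
    samples.append((it_w, utility_w))

# Ordinary least squares fit of utility load against IT load.
n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
         / sum((x - mean_x) ** 2 for x, _ in samples))
intercept = mean_y - slope * mean_x

estimated_proportional = slope                # utility W per IT W
estimated_fixed = intercept / rated_it_load_w  # floor over rated IT load
print(round(estimated_fixed, 2), round(estimated_proportional, 2))
```

The estimates recover the assumed overheads to within the noise, which is the behaviour the paper describes for its simulated facility.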
Figure 5-4 Chiller plant power transfer by IT load and external temperature
As shown in Figure 5-4, the overhead of the data centre is influenced by the external temperature. In many facilities this is a major influence on the overall utility load and therefore on the efficiency. In the context of this variability the fixed and proportional overheads are considered to be instantaneously fixed or proportional: the fixed part of the load is still that which would remain if all IT electrical load were removed, and the proportional part is still the remainder. This allows us to average the fixed and proportional overheads of the facility over the operating temperature ranges.
Equipment audit
An audit of the M&E equipment in the facility should be conducted, gathering the power
specifications of the components from the manufacturer, maintainer or specification
plates. This is particularly relevant in the case of fixed load items such as chilled water
pumps or fixed speed CRAC fans.
Component measurements
The second approach is to identify further measurement points within the facility to
isolate the power drawn by the M&E components as explained in section 7.1. Power
delivered to each part of the infrastructure such as the CRAC units and chiller plant
should be independently measured and monitored where the electrical layout allows.
Again, this data can be fed back into a model of the facility to further tune the analysis.
These values would give a theoretical efficiency at 100% load of 89.3% for this UPS. If we
now examine the impact of these losses in the example data centre with 1MW rated IT
electrical load in 2N+1 redundancy configuration;
As shown in Table 6-2 and Figure 6-1, the efficiency of this UPS design and configuration falls away quite rapidly as the electrical load of the data centre reduces. A facility using nameplate provisioning is only likely to reach 50 percent load at peak, and many facilities take considerable time to fill with equipment.
l http://sunbird.jrc.it/energyefficiency/html/standby_initiative_UPS.htm
Figure 7-1 shows, at a high leveln, the points where energy use can be measured in the
data centre to assist in understanding the efficiency, losses and impacts of changes and
improvements. These are represented in three groups;
1. Simple measurements, these are the most basic level of measurement, required
to deliver a PUE/DCiE metric of the ratio of utility power to IT power for the
facility. These can be initially informative to the operator and are easy to perform
with portable equipment. It is important that the external and data floor set
temperatures are recorded along with the electrical measurements
2. Detailed measurements, these are the next level of measurement where we
specifically measure each of the points where power is lost in the delivery chain
or diverted to non IT loads. This provides more effective information on how to
improve the facility and can directly change relevant behaviour. This level may
be necessary dependent upon the electrical configuration, for example if the
CRAC units are fed from the UPS then CRAC power is a required measurement.
m Some data centre consultancies, such as Keysource (http://www.keysource.co.uk/index.asp?ID=242), now offer free efficiency audits.
n The details of the electrical design will vary between facilities; as such the power delivery path and measurement point(s) for each of the loads identified will vary. Expert advice should be sought to determine the measurement points within each specific facility.
Development stage | Measurement points | Recording | Frequency | Output
Simple survey | Simple | Manual - Temporary | Once | Report
Simple facility monitoring | Simple | Automatic - Permanent | 5 Minutes | M&E monitoring screen(s)
Advanced facility monitoring | Simple & Detailed | Automatic - Permanent | 5 Minutes | M&E monitoring screen(s)
Holistic monitoring | Simple, Detailed & IT | Automatic - Permanent | 5 Minutes | IT monitoring screen(s)
Table 7-1 Stages of energy monitoring
The recording equipment can be either temporary or permanently installed. Many M&E
maintenance vendors have portable equipment that can be used to carry out a survey.
There is also a growing range of power reporting modules to be fitted to PDUs, UPS and
rack level power strips. How the power metering is viewed by the facility operator has a significant impact on its effectiveness;
1. Report, the results of a survey will be delivered as a report, this provides a one
time view, or reminder every reporting period if recurring under contract
2. M&E monitoring screen, where permanent power metering is in place the outputs
can be sent live to a monitoring screen for M&E staff to view or extract reports,
this suggests an organisational separation of the IT and M&E teams
3. IT monitoring screen, the most integrated and effective choice is to send the
power metering to the same monitoring screens as the IT monitoring data for an
integrated IT and M&E team response.
7.3.1 How effectively does the IT electrical load track the IT workload?
It is quite common during installation of IT equipment for engineers to either not enable or
actively disable all of the power management features of the servers and storage they
are installing. In a few instances this is for a valid operational reason but it is
inappropriate in many cases. Measurement of the IT electrical load at varying times of
day and analysis of the variance will reveal how effectively the IT electrical load tracks
the IT workload and whether it is necessary to audit the power management
configuration of the IT devices.
The correlation between IT workload and IT electrical load should also improve as the IT
equipment is replaced during normal end of life upgrade as new IT equipment exhibits
substantially better load to power linearity. This correlation is a key measure of the IT
platform and should be monitored.
The graph in Figure 8-2 shows the joint impact of varying IT electrical load and external
temperature on the efficiency (DCiE) of the simulated data centre. As shown, the
efficiency of the data centre reduces noticeably with a rise in external temperature but the
IT electrical load is still the dominant influence on efficiency in this facility. This supports
the use of fixed and proportional overhead metrics derived from simple measurement
data for this class of facility.
Figure 8-2 DCiE by IT electrical load and external temperature, traditional cooling
As shown in Figure 8-3, humidity control is largely achieved through the intelligent use of control systems to manage the re-circulating air flow and the use of evaporative adiabatic humidifiers. For a more detailed explanation of fresh air cooling see Fresh Air Cooling12.
In this analysis we use the same inefficient chiller plant but the CRAC units are replaced
by a central air mover. We do not model the potential fan efficiency improvements of the
central air handler over distributed CRAC units.
As shown in Figure 8-4 there is a distinct change in efficiency starting at the set data floor
temperature of 21 Celsius through to the exhaust air temperature of 31 Celsius. At the
new recommended ASHRAE data floor temperatures as specified in the EU Code of
Conduct11 the set temperature would be closer to 27 degrees and the facility would
spend more of its operating time in the higher efficiency region.
Figure 8-4 DCiE by IT electrical load and external temperature, fresh air cooling
When operated with a varying IT workload and external temperature this facility will
exhibit significant changes in overall efficiency through the working day. The graph in
Figure 8-5 shows the large impact in the middle of the day for this type of facility where
high IT workload coincides with high external temperature in the summer. Comparison
with Figure 5-4 for the traditionally cooled facility shows both the increase in variation and
the significant improvement in overall efficiency.
This joint impact of IT workload and external temperature gives the daily variance in
DCiE shown in Figure 8-6.
Figure 8-6 Daily data centre efficiency by hour, fresh air cooling
The models used in this section to demonstrate the external temperature impacts on cooling power are deliberately simple and do not account for a number of variables such as the external humidity. There has already been substantial work demonstrating the application of more complex and effective models to data centre cooling loads13.
As shown in Figure 8-7 we can continue to use the fixed and proportional approximations
with this type of facility and receive effective analysis of their behaviour subject to
understanding the performance characteristics of the building.
Figure 8-8 Scatter plot and regression line of IT and utility load fresh air cooling
Figure 8-8 shows that the simple regression against only the IT electrical load is confused by the cluster of data points that occur at both high IT workload and high external temperature; the regression suggests a near zero value for the fixed overhead, which is clearly in error when compared with Figure 8-7.
To perform a regression analysis of this type of facility it would be necessary to perform a
multivariate regression including the external temperature. To perform this effectively it
would be advantageous to obtain some performance and set point data about the cooling
system. Considering the additional complexity this involves it is recommended that a full
simulation model of the facility be calibrated against the measured data instead to
determine the working (average) fixed and proportional overheads. In the BCS data
centre model it is possible to consider the instantaneous fixed and proportional
overheads at each time point to obtain greater granularity and accuracy.
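A sketch of such a multivariate regression on simulated data follows. The facility coefficients (450 kW floor, 1.5 W/W proportional, 8 kW per degree Celsius of external temperature) are assumed values for illustration only.

```python
import random

def fit_two_predictors(rows):
    """OLS fit of y = b0 + b1*x1 + b2*x2 via the normal equations."""
    # Accumulate X'X and X'y for the design matrix [1, x1, x2].
    xtx = [[0.0] * 3 for _ in range(3)]
    xty = [0.0] * 3
    for x1, x2, y in rows:
        v = (1.0, x1, x2)
        for i in range(3):
            xty[i] += v[i] * y
            for j in range(3):
                xtx[i][j] += v[i] * v[j]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for row in range(3):
            if row != col:
                factor = xtx[row][col] / xtx[col][col]
                for j in range(3):
                    xtx[row][j] -= factor * xtx[col][j]
                xty[row] -= factor * xty[col]
    return [xty[i] / xtx[i][i] for i in range(3)]

random.seed(1)
# Simulated facility: 450 kW floor + 1.5 W/W proportional + 8 kW per deg C.
rows = []
for _ in range(50):
    it_kw = random.uniform(300, 900)
    temp_c = random.uniform(5, 30)
    utility_kw = 450 + 1.5 * it_kw + 8.0 * temp_c + random.gauss(0, 10)
    rows.append((it_kw, temp_c, utility_kw))

b0, b1, b2 = fit_two_predictors(rows)
print(round(b0, 1), round(b1, 3), round(b2, 2))
```

With the temperature term included, the fitted floor no longer collapses toward zero as it does in the single-predictor regression of Figure 8-8.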
o NASA Atmospheric Science Data Center, http://eosweb.larc.nasa.gov
9 Glossary
9.1 Power provisioning terms
The following terms are used in BCS Data Centre Specialist Group documents to refer to
the power and cooling capacity allocated to IT equipment within the data centre at the time
of provisioning.
Nameplate power
The Nameplate power of an IT device is defined as the rated power of the power supply
(or supplies where more than one is required to operate the device), this is frequently
substantially larger then the maximum power the equipment draws.
Peak power
The peak power is the maximum power that could be drawn at the power supply of the IT
equipment. This is determined by adding up the peak power requirement of each
component within the device plus overheads. For a server this would involve adding up
the power of the processor(s), memory, IO cards, main board, chassis and disks and
then factoring the losses of the voltage converters and power supplies.
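As a rough illustration of this summation, the sketch below totals hypothetical component peak powers for a dual-processor server and then factors in assumed voltage regulator and power supply efficiencies. Every figure is illustrative, not a measured or vendor value.

```python
# Hypothetical component peak powers for a 2-socket server (watts).
components = {
    "processors": 2 * 130,        # two 130 W CPUs
    "memory": 16 * 5,             # sixteen 5 W DIMMs
    "io_cards": 2 * 25,           # two 25 W expansion cards
    "mainboard_chassis": 60,      # board, fans, chassis overhead
    "disks": 4 * 10,              # four 10 W drives
}
dc_load = sum(components.values())

# Factor in conversion losses: assumed efficiencies for the
# voltage regulators and the power supply.
vrm_efficiency = 0.85
psu_efficiency = 0.90
peak_power = dc_load / (vrm_efficiency * psu_efficiency)
print(f"component total {dc_load} W, peak at the plug {peak_power:.0f} W")
```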
Statistical provisioning
The statistical approach is frequently used in large Internet data centres where there are
large numbers of the same class of hardware under the same workloads. This allows the
operator to measure the peak power used by a rack of their standard servers and then
provision to this value15. This presents possible reliability issues and requires some
safety headroom, as each server, rack or group could suffer a surge in workload, such as
under a DoS attack, and draw substantially more power than is provisioned.
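A minimal sketch of this approach, using invented rack power samples and an assumed 10% safety headroom, might look like:

```python
import numpy as np

# Hypothetical metered rack power samples (watts) for a rack of
# identical servers under production workload.
rng = np.random.default_rng(0)
samples = rng.normal(7000, 300, size=10_000)

measured_peak = np.percentile(samples, 99.9)  # observed near-peak draw
headroom = 1.10                               # assumed 10% margin for workload surges
provisioned = measured_peak * headroom
print(f"measured peak {measured_peak:.0f} W, provision {provisioned:.0f} W")
```

The headroom factor is the operator's judgement call: it trades stranded capacity against the risk of tripping breakers under an unanticipated workload surge.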
PUE
This is the power usage effectiveness metric as defined by the Green Grid: the utility
electrical load of the data centre divided by the IT electrical load.
DCiE
This is the data centre infrastructure efficiency metric as defined by the Green Grid: the
IT electrical load divided by the utility electrical load of the data centre.
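The two metrics are reciprocals of one another, as a short sketch (with invented load figures) makes clear:

```python
def pue(utility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: utility load divided by IT load."""
    return utility_kw / it_kw

def dcie(utility_kw: float, it_kw: float) -> float:
    """Data Centre infrastructure Efficiency: IT load divided by utility load."""
    return it_kw / utility_kw

# Example: 1500 kW at the utility meter, 1000 kW at the IT equipment.
print(pue(1500, 1000))   # 1.5
print(dcie(1500, 1000))  # 0.666..., i.e. 1 / PUE
```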
Design DCiE/PUE
This is the DCiE or PUE measured at the IT electrical load (usually peak) that maximises
the metric by minimising the impact of the fixed load overheads.
Achieved DCiE/PUE
This is the DCiE or PUE measured when the facility is operating under the actual
production IT electrical load. This is a variable value and therefore represents a moving
target.
Fixed overhead
This is the BCS DCSG proposed metric to describe the utility electrical load of the data
centre that is present irrespective of IT equipment electrical load.
Proportional overhead
This is the BCS DCSG proposed approximating metric to describe the additional utility
electrical load of the data centre above the fixed overhead that is proportional to the IT
electrical load.
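Taken together, the two overheads give a simple approximation of total utility load, assuming (consistent with the definitions above) that utility load = fixed overhead + IT load + proportional overhead, where the proportional overhead scales with the IT load. The figures in the sketch below are invented; it also shows why the achieved PUE of such a facility improves as the IT load rises:

```python
def utility_load(it_kw: float, fixed_kw: float, proportional: float) -> float:
    # Utility load = fixed overhead + IT load + proportional overhead,
    # where the proportional overhead is a fraction of the IT load.
    return fixed_kw + it_kw * (1 + proportional)

# With an assumed 200 kW fixed overhead and 0.4 proportional overhead,
# the achieved PUE falls towards 1.4 as the IT load rises:
for it in (250, 500, 1000):
    print(it, utility_load(it, 200, 0.4) / it)  # 2.2, 1.8, 1.6
```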
IT electrical load
This is the power drawn at the power supply input of the IT equipment housed within the
data centre.
This is also referred to as the IT equipment power.
10 Acknowledgements
The DCSG would like to thank the following people for their input, review and comment on
this paper during the review phases:
The DCSG committee and membership
Paul Elliott, Future Tech
Ian Bitterlin, Chloride
Bernard Aebischer, ETH
Benjamin Petschke, Stulz
Victor Smith, Dell
11 References
1. Kenneth G Brill, The Invisible Crisis in the Data Center: The Economic Meltdown of Moore's Law
2. Going Green? The Reader Perspective, Dale Vile, Freeform Dynamics
3. Carbon Trust joins up with British Computer Society to cut carbon from data centres, http://www.bcs.org/server.php?show=ConWebDoc.19925
4. Quoi de neuf dans le domaine de l'efficacité énergétique des data centres?, Aebischer, Bernard, Centre for Energy Policy and Economics (CEPE), ETH Zürich, September 10, 2007
5. Rating Systems for Data Centers, Green Grid Technical Forum, http://www.thegreengrid.org/events/technical_forum/Day_1.Track_1._Rating_Systems_for_Data_Centers_P.pdf
6. Enabling the Energy-Efficient Data Center, C Belady, J Pflueger, Green Grid, http://www.dell.com/downloads/global/power/ps1q08-20080199-GreenGrid.pdf
7. The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE, http://www.thegreengrid.org/gg_content/TGG_Data_Center_Power_Efficiency_Metrics_PUE_and_DCiE.pdf
8. Electrical Efficiency Modeling of Data Centers, Rasmussen, Neil, APC, http://www.apcmedia.com/salestools/NRAN-66CK3D_R1_EN.pdf
9. Analysis spreadsheet for data centre efficiency metrics white paper, DCSG members repository, http://dcsg.bcs.org//component/option,com_docman/task,cat_view/gid,22/Itemid,50/
10. Electrical Efficiency Modeling of Data Centers, Rasmussen, Neil, APC, http://www.apcmedia.com/salestools/NRAN-66CK3D_R1_EN.pdf
11. EU Code of Conduct for Data Centres, http://sunbird.jrc.it/energyefficiency/html/standby_initiative_data%20centers.htm
12. Fresh Air Cooling, Paul Elliott, Future Tech, http://dcsg.bcs.org//component/option,com_docman/task,cat_view/gid,17/Itemid,50/
13. Model-Based Approach for Optimizing a Data Center Centralized Cooling System, Beitelmal, Patel, http://www.hpl.hp.com/techreports/2006/HPL-2006-67.pdf
14. State of the Art Energy Efficient Data Centre Air Conditioning, Benjamin Petschke, Stulz, http://www.stulz.de/en/downloads/general-information/
15. Power Provisioning for a Warehouse-sized Computer, Fan, Weber, Barroso, http://research.google.com/archive/power_provisioning.pdf
16. Dynamic Power Management has Significant Values, Intel, http://communities.intel.com/openport/blogs/server/2008/04/11/dynamic-power-management-has-significant-values-a-baidu-case-study