
© 2005, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (www.ashrae.org). Reprinted by permission from ASHRAE Journal, Vol. 47, No. 12, December 2005. This article may not be copied nor distributed in either paper or digital form without ASHRAE's permission.

Data Centers

Roadmap for Datacom Cooling


By Christian L. Belady, Associate Member ASHRAE, and Don Beaty, P.E., Member ASHRAE

Information systems (e.g., Internet, search engines, archival systems) are all integral parts of the datacom (data processing and communication) industry. As a result, the cooling of datacom facilities has the same potential for overload, change, evolution, and premature obsolescence. In some ways, effectively addressing datacom cooling could be characterized as more difficult than the challenge associated with the datacom equipment and systems themselves.

The typical life cycle or refresh rate of datacom hardware and software is two to five years. The typical life cycle of cooling equipment is 10 to 25 years. Therefore, the cooling system must be designed to accommodate at least several major hardware/software upgrades. This represents a major challenge, since forecasting the impact of a single major upgrade event is difficult enough, and forecasting beyond it to the second or third upgrade can seem impossible.

In non-datacom applications, the cooling plant simply would be designed for the current (or Day 1) load with the ability to expand to the future load. In the case of datacom, the continued upward trend in datacom equipment power densities compounds the problem. These loads are at concentrations that have never been cooled before in a datacom facility, so there is no historical knowledge or precedent for how to cool them. In addition, the physical location of the load concentrations may vary with each deployment, making the cooling load a moving target in both magnitude and location.

Providing expansion capacity at the plant level may not be the complete answer. How will the datacom equipment be cooled in the future? Will it be air, liquid, or some combination?1 As a result, the planning of new datacom rooms or even major upgrades has many unknowns and variables. In addition, the industry is becoming increasingly obsessed with performance metrics (especially first cost), so overspending for the Day 1 build (initial build) seldom will be accepted enthusiastically.

Although we have become accustomed to cookbook solutions, this is not a practical approach to such a complex and variable problem. A datacom cooling roadmap to consider is one that returns to fundamental engineering and problem solving. It is process centric.

This article briefly addresses the following areas to characterize the datacom cooling roadmap: the planning process (baseline, Day 1 needs, and future needs), and a framework for a datacom cooling roadmap. Too many variables exist, as well as too many different personal/corporate preferences, policies, and strategies, for a single answer to exist. At the same time, a movement toward more standardization and low-cost flexibility is beneficial.
Planning Process Part 1: Baseline

Any measurement of the merits of a particular process requires the establishment of a strong baseline to measure against. The baseline involves a clear characterization of any relevant existing performance metrics and historical experience. This includes the current and past operational loads and information on existing cooling difficulties at the current facilities.

Another critical component in establishing the baseline is to determine the preferences, policies, strategies, and concerns of each stakeholder, including their value system. Is there an increased focus on reliability or redundancy? Does first cost have a bigger emphasis than quality? What are the details associated with the deployment strategy and the transitional period? This information needs to be compiled, analyzed, and documented as a part of the owner's project requirements (OPR),2 which should be referenced and adhered to throughout the design phases of the project.

About the Authors
Christian L. Belady is a distinguished technologist at Hewlett-Packard, Richardson, Texas. He is one of the founding members of TC 9.9. Don Beaty, P.E., is president of DLB Associates Consulting Engineers, Ocean, N.J. He is chairman of ASHRAE TC 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment.
Planning Process Part 2: Day 1 Needs

The OPR sets the ground rules for the roadmap, and the next step is to translate those rules into a set of design criteria to establish Day 1 needs. One key aspect of that is the classification of the Day 1 cooling load. The environmental specifications for a specific type of facility can be obtained from the classifications established in Thermal Guidelines for Data Processing Environments,3 authored by ASHRAE Technical Committee (TC) 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment.

In the past, the only information available to determine the cooling loads for datacom equipment was the nameplate rating. However, the nameplate rating is determined with a focus on regulatory safety and is typically significantly higher than the actual heat release of that piece of equipment, which can lead to over-design and stranded capacity.

Thermal Guidelines also establishes a forum for datacom equipment manufacturers to provide concise cooling information for their products via a thermal report template. The thermal report provides actual heat release and cooling airflow information for multiple configurations of the same product. It also graphically indicates the airflow pattern through the product. With datacom equipment manufacturers beginning to use this method to publish their heat release numbers, early indications show that in some cases the actual load of a product is about 40% to 60% of the load indicated on its nameplate (and can be much lower for large systems), depending on whether the product is minimally or fully configured. This not only shows how much lower and more accurate the heat release numbers are, but also highlights the variation in cooling load that can occur with changes in product configuration.4

Other methods also can be used to plan the distribution of the Day 1 cooling loads. One such method involves establishing physical zones within the footprint of the raised-floor area that have varying areas and load densities.5 For example, three zones may be created for low, medium, and high density loads. Zoning the area in this manner allows for a cooling distribution system that is tailored to a given zone, as opposed to trying to formulate a design that is infinitely flexible for the entire footprint of the datacom equipment room.
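To make the arithmetic concrete, the sketch below estimates a Day 1 cooling load per zone from thermal-report-style heat release figures and compares the total against a nameplate-based estimate. The zone names, rack counts, per-rack heat release values, and zone areas are illustrative assumptions, not data from Thermal Guidelines or any manufacturer's thermal report.

```python
# Illustrative Day 1 load estimate: thermal-report heat release vs. nameplate.
# All numbers below are hypothetical planning inputs, not published values.

zones = {
    # zone: racks, measured kW/rack (thermal report), nameplate kW/rack, zone area
    "low_density":    {"racks": 40, "kw_measured": 2.0,  "kw_nameplate": 4.0,  "area_m2": 200.0},
    "medium_density": {"racks": 20, "kw_measured": 5.0,  "kw_nameplate": 9.0,  "area_m2": 90.0},
    "high_density":   {"racks": 10, "kw_measured": 12.0, "kw_nameplate": 20.0, "area_m2": 40.0},
}

total_measured = total_nameplate = 0.0
for name, z in zones.items():
    measured_kw = z["racks"] * z["kw_measured"]          # load from thermal-report data
    nameplate_kw = z["racks"] * z["kw_nameplate"]        # load if nameplate ratings were used
    density_w_m2 = measured_kw * 1000.0 / z["area_m2"]   # zone heat density in W/m^2
    total_measured += measured_kw
    total_nameplate += nameplate_kw
    print(f"{name}: {measured_kw:.0f} kW ({density_w_m2:.0f} W/m^2), "
          f"nameplate basis {nameplate_kw:.0f} kW")

print(f"Total Day 1 load: {total_measured:.0f} kW measured vs. "
      f"{total_nameplate:.0f} kW nameplate "
      f"({total_measured / total_nameplate:.0%} of nameplate)")
```

Sizing the plant and the per-zone distribution on the measured column rather than the nameplate column is what avoids the over-design and stranded capacity described above.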
Planning Process Part 3: Future Needs

Establishing the future cooling needs of a datacom facility is the most difficult aspect of the roadmap, since it is the step with the greatest number of unknowns and variables. This also makes it the most reliant on a process-centric solution. Once again, the OPR is referred to in order to determine the owner's emphasis and preferences with regard to future deployment strategies, flexibility, scalability, and tolerance for operational disruption. The information on these topics extracted from the OPR may not be succinctly defined, but it should provide the underlying principles upon which to develop the design.

The key to establishing the future cooling loads is an understanding of the trends in applications and equipment, and being able to make predictions with some level of confidence. The power trends established by the Thermal Management Consortium and published by the Uptime Institute6 have recently been updated and expanded upon by ASHRAE TC 9.9 in its Datacom Equipment Power Trends and Cooling Applications7 to include a greater delineation of datacom equipment types and projected loads to 2014. How the trends are applied can vary from one application to another. The overall percentages of the types of equipment are required (e.g., storage servers vs. compute servers vs. communication equipment, etc.), as well as an understanding of whether changes in future applications may influence these percentages for future deployments.
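As a sketch of how such a projection might be assembled, the snippet below combines an assumed equipment mix with per-rack trend values and a derating factor to produce a planning estimate of future per-rack and total loads. The trend values, mix percentages, rack count, and derating factor are all placeholders chosen for illustration; real inputs would come from the published trend charts7 and the owner's deployment plan.

```python
# Illustrative future-load projection from trend-chart values and equipment mix.
# Every number here is a placeholder assumption, not a value from reference 7.

trend_kw_per_rack = {          # assumed trend-chart maxima for the target year
    "compute_servers": 20.0,
    "storage_servers": 10.0,
    "communication": 6.0,
}
equipment_mix = {              # assumed share of racks by equipment type
    "compute_servers": 0.5,
    "storage_servers": 0.3,
    "communication": 0.2,
}
derating = 0.8                 # fraction of the trend maximum actually planned for
planned_racks = 200            # assumed deployment size

# Weighted-average per-rack load, then total facility load.
avg_kw_per_rack = derating * sum(
    trend_kw_per_rack[t] * share for t, share in equipment_mix.items()
)
total_kw = avg_kw_per_rack * planned_racks
print(f"Projected average load: {avg_kw_per_rack:.1f} kW per rack")
print(f"Projected facility load: {total_kw:.0f} kW for {planned_racks} racks")
```

Whether to derate the trend maxima at all, and by how much, is exactly the decision discussed next.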

The trend charts themselves are based on the maximum predicted loads for a given type of datacom equipment, and so another decision that must be made is whether the values in the trend charts are to be used as-is or whether some percentage reduction of those values should be considered, to allow for the possibility of using equipment that does not represent the extreme cutting edge in technology.

A deployment strategy can provide an understanding of the timing of the deployments, and the trend charts can provide the impact of each deployment from a cooling load standpoint. However, the incorporation of the actual changes to the cooling system can be planned for as well. The questions that need to be answered are:

1. What percentage of the anticipated future cooling capacity do you install as a part of the initial construction? This option has the greatest first cost impact but also is the least disruptive and time-consuming future deployment option.

2. What percentage of the anticipated future cooling capacity should I provision for? This option requires some initial installation and some spatial provisioning for the acceptance of future equipment. With careful planning, this can provide a non-disruptive option that does not have a great first cost impact, but the deployment is not as immediate as the previous option since some installation is required.

3. What percentage of the anticipated future cooling capacity do I not want to provision for, accepting that it will be a major facility upgrade? This option is the most disruptive but has little to no impact on first cost. The selection of this option (or rather the value of the percentage assigned to this option) has to be weighed against the risk of meeting or exceeding the future cooling capacity earlier than expected. (A small arithmetic sketch of this three-way capacity split follows below.)
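As a minimal sketch of how these three percentages interact, the snippet below splits a projected future cooling load into installed-on-Day-1, provisioned-for, and deferred portions. The projected load and the split percentages are assumptions chosen only to illustrate the bookkeeping; the actual values are owner decisions recorded in the OPR.

```python
# Illustrative three-way split of anticipated future cooling capacity.
# The projected load and percentages are assumed planning inputs.

projected_future_kw = 2400.0   # anticipated ultimate cooling load (assumed)

split = {
    "installed_day_1": 0.40,   # built as part of initial construction
    "provisioned_for": 0.35,   # space/routing roughed in, equipment deferred
    "not_provisioned": 0.25,   # accepted as a future major facility upgrade
}
assert abs(sum(split.values()) - 1.0) < 1e-9  # the three answers must cover 100%

for option, fraction in split.items():
    print(f"{option}: {fraction:.0%} -> {projected_future_kw * fraction:.0f} kW")
```

Raising the first percentage trades higher first cost for less future disruption; raising the last trades lower first cost for the risk of a disruptive upgrade arriving earlier than expected.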
Framework for a Datacom Cooling Roadmap

Currently, there is no indication from datacom manufacturers or end users on where the industry is heading and when it will arrive there. In other words, an industry roadmap for the cooling of datacom equipment in the data center is needed. This was one of the motivating reasons for the development of TC 9.9,8 and, as a result, efforts are now underway at TC 9.9 to develop this industry roadmap.

The goal is to provide a timeline for datacom cooling paradigm shifts. These shifts are anything that could significantly impact datacom equipment or data center design for continued operational success. Some examples of potential industry drivers for these paradigm shifts could be the following:

New data center standards or guidelines. Standards and guidelines can define operating temperature and humidity conditions in the data center environment. In addition, these guidelines could drive how air moves in the data center.

New low power technologies. Breakthrough semiconductor technologies could lower power. Unfortunately, at this time, no new technologies are on the horizon that could be commercialized in the next 10 years.

Rapid rise of computing performance. The increase in performance capabilities of datacom equipment could significantly outstrip the demand for performance. Today, performance per watt is increasing significantly, but end users' computation demand is increasing even more rapidly; these factors consume all of the performance gains and then some, which drives the need for more or larger data centers. This could change if the reverse becomes true and performance grows faster than demand. The result would be fewer computers in the data center, but in the short term this is not a likely scenario.

New high density air cooling. Raised-floor cooling in data centers is already reaching its limits, and at some data centers the density it can support will no longer be adequate. Instead of moving to liquid cooling at that point, there may be significant opportunity in using new cooling technologies in the data center to extend the limits of air cooling.

Liquid cooling. Clearly, this is one of the most discussed paradigm shifts in the industry. Most everyone knows that if trends continue, liquid cooling is an option that needs to be considered.

All of these are examples of paradigm shifts that may or may not occur, but as an industry we must try to develop a roadmap with items similar to these. If we succeed, the industry will align and navigate through an easy and efficient transition of relatively complex paradigm shifts.

Not adopting a roadmap would create guesswork for all involved. Manufacturers would guess when customers would be ready for industry shifts, and data centers would be built with suboptimum solutions based on guessing when shifts would occur. As an example, if liquid in the data center becomes a real consideration, when should industry players respond? Guessing incorrectly by manufacturers and data center owners would add significant cost and, in fact, in some cases, making the wrong decision could be devastating to the business.
[Figure 1: Example of datacom cooling roadmap. The chart plots heat load in kW per rack (0 to 40 kW) against year (1990 to 2020) for servers and for blades/1U servers, overlaid with the Uptime power trend, the ASHRAE power trend, the ASHRAE Thermal Guidelines transition and compliance periods, and generic Industry Drivers A through D.]
So, as an industry, we have the choice of doing nothing and reacting to change, or proactively driving and influencing when change occurs. This is where TC 9.9 can have significant impact. To some extent, TC 9.9's two publications3,7 have laid the foundations for this roadmap. The first document suggests the alignment of the industry on common datacom environments and equipment cooling techniques by 2008, while the second document projects the datacom power density growth over the next decade. The committee is focusing on how to continue defining the roadmap even further.

An example of such a roadmap is shown in Figure 1. This figure shows a graph of server power data in terms of heat load per rack,7 where a typical rack occupies about 7 ft² (0.7 m²). Above the graph is an example roadmap showing various industry drivers as a function of the year in which they take effect. Several drivers already have been accepted by the industry. The first was the power trend published by the Uptime Institute6 in 2000 (denoted by "Uptime power trend" in the figure), which was developed largely by the same companies that currently make up TC 9.9 (but prior to the TC's existence). Now in its second year, TC 9.9 also has published an updated power curve (denoted by "ASHRAE power trend" in the figure) that will be valid from 2005 to 2010.

As described earlier, one of the most important publications of TC 9.9, Thermal Guidelines,3 has provided one of the foundation elements for the datacom cooling roadmap. Published in early 2004, alignment with it is expected after 2008; however, companies are encouraged to adopt it sooner, as shown by the transition period in Figure 1. In addition, Figure 1 shows generic industry drivers A, B, C, and D for illustrative purposes only. In each case, the year for targeted alignment will be projected, as well as a transitional period for adoption.

The role of TC 9.9 will be to define some of these generic industry drivers moving forward. The committee will need to look at all of the possible industry drivers in the datacom space, some of which have been outlined in this article, and develop consensus across the complete data center value chain, which includes infrastructure suppliers (power and cooling), datacom equipment manufacturers, consulting engineers, and data center owners/managers. With this in mind, TC 9.9 has been working to broaden its membership to include representatives from all of these areas.
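One way to keep such a roadmap unambiguous is to record each driver with an explicit transition and compliance timeline, in the spirit of Figure 1. The sketch below models that bookkeeping; apart from the 2004/2008 dates for the thermal guidelines cited above, the driver names and years are invented placeholders, not positions taken by TC 9.9.

```python
# Illustrative roadmap record-keeping in the spirit of Figure 1.
# Hypothetical drivers and years are marked as such below.
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    transition_start: int   # year early adopters can begin aligning
    alignment_year: int     # targeted year for industry-wide alignment

    def status(self, year: int) -> str:
        if year < self.transition_start:
            return "not yet applicable"
        if year < self.alignment_year:
            return "transition period"
        return "compliance period"

roadmap = [
    Driver("Thermal environment guidelines", 2004, 2008),
    Driver("Industry Driver A (hypothetical)", 2009, 2012),
    Driver("Industry Driver B (hypothetical)", 2012, 2016),
]

for d in roadmap:
    print(f"{d.name}: {d.status(2007)} in 2007")
```

A shared timeline in this form lets manufacturers and owners plan against the same dates instead of guessing independently.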
Summary

With the dynamic nature of the datacom business, developing a cooling roadmap is imperative for the industry. Without such an industry-wide roadmap, the cooling industry will remain fragmented and inefficient. TC 9.9 has initiated this cooling roadmap through its planning and publishing of Thermal Guidelines for Data Processing Environments3 and Datacom Equipment Power Trends and Cooling Applications.7 The ongoing activities of TC 9.9 continue toward the goal of improved industry support and a cooling roadmap.
References
1. Beaty, D. and R. Schmidt. 2004. "Back to the future: liquid cooling data center considerations." ASHRAE Journal 46(12).
2. Stum, K. 2002. "Design intent and basis of design: clarification of terms, structure and use." ASHRAE Transactions 108(2).
3. ASHRAE TC 9.9. 2004. Thermal Guidelines for Data Processing Environments. ASHRAE Special Publications.
4. Beaty, D. and R. Schmidt. 2005. "Data center load characterization: connected vs. demand." Engineered Systems (1).
5. Beaty, D., N. Chauhan and D. Dyer. 2005. "High density cooling of data centers and telecom facilities, Part 1." ASHRAE Transactions 111(1).
6. The Uptime Institute. 2000. Heat Density Trends in Data Processing, Computer Systems, and Telecommunications Equipment. Uptime Institute White Paper.
7. ASHRAE TC 9.9. 2005. Datacom Equipment Power Trends and Cooling Applications. ASHRAE Special Publications.
8. Schmidt, R., et al. 2004. "Evolution of data center environmental guidelines." ASHRAE Transactions 110(1).
