Fundamentals of Availability
Transcript
Slide 1
Welcome to Data Center University™ course on Fundamentals of Availability.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 4: Introduction
In our rapidly changing business world, highly available systems and processes are of critical importance
and are the foundation upon which successful businesses rely. So much so, that according to the National
Archives and Records Administration in Washington, D.C., 93% of businesses that have lost availability in
their data center for 10 days or more have filed for bankruptcy within one year. The cost of one episode of
downtime can cripple an organization. Take, for example, an e-business: in the event of downtime, not only
could it lose thousands or even millions of dollars in revenue, but its top competitor is only a mouse-click
away. Downtime therefore translates not only into lost revenue but also into lost customer loyalty. The
challenge of maintaining a highly available network is no longer just the responsibility of the IT department;
it extends to management and department heads, as well as the boards that govern company policy. For
this reason, having a sound understanding of the factors that lead to high availability, the threats to
availability, and the ways to measure availability is imperative regardless of your business sector.
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
Physical Infrastructure is the foundation upon which Information Technology (IT) and telecommunication
networks reside.
Physical Infrastructure consists of racks, power, cooling, fire prevention/security, management, and
services.
Regardless of the line of business, these three objectives (revenue, cost, and assets) ultimately lead to
improved earnings and cash flow. Investments in Physical Infrastructure are made because they both
directly and indirectly impact these
three business objectives. Managers purchase items such as generators, air conditioners, physical security
systems, and Uninterruptible Power Supplies to serve as “insurance policies.” For any network or data
center, there are risks of downtime from power and thermal problems, and investing in Physical
Infrastructure mitigates these and other risks. So how does this impact the three core business objectives
above (revenue, cost, and assets)? When systems are down, revenue streams are slowed or stopped,
business costs and expenses are incurred, and assets are underutilized or underproductive. Therefore, the
more efficient the strategy is in reducing downtime from any cause, the more value it has to the business in
meeting all three objectives.
While these arguments still hold true, today’s rapidly changing IT environments are dictating an additional
criterion for assessing Physical Infrastructure business value: agility. Business plans must be agile to deal
with changing market conditions, opportunities, and environmental factors. Investments that lock resources
limit the ability to respond in a flexible manner. And when this flexibility or agility is not present, lost
opportunity is the predictable result.
Reliability is the ability of a system or component to perform its required functions under stated conditions
for a specified period of time.
Availability, on the other hand, is the degree to which a system or component is operational and accessible
when required for use. It can be viewed as the likelihood that the system or component is in a state to
perform its required function under given conditions at a given instant in time. Availability is determined by a
system’s reliability, as well as its recovery time when a failure does occur. When systems have long
continuous operating times, failures are inevitable. Availability is often looked at because, when a failure
does occur, the critical variable now becomes how quickly the system can be recovered. In the data center,
having a reliable system design is the most critical variable, but when a failure occurs, the most important
consideration must be getting the IT equipment and business processes up and running as fast as possible
to keep downtime to a minimum.
According to the IEC (International Electro-technical Commission) there are two basic definitions of a failure:
1. The termination of the ability of the product as a whole to perform its required function.
2. The termination of the ability of any individual component to perform its required function but not
the termination of the ability of the product as a whole to perform.
MTTR, Mean Time to Recover (or Repair), is the expected time to recover a system from a failure. This may
include the time it takes to diagnose the problem, the time it takes to get a repair technician onsite, and the
time it takes to physically repair the system. Similar to MTBF, MTTR is represented in units of hours. MTTR
impacts availability and not reliability. The longer the MTTR, the worse off a system is. Simply put, if it takes
longer to recover a system from a failure, the system is going to have a lower availability. As the MTBF goes
up, availability goes up. As the MTTR goes up, availability goes down.
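This relationship can be sketched with the standard steady-state availability formula, A = MTBF / (MTBF + MTTR). The short Python sketch below is purely illustrative; the MTBF and MTTR figures in it are hypothetical and are not drawn from this course.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time a system is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical figures for illustration only.
base = availability(50_000, 8)        # baseline system
faster_fix = availability(50_000, 4)  # same MTBF, shorter MTTR -> higher availability
more_reliable = availability(100_000, 8)  # longer MTBF, same MTTR -> higher availability

print(base)           # ~0.99984
print(faster_fix)     # ~0.99992
print(more_reliable)  # ~0.99992
```

The sketch confirms the narration: availability rises as MTBF goes up or as MTTR goes down.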
Let’s take for example two data centers that are both considered 99.999% available. In one year, Data
Center A lost power once, but it lasted for a full 5 minutes. Data Center B lost power 10 times, but for only
30 seconds each time. Both Data Centers were without power for a total of 5 minutes each. The missing
detail is the recovery time. Anytime systems lose power, there is a recovery time in which servers must be
rebooted, data must be recovered, and corrupted systems must be repaired. The Mean Time to Recover
process could take minutes, hours, days, or even weeks. Now, if you consider again the two data centers
that have experienced downtime, you will see that Data Center B, with its 10 power outages, will actually
accumulate a much longer duration of downtime than Data Center A, which had only one occurrence,
because every outage triggers its own recovery period. It is because of this dynamic that reliability is
equally important to this discussion of availability. The reliability of a data center speaks to the frequency of
downtime in a given time frame; there is an inverse relationship in that as operating time increases,
reliability decreases. Availability, however, is simply the percentage of uptime over a given duration.
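A rough back-of-the-envelope sketch makes the comparison concrete. The 90-minute recovery period per outage assumed below is a hypothetical figure chosen for illustration; it is not given in the course.

```python
# Both data centers lose power for 5 minutes per year in total,
# but every outage triggers a full recovery (reboots, data checks, repairs).
RECOVERY_MIN_PER_OUTAGE = 90  # assumed: 1.5 hours to restore service per outage

def total_downtime_minutes(outages: int, outage_minutes_each: float) -> float:
    """Total annual downtime: outage time plus one recovery period per outage."""
    return outages * (outage_minutes_each + RECOVERY_MIN_PER_OUTAGE)

dc_a = total_downtime_minutes(1, 5.0)    # one 5-minute outage
dc_b = total_downtime_minutes(10, 0.5)   # ten 30-second outages
print(dc_a)  # 95.0 minutes
print(dc_b)  # 905.0 minutes -- far worse, despite identical time without power
```

Under these assumptions, the less reliable data center suffers nearly ten times the total downtime, even though both were without power for the same five minutes.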
Slide 13: Factors that Affect Availability and Reliability
It should be obvious that there are numerous factors that affect data center availability and reliability. Some
of these include AC power conditions, lack of adequate cooling in the data center, equipment failure, natural
and man-made disasters, and human error.
center layout, a hot aisle/cold aisle configuration is used. Hot spots can also be alleviated by the use of
properly sized cooling systems, and supplemental spot coolers and air distribution units.
groups. With a variety of vendors, contractors and technicians freely accessing the IT equipment, errors are
inevitable.
Slide 25: Cost of Downtime
Still others may lose at a lower rate for a short outage (since revenue is not lost but simply delayed), but as
the duration lengthens, there is an increased likelihood that the revenue will never be recovered.
Regarding customer satisfaction, a short duration may often be acceptable, but as the duration increases,
more customers will become increasingly upset. An example of this might be a car dealership, where
customers are willing to delay a transaction for a day. With significant outages however, public knowledge
often results in damaged brand perception, and inquiries into company operations. All of these activities
result in a downtime cost that begins to accelerate quickly as the duration becomes longer.
Slide 26: Cost of Downtime
Costs associated with downtime can be classified as direct and indirect. Direct costs are easily identified
and measured in terms of hard dollars. Examples include:
1. Wages and costs of employees that are idled due to the unavailability of the network. Although
some employees will be idle, their salaries and wages continue to be paid. Other employees may
still do some work, but their output will likely be diminished.
2. Lost revenues are the most obvious cost of downtime because if you cannot process customers,
you cannot conduct business. Electronic commerce magnifies the problem, as eCommerce sales
are entirely dependent on system availability.
3. Wages and cost increases due to induced overtime or time spent checking and fixing systems. The
same employees that were idled by the system failure are likely the ones who will
go back to work and recover the system via data entry. They not only have to do their ‘day job’ of
processing current data, but they must also re-enter any data that was lost due to the system crash,
or enter new data that was handwritten during the system outage. This means additional hours of
work, most often on an overtime basis.
4. Depending on the nature of the affected systems, the legal costs associated with downtime can be
significant. For example, if downtime problems result in a significant drop in share price,
shareholders may initiate a class-action suit if they believe that management and the board were
negligent in protecting vital assets. In another example, if two companies form a business
partnership in which one company’s ability to conduct business is dependent on the availability of
the other company’s systems, then, depending on the legal structure of the partnership, the first
company may be liable to the second for profits lost during any significant downtime event.
Indirect costs are not easily measured, but impact the business just the same. In 2000, Gartner Group
estimated that 80% of all companies calculating downtime were including indirect costs in their calculations
for the first time.
Examples include: reduced customer satisfaction; lost opportunity of customers that may have gone to
direct competitors during the downtime event; damaged brand perception; and negative public relations.
For example, Energy and Telecommunications organizations may experience lost revenues on the order of
2 to 3 million dollars an hour. Manufacturing, Financial Institutions, Information Technology, Insurance,
Retail and Pharmaceuticals all stand to lose over 1 million dollars an hour.
Slide 28: Calculating Cost of Downtime
There are many ways to calculate cost of downtime for an organization. For example, one way to estimate
the revenue lost due to a downtime event is to look at normal hourly sales and then multiply that figure by
the number of hours of downtime.
Remember, however, that this is only one component of a larger equation and, by itself, seriously
underestimates the true loss. Another example is loss of productivity.
The most common way to calculate the cost of lost productivity is to first take an average of the hourly
salary, benefits and overhead costs for the affected group. Then, multiply that figure by the number of hours
of downtime.
Because companies are in business to earn profits, the value employees contribute is usually greater than
the cost of employing them.
Therefore, this method provides only a very conservative estimate of the labor cost of downtime.
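The two estimation methods just described can be combined in a short sketch. All dollar figures and head counts below are made-up inputs for illustration; as the narration cautions, the result is only a conservative floor on the true cost.

```python
def lost_revenue(hourly_sales: float, downtime_hours: float) -> float:
    """Revenue component: normal hourly sales times hours of downtime."""
    return hourly_sales * downtime_hours

def lost_productivity(avg_hourly_cost: float, headcount: int,
                      downtime_hours: float) -> float:
    """Labor component: average hourly salary + benefits + overhead,
    times affected staff, times hours of downtime."""
    return avg_hourly_cost * headcount * downtime_hours

# Hypothetical inputs: $40,000/hour in sales, 120 affected employees at $55/hour.
outage_hours = 3
cost = lost_revenue(40_000, outage_hours) + lost_productivity(55, 120, outage_hours)
print(f"${cost:,.0f}")  # $139,800 -- a conservative estimate, per the caveats above
```

Indirect costs such as damaged brand perception would come on top of this figure and are far harder to quantify.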
Slide 29: Summary
• To stay competitive in today’s global marketplace, businesses must strive to achieve high levels of
availability and reliability, with 99.999% availability being the ideal operating condition for most
businesses.
• Power outages, inadequate cooling, natural and artificial disasters, and human errors pose a
significant barrier to high availability.
• The direct and indirect costs of downtime in many business sectors can be exorbitant, and are often
enough to bankrupt an organization.
• Therefore it is critical for businesses today to calculate their level of availability in order to reduce
risks, and increase overall reliability and availability.
Examining Fire Protection Methods in the Data Center
Transcript
Slide 1
Welcome to the Data Center University™ course on Examining Fire Protection Methods in the Data Center.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 4: Introduction
Throughout history, fire has systematically wreaked havoc on industry. Today’s data centers and network
rooms are under enormous pressure to maintain seamless operations. Some companies risk losing millions
of dollars with one data center catastrophe.
Slide 5: Introduction
In fact, industry studies tell us that 43% of businesses that closed due to fire never reopen and 29% of those
that do reopen fail within 3 years. With these statistics in mind, it is imperative that all businesses prepare
themselves for unforeseen disasters. The good news is that the most effective method of fire protection is fire
prevention. At the completion of this course you will be one step closer to understanding industry
safeguarding methods that are used to protect a data center’s hottest commodity: information.
Slide 6: Introduction
This course will discuss the prevention, theory, detection, communication and suppression of fire specific to
data centers.
The NFPA is responsible for creating fire protection standards, one of them being NFPA 75. NFPA 75 is the
standard for the protection of computer and data processing equipment. One notable addition to NFPA 75,
made in 1999, allows data centers to continue to power electronic equipment upon activation of a
Gaseous Agent Total Flooding System, which we will discuss later in detail. This exception was made for
data centers that meet the following risk considerations:
• Economic loss that could result from:
o Loss of function or loss of records
o Loss of equipment value
o Loss of life
• The risk of fire threat to the installation, to occupants, or to exposed property within that installation
It’s important to note that NFPA continually updates its standards to accommodate the ever-changing data
center environment. Please note that although NFPA sets the worldwide standards for fire protection, in most
cases the Authority Having Jurisdiction (AHJ) has final say in what can or cannot be used for fire protection
in a facility. Now that we have identified the standards and guidelines of fire protection for a data center,
let’s get started with some facts about fire protection.
Slide 8: Prevention
Fire prevention provides more protection than any type of fire detection device or fire suppression
equipment available. In general, if the data center cannot sustain a fire, there will be no threat of fire
damage to the facility. To promote prevention within a data center environment it is important to eliminate as
many fire-causing factors as possible. A few examples to help achieve this are:
• When building a new data center, ensure that it is built far from any other buildings that may pose a
fire threat to the data center
• Enforce a strict no smoking policy in IT and control rooms
• The data center should be void of any trash receptacles
• All office furniture in the data center must be constructed of metal. (Chairs may have seat
cushions.)
• The use of acoustical materials such as foam or fabric or any material used to absorb sound is not
recommended in a data center
Even if a data center is considered fireproof, it is important to safeguard against downtime in the event that
a fire does occur. Fire protection now becomes the priority.
center. Prior to the selection of a detection, communication or suppression system, a design engineer must
assess the potential hazards and issues associated with the given data center.
Class B fires are fires involving flammable liquids and gases such as oil, paint, lacquer, petroleum and
gasoline. Class C fires involve live electrical equipment. Class C fires are usually Class A or Class B fires
that have electricity present. Class D fires involve combustible metals or combustible metal alloys such as
magnesium, sodium and potassium. The last class is Class K fires. These fires involve cooking appliances
that use cooking agents such as vegetable or animal oils and fats. Generally, Class A, B and C fires are the
most common classes of fire that one may encounter in a data center. This chart represents all of the
different classes of fire that can be successfully extinguished with a basic fire extinguisher. Later in
the course, we will discuss several types of extinguishing agents used in data centers.
As these stages progress, the risk of property damage and the risk to life increase drastically. All of these
categories play an important role in fire protection, specifically in data centers. By studying the classes of fire
and the stages of combustion it is easy to determine what type of fire protection system will best suit the
needs of a data center.
For the purposes of protecting a data center, smoke detectors are the most effective. Heat detectors and
flame detectors are not recommended for use in data centers, as they do not provide detection in the incipient
stages of a fire and therefore do not provide early warning for the protection of high-value assets. Smoke
detectors are far more effective forms of protection in data centers simply because they are able to detect a
fire at the incipient stage. For this reason we will be focusing on the attributes and impact of smoke
detectors.
Slide 15: Smoke Detectors
The two types of smoke detectors that are used effectively in data centers are:
1. Intelligent spot type detectors and
2. Air sampling smoke detectors
a network of pipes attached to a single detector, which continually draws air in and samples it. The pipes are
typically made of PVC but can also be CPVC, EMT or copper. Depending on the space being protected and
the configuration of multiple sensors, these systems can cover an area of 2,500 to 80,000 square feet or
232 to 7,432 square meters. This system also utilizes a laser beam, much more powerful than the one
contained in a common photoelectric detector, to detect by-products of combustion. As the particles pass
through the detector, the laser beam is able to distinguish them as dust or byproducts of combustion.
suppression have been exhausted, the authorities that arrive on site will have the option to utilize this
feature. The EPO is intended to power down equipment or an entire installation in an emergency to protect
personnel and equipment.
EPO is typically activated either by firefighting personnel or by equipment operators. When used by
firefighters, it ensures that equipment is de-energized during firefighting so that firefighters are not subjected to
shock hazards. The secondary purpose is to facilitate firefighting by eliminating electricity as a source of
energy feeding combustion. EPO may also be activated in case of a flood, electrocution, or other
emergency.
There is a high cost associated with abruptly shutting down a data center. Unfortunately, EPO tripping is
often the result of human error. Much debate has ensued over the use of EPO and may one day lead to the
elimination of EPO in data centers.
a gas; therefore, it leaves no residue upon discharge. Simply put, it extinguishes fires by removing heat and
chemically preventing combustion.
Total Flooding Fire Extinguishing Systems consist of a series of cylinders or high-pressure tanks
filled with an extinguishing or gaseous agent. A gaseous agent is a gaseous chemical compound that
extinguishes the fire by either removing heat or oxygen or both. Given a closed, well-sealed room, gaseous
agents are very effective at extinguishing a fire while leaving no residue.
When installing such a system, the total volume of the room and how much equipment is being protected are
taken into consideration. The number of tanks or cylinders to be installed is dependent upon these
factors. It is important to note that the Standard that guides Total Flooding Suppression Systems is NFPA
2001. The next slide features a live demonstration of a Total Flooding Fire Extinguishing system in action.
Now that we have discussed Total Flooding Fire Extinguishing System, let’s start reviewing the agents that
such systems deploy.
components within IT equipment and corrosive agents may “eat away” at electronic components within IT
equipment. The gaseous agents used in today’s data centers are non-conductive and non-corrosive. One
effective agent that is both non-conductive and non-corrosive, and was widely used in data centers, is Halon.
Unfortunately, it was discovered that Halon is detrimental to the ozone layer, and as of 1994, the production
of Halon is no longer permitted. This has led to the development of safer and cleaner gaseous agents.
Let’s review some of the more popular gaseous agents for data centers.
Another suppression alternative for data centers is fluorine-based compounds. The fluorine-based compound
HFC-227ea is known under two commercial brands: FM-200 and FE-227. HFC-227ea has a zero ozone
depletion potential (ODP) and an acceptably low global warming potential. It is also odorless and colorless.
Other methods of fire suppression often found in data centers are:
1. Water Sprinkler Systems and
2. Water Mist Suppression Systems
Of the two options, Water Sprinklers are often present in many facilities due to national and/or local fire
codes.
Let‘s review a few of the key elements of Water Sprinklers and Water Mist Suppression Systems.
fine mist is used, less water is needed; therefore, the water mist system needs minimal storage space.
Water mist systems are gaining popularity due to their effectiveness in industrial environments. Because of
this, we may see an increase in the utilization of such systems in data centers.
o Water mist suppression systems
Fundamental Cabling Strategies for Data Centers
Transcript
Slide 1
Welcome to Fundamental Cabling Strategies for Data Centers.
Slide 2
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click ATTACHMENTS to download important supplemental information for this
course. Click the Notes tab to read a transcript of the narration.
Slide 3
At the completion of this course, you will be able to:
• Discuss the evolution of cabling
• Classify different types of common data center cables
• Describe cabling installation practices
• Identify the strategies for selecting cabling topologies
• Utilize cable management techniques
• Recognize the challenges associated with cabling in the data center
Slide 4
From a cost perspective, building and operating a data center represents a significant piece of any
Information Technology (IT) budget. The key to the success of any data center is the proper design and
implementation of core critical infrastructure components. Cabling infrastructure, in particular, is an
important area to consider when designing and managing any data center.
The cabling infrastructure encompasses all data cables that are part of the data center, as well as all of the
power cables necessary to ensure power to all of the loads. It is important to note that cable trays and cable
management devices are critical to the support of IT infrastructure as they help to reduce the likelihood of
downtime due to human error and overheating.
Slide 5
This course will address the basics of cabling infrastructure and will discuss cabling installation practices,
cable management strategies and cable maintenance practices. We will take an in-depth look at both data
cabling and power cabling. Let’s begin with a look at the evolution of data center cabling.
Slide 6
Ethernet protocol has been a data communications standard for many years. Along with Ethernet, several
traditional data cabling practices continue to shape how data cables are deployed.
• High speed data cabling over copper is a cabling medium of choice
• Cable fed into patch panels and wall plates is common
• The RJ45 is the data cable connector of choice
The functionality within the data cables and associated hardware, however, has undergone dramatic change.
Increased data speeds have forced many physical changes. Every time a new, faster standard is ratified by
standardization bodies, the cable and supporting hardware have been redesigned to support it. New test
tools and procedures also follow each new change in speed. These changes have primarily been required
by the newer, faster versions of Ethernet, which are driven by customers’ needs for more speed and
bandwidth. When discussing this, it is important to note the uses and differences of both fiber-optic cable,
and traditional copper cable. Let’s compare these two.
Slide 7
Copper cabling has been used for decades in office buildings, data centers and other installations to provide
connectivity. Copper is a reliable medium for transmitting information over shorter distances; but its
performance is only guaranteed up to 109.4 yards (100 meters) between devices. (This would include
structured cabling and patch cords on either end.)
Copper cabling that is used for data network connectivity contains four pairs of wires, which are twisted
along the length of the cable. The twist is crucial to the correct operation of the cable. If the wires unravel,
the cable becomes more susceptible to interference.
Copper cabling, patch cords, and connectors are classified based upon their performance characteristics
and for which applications they are typically used. These ratings, called categories, are spelled out in the
TIA/EIA-568 Commercial Building Telecommunications Wiring Standard.
Slide 8
Fiber-optic cable is another common medium for providing connectivity. Fiber cable consists of five
elements. The center portion of the cable, known as the core, is a hair thin strand of glass capable of
carrying light. This core is surrounded by a thin layer of slightly purer glass, called cladding, that contains
and refracts that light. Core and cladding glass are covered in a coating of plastic to protect them from dust
or scratches. Strengthening fibers are then added to protect the core during installation. Finally, all of these
materials are wrapped in plastic or other protective substance that serves as the cable’s jacket.
A light source, blinking billions of times per second, is used to transmit data along a fiber cable. Fiber-optic
components work by turning electronic signals into light signals and vice versa. Light travels down the
interior of the glass, refracting off of the cladding and continuing onward until it arrives at the other end of
the cable and is seen by receiving equipment.
When light passes from one transparent medium to another, like from air to water, or in this case, from the
glass core to the cladding material, the light bends. A fiber cable’s cladding consists of a different material
from the core — in technical terms, it has a different refraction index — that bends the light back toward the
core. This phenomenon, known as total internal reflection, keeps the light moving along a fiber-optic cable
for great distances, even if that cable is curved. Without the cladding, light would leak out.
Fiber cabling can handle connections over a much greater distance than copper cabling, 50 miles (80.5
kilometers) or more in some configurations. Because light is used to transmit the signal, the upper limit of
how far a signal can travel along a fiber cable is related not only to the properties of the cable but also to the
capabilities and relative location of the transmitters.
Slide 9
Besides distance, fiber cabling has several other advantages over copper:
• Fiber provides faster connection speeds
• Fiber is not prone to electromagnetic interference or vibration
• Fiber is thinner and lighter, so more cabling can fit into the same size bundle or into limited spaces
• Signal loss over distance is less along optical fiber than copper wire
Two varieties of fiber cable are available in the marketplace: multimode fiber and single mode fiber.
Multimode is commonly used to provide connectivity over moderate distances, such as those in most data
center environments, or among rooms within a single building. Single mode fiber is used for the longest
distances, such as among buildings on a large campus, or between sites.
Copper is generally the less expensive cabling solution over shorter distances (i.e. the length of data center
server rows), while fiber is less expensive for longer distances (i.e. connections among buildings on a
campus).
Slide 10
In the case of data center power cabling, however, historical changes have taken a different route.
In traditional data centers, designers and engineers were not too concerned with single points of failure.
Scheduled downtime was an accepted practice. Systems were periodically taken down to perform
maintenance, and to make changes. Data center operators would also perform infrared scans on power
cable connections prior to the shutdowns to determine problem areas. They would then locate the hot spots
that could indicate possible risk of short circuits and address them.
Traditional data centers, very often, had large transformers that would feed large uninterruptible power
supplies (UPSs) and distribution switchboards. From there, the cables would go to distribution panels that
would often be located on the columns or walls of the data center. Large UPSs, transformers, and
distribution switchgear were all located in the back room.
The incoming power was then stepped down to the correct voltage and distributed to the panels mounted in
the columns. Cables connected to loads, like mainframe computers, would be directly hardwired to the
hardware. In smaller server environments, the power cables would be routed to power strips underneath the
raised floor. The individual pieces of equipment would then plug into those power strips, using sleeve and
pin connectors, to keep the cords from coming apart.
Slide 11
Downtime is not as accepted as it once was in the data center. In many instances, it is no longer possible to
shut down equipment to perform maintenance. A fundamentally different philosophical approach is at work.
Instead of the large transformers of yesterday, smaller ones, called power distribution units (PDUs), are now
the norm. These PDUs have moved out of the back room, onto the raised floor, and in some cases, are
integrated into the racks. These PDUs feed the critical equipment. This change was the first step in a new
way of thinking, a trend that involved getting away from the large transformer and switchboard panel.
Modern data centers also have dual cord environments. Dual cord helps to minimize a single point of failure
scenario. One of the benefits of the dual cord method is that data center operators can perform
maintenance work on source A, while source B maintains the load. The server never has to be taken offline
while upstream maintenance is being performed.
This trend began approximately 10 years ago and it was clearly driven by the user. It became crucial for
data center managers to maintain operations 24 hours a day, 7 days per week. Some of the first businesses
to require such operations were the banks, which introduced ATMs that demanded constant uptime. The
customer said, “We can no longer tolerate a shutdown.”
Now that we have painted a clear picture of the history of cabling infrastructure, we’ll discuss the concept of
modularity and its importance in the data center.
Slide 12
Modularity is an important concept in the contemporary data center. Modular, scalable Network Critical
Physical Infrastructure (NCPI) components have been shown to be more efficient and more cost effective.
The data cabling industry tackled the issue of modularity decades ago. Before the patch panel was
designed, many adds, moves and changes were made by simply running new cable. After years of this ‘run
a new cable’ mentality, wiring closets and ceilings were loaded with unused data cables. Many wiring
closets became cluttered and congested. The strain on ceilings and roofs from the weight of unused data
cables became a potential hazard. The congestion of data cables under the raised floor also impeded
proper cooling and exponentially increased the potential for human error and downtime.
Slide 13
In the realm of data cabling, the introduction of the patch panel brought an end to the ‘run a new cable’
philosophy and introduced modularity to network cabling. The patch panel, located either on the data center
floor or in a wiring closet, is the demarcation point where end points of bulk cable converge. If a data center
manager were to trace a data cable from end to end, starting at the patch panel, he would probably find
himself ending at the wall plate. This span is known as the backbone.
The modularity of the system is in the use of patch cables. The user plugs his patch cable into a wall plate. If
he needs to move a computer, for example, he simply unplugs his patch cable and connects into a different
wall plate. The same is true on the other end, back at the patch panels. If a port on a hub or router
malfunctions, the network administrator can simply unplug it and connect it into another open port.
Data center backbone cabling is typically fixed rather than scalable: once in place, it is rarely changed. The
data cabling backbone, 90% of the time, is located behind the walls, not out in the open.
Typically a network backbone, when installed, especially in new construction scenarios, accounts for future
growth considerations. Adds, moves and changes can be very costly once the walls are constructed. In new
construction it is best to wire as much of the building as possible, with the latest cable standard. This
reduces expenses once the walls are constructed.
Now that we have discussed the concept of modularity, let’s overview the different types of data cables that
exist in a data center.
Slide 14
So, what are the different types of common data center specific data cables?
Category 5 (Cat 5) was originally designed for use with 100 Base-T. Cat 5e supports 1 Gig Ethernet. Cat 6a
supports 10 Gig Ethernet. It is important to note that a higher rated cable can be used to support slower
speeds, but the reverse is not true. For example, a Cat 5e installation will not support 10 Gig Ethernet, but
Cat 6a cabling will support 100 Base-T.
Cable assemblies can be defined as a length of bulk cable with a connector terminated onto both ends.
Many of the assemblies used are patch cables of various lengths that match or exceed the cabling standard
of the backbone. A Cat 5e backbone requires Cat 5e or better patch cables.
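The compatibility rule above (a higher-rated cable supports slower speeds, but not the reverse) can be sketched as a simple lookup. The category rankings and speed figures follow the transcript; the function and variable names are illustrative.

```python
# Ordered from lowest to highest rating; a higher-rated cable can
# always substitute for a lower-rated one, but not vice versa.
CATEGORY_RANK = {"Cat 5": 1, "Cat 5e": 2, "Cat 6a": 3}
MAX_SPEED_MBPS = {"Cat 5": 100, "Cat 5e": 1000, "Cat 6a": 10000}

def patch_cable_ok(backbone: str, patch: str) -> bool:
    """A patch cable must match or exceed the backbone's category."""
    return CATEGORY_RANK[patch] >= CATEGORY_RANK[backbone]

print(patch_cable_ok("Cat 5e", "Cat 6a"))  # True: higher rating is fine
print(patch_cable_ok("Cat 6a", "Cat 5e"))  # False: Cat 5e cannot serve a Cat 6a backbone
```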
Slide 15
Data center equipment can require both standard and custom cables. Some cables are specific to the
equipment manufacturer. One example of a common connection: a Cisco router with a 60-pin LFH connector
connecting to a device with a V.35 interface requires an LFH60 to V.35 male DTE cable.
An example of a less common connection would be a stand-alone tape backup that may have a SCSI
interface. If the cable that came with the equipment does not match up to the SCSI card in a computer, the
data center manager will find himself looking for a custom SCSI cable.
A typical example of the diversity of cables required in the data center is a high speed serial router cable. In
a wide area network (WAN), routers are typically connected to modems, which are called DSU/CSUs.
Some router manufacturers feature unorthodox connectors on their routers. Depending on the interface that
the router and DSU/CSU use to communicate to one another, several connector possibilities exist.
Other devices used in a computer room can require any one of a myriad of cables. Common devices
besides the networking hardware are telco equipment, KVMs, mass storage, monitors, keyboard and
mouse, and terminal servers. Sometimes brand-name cables are expensive or unavailable. A large market
of manufacturer equivalent cables exists, from which the data center manager can choose.
Slide 16
When discussing data center power cabling, it is important to note that American Wire Gauge (AWG) copper
wire is the common medium for transporting power in the data center. This has been the case for many
years and it still holds true in modern data centers.
The formula for power is Amps x Volts = Watts, and data center power cables are delineated by amperage.
The more power that needs to be delivered to the load, the higher the amperage has to be. (Note: The
voltage will not be high under the raised floor. It will be less than 480V; most servers are designed to handle
120 or 208V.)
For a given level of power, amperage and voltage are inversely related: raising the voltage lowers the
amperage drawn. As the amperage increases or decreases, the gauge of the wire needs to be larger or
smaller to accommodate the change in amperage.
AWG ratings organize copper wire into numerous recognizable and standard configurations.
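The relationship described above can be sketched in a few lines. The formula (Power = Volts x Amps) is from the transcript; the function name and the example wattage are illustrative.

```python
def required_amps(power_watts: float, volts: float) -> float:
    """Amps = Power / Volts, rearranged from Power = Volts x Amps."""
    return power_watts / volts

# Example: a 2,400 W load at the two common server voltages.
print(required_amps(2400, 120))  # 20.0 A
print(required_amps(2400, 208))  # ~11.5 A: same power, higher voltage, lower amperage
```

The second call illustrates the inverse relationship: delivering the same power at a higher voltage needs less amperage, and therefore a lighter-gauge wire.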
A relatively new trend in the domain of data center power cabling is the invention of the whip. Whips are pre-
configured cables with a twist lock cap on one end and insulated copper on the other end. The insulated
copper end feeds a breaker in the main PDU; the twist lock end feeds the rack mounted PDU that supplies
the intelligent power strips in the rack. Server equipment then plugs directly into the power strip. With whips,
there is no need for wiring underneath the floor (with the possible exception of the feed to the main PDU
breakers). Thus, the expense of a raised floor can be avoided. Another benefit of whips is that a licensed
electrician is not required to plug the twist lock connectors of the whip into the power strip twist lock
receptacles.
Slide 17
Dual cord, dual power supply also introduced significant changes to the data center power cabling scheme.
In traditional data centers, computers had one feed from one transformer or panel board, and the earliest
PDUs still only had one feed to servers. Large mainframes required two feeds to keep systems consistently
available. Sometimes two different utilities were feeding power to the building.
Now, many servers are configured to support two power feeds, hence the dual cord power supply. Because
data center managers can now switch from one power source to another, maintenance can be performed on
infrastructure equipment without having to take servers offline.
It is important to understand that the power cabling requirements to support the dual cord power supply
configuration have doubled as a result. The same wire, the same copper, and the same sizes are required
as were required in the past, but now data center designers need to account for double the power
infrastructure cable, including power related infrastructure that may be located in the equipment room that
supports the data center.
Now that we’ve talked about a basic overview of both power and data cabling, let’s take a look at some best
practices for cabling in the data center.
Slide 18
Some best practices for data cabling include:
• Overhead deployments
o Overhead cables that are in large bundles should run in cable trays or troughs. If the
manufacturer of the tray or trough offers devices that keep the cable bend radius in check,
they should be used as well. Do not overtighten tie wraps or other hanging devices; doing so
can interfere with the performance of the cable.
• Underfoot deployments
o Be cognizant of the cable’s bend radius specifications and adhere to them strictly. Do not
overtighten tie wraps; this can interfere with the performance of the cable.
• Rack installations
o As discussed previously, be cognizant of the cable’s bend radius specifications and adhere
to them strictly. Do not overtighten tie wraps; this can interfere with the performance of the
cable. Use vertical and/or horizontal cable management to take up any extra slack.
• Testing cables
o There are several manufacturers of test equipment designed specifically to test today’s high
speed networks. Make sure that the installer tests and certifies every link. A data center
manager can request a report that shows the test results.
Are there any common practices that should be avoided? When designing and installing the network’s
backbone, care should be taken to route all Unshielded Twisted Pair (UTP, the U.S. standard) or Shielded
Twisted Pair (STP, the European standard) cables away from possible sources of interference such as
power lines, electric motors, or overhead lighting.
Slide 19
Power cabling best practices are described in the National Electric Code (NEC).
When addressing best practices in power cabling, it is important that data center professionals use the term
“continuous load.” The continuous load is defined as any load left on for more than 3 hours, which is, in
effect, all equipment in a data center. Due to the requirements of the continuous load, data center operators
are forced to take all rules that apply to amperages and wire sizes and de-rate those figures by 20%. For
example, if a wire is rated for 100 amps, the best practice is not to run more than 80 amps through it. Let’s
discuss this further.
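The 20% de-rating rule above reduces to a one-line calculation. The 0.80 factor and the 100 A example come from the transcript; the function name is illustrative.

```python
# Continuous-load de-rating: plan no more than 80% of a conductor's
# rated amperage for loads left on more than 3 hours, which covers
# essentially all data center equipment.
DERATE_FACTOR = 0.80

def max_continuous_amps(wire_rating_amps: float) -> float:
    """Maximum planned amperage for a continuously energized conductor."""
    return wire_rating_amps * DERATE_FACTOR

print(max_continuous_amps(100))  # 80.0, matching the transcript's example
print(max_continuous_amps(30))   # 24.0
```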
Over time, cables can get overheated. The de-rating approach helps avoid overheated wires that can lead
to shorts and fires. If the quantity of copper in the cable is insufficient for the amperages required, it will heat
to the point of melting the insulation. If insulation fails, the copper is exposed to anything metal or grounded
in its proximity. If it gets close enough, the electricity will jump or arc and could cause a fire to start.
Undersized power cables also stress the connections. If any connection is loose, the excess load
exacerbates the situation. The de-rating of the power cables takes these facts into account.
To further illustrate this example, let’s compare electricity to water. If too much water gets pushed into a pipe,
the force of the water will break the pipe if it is too small. Amperages are forcing electricity through the wire;
therefore, the wire is going to heat up if the wire is undersized.
The manufacturer, or supplier, of the cable provides the information regarding the circular mil area of the
wires inside the cable. The circular mil measurement does not take into account the wire insulation. The
circular mil area determines how much amperage can pass through that piece of copper.
Next, let’s compare overhead and under the floor installations.
Slide 20
The benefit of under the floor cabling is that the cable is not visible. Many changes can be made and the
wiring will not be seen. The disadvantage of under the floor cabling is the significant expense of constructing
a raised floor. Data center designers also need to take into account the danger of opening up a raised floor
and exposing other critical systems like the cooling air flow system, if the raised floor is used as a plenum.
With overhead cabling, data center designers can use cabling trays to guide the cables to the equipment.
They can also run conduit from the PDU directly to the equipment or computer load. The conduit is not
flexible, however, which is not good if constant change is expected.
A best practice is to use overhead cables which are all pre-configured in the factory and placed in the
troughs to the equipment. This standardization creates a more convenient, flexible environment for the data
center of today.
Slide 21
Where your power source is, where the load is, and what the grid is like, all affect the design and layout of
the cabling in the data center. When discussing overhead cabling, data center designers are tasked with
figuring out the proper placement of cables ahead of time. Then, they can decide if it would be best to have
the troughs directly over the equipment or in the aisle. Also designers have to take into account local codes
for distributing power. For example, there are established rules that require that sprinkler heads not be
blocked. If there is a 24 inch (60.96 cm) cable tray, designers could not run that tray any closer than 10
inches (25.4 cm) below the sprinkler head, so that the head is not covered or obstructed. They would need
to account for this upfront in the design stage.
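A sketch of the clearance check just described. The 10 inch figure is the transcript's example; the actual minimum comes from local code, and the function name is illustrative.

```python
# Minimum vertical distance a cable tray must keep below a sprinkler
# head so the head is not obstructed (example figure; consult local code).
MIN_CLEARANCE_IN = 10.0

def tray_placement_ok(distance_below_sprinkler_in: float) -> bool:
    """True if the tray sits at least the minimum clearance below the head."""
    return distance_below_sprinkler_in >= MIN_CLEARANCE_IN

print(tray_placement_ok(12.0))  # True
print(tray_placement_ok(8.0))   # False: tray would obstruct the head
```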
Now that we’ve touched upon best practices for installation, let’s discuss some strategies for selecting
cabling topologies.
Slide 22
Network Topology deals with the different ways computers (and network enabled peripherals) are arranged
on or connected to a network. The most common network topologies are:
• Star. All computers are connected to a central hub.
• Ring. Each computer is connected to two others, such that, starting at any one computer, the
connection can be traced through each computer on the ring back to the first.
• Bus. All computers are connected to a central cable, normally termed bus or backbone.
• Tree. A group of star networks, each connected to a linear backbone.
For data cabling, in IEEE 802.3, UTP/STP Ethernet scenarios, a star network topology is used. Star
topology implies that all computers are connected to a central hub. In its simplest form, a UTP/STP Ethernet
star topology has a hub at the center and devices (e.g., personal computers, printers) connected directly
to it. Small LANs fit this simple model. Larger installations can be much more complicated, with segments
connecting to other segments, but the basic Star topology remains intact.
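The star topology described above can be sketched as a hub with directly attached devices; traffic between any two devices always passes through the hub. The device names are illustrative.

```python
# A star: one central hub, every device attached directly to it.
star = {"hub": ["pc-1", "pc-2", "printer-1"]}

def star_path(a: str, b: str, topology: dict = star) -> list:
    """Path between two edge devices in a star: always via the hub."""
    hub = next(iter(topology))
    assert a in topology[hub] and b in topology[hub]
    return [a, hub, b]

print(star_path("pc-1", "printer-1"))  # ['pc-1', 'hub', 'printer-1']
```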
Slide 23
Power cables can be laid out either overhead in troughs or below the raised floor. Many factors come into
play when deciding on a power distribution layout from the PDUs to the racks. The size of the data center,
the nature of the equipment being installed and budget are all variables. However, be aware that two
approaches are commonly utilized for distribution of power cables in the data center.
Slide 24
One approach is to run the power cables inside conduits from large wall mounted or floor mounted PDUs to
each cabinet location. This works moderately well for a small server environment with a limited number of
conduits. This does not work well for larger data centers when cabinet locations require multiple power
receptacles.
Slide 25
Another approach, more manageable for larger server environments, is the installation of electrical
substations at the end of each row in the form of circuit panels. Conduit is run from power distribution units
to the circuit panels and then to a subset of connections to the server cabinets.
This configuration uses shorter electrical conduit, which makes it easier to manage, less expensive to install,
and more resistant to a physical accident in the data center. For example, if a heavy object is dropped
through a raised floor, the damage it can cause is greatly reduced in a room with segmented power,
because fewer conduits overlap one another in a given area.
Even more efficient is to deploy PDUs in the racks themselves and to have whips feed the various racks in
the row.
Slide 26
What are the best practices for cable management and organization techniques? Some end users purchase
stranded bulk data cable and RJ45 connectors and manufacture their own patch cables on site. While
doing this ensures a clean installation with no excess wire, it is time consuming and costly. Most companies
find it more prudent to inventory pre-made patch cables and use horizontal or vertical cable management to
take up any excess cable. Patch cables are readily available in many standard lengths and colors.
Are there any common practices that should be avoided? All of today’s high speed networks have minimum
bend radius specifications for the bulk cable. This is also true for the patch cables. Care should be taken not
to bend patch cables tighter than their minimum bend radius.
Slide 27
Proper labeling of power cables in the data center is a recommended best practice. A typical electrical panel
labeling scheme is based on a split bus (two buses in the panel) where the labels represent an odd
numbered side and an even numbered side. Instead of normal sequenced numbering, the breakers would
be numbered 1, 3, 5 on the left hand side and would be numbered 2, 4, 6 on the right side, for example.
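The split-bus numbering just described can be generated programmatically; the function name is illustrative.

```python
def breaker_sides(total_breakers: int):
    """Split-bus panel numbering: odd breakers down the left side,
    even breakers down the right side."""
    left = [n for n in range(1, total_breakers + 1) if n % 2 == 1]
    right = [n for n in range(1, total_breakers + 1) if n % 2 == 0]
    return left, right

left, right = breaker_sides(6)
print(left)   # [1, 3, 5]
print(right)  # [2, 4, 6]
```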
When labeling a power cable or whip, the PDU designation from the circuit breaker would be a first identifier.
This identifier number indicates where the whip comes from. Identifying the source of the power cable can
be complicated because the PDU that is physically closest to the rack may not be the one feeding the whip.
In addition, data center staff may want to access the
“B” power source even though the “A” power source might be physically closer. This is why the power
cables need to be properly labeled at each end. The cable label needs to indicate the source PDU (i.e.
PDU1) and also identify the circuit (i.e. circuit B).
Ideally on the other end of the cable, a label will indicate what load the cable is feeding (i.e. SAN device, or
Processor D23). To help clarify labeling, very large data centers are laid out in two-foot squares that match
the raised floor. They are usually addressed with east/west and numbered designations. For example, “2
west by 30 east” identifies the location of an exact square on the data center floor (which is supporting a
particular piece or pieces of equipment). Therefore the label identifies the load that is being supported by
the cable. Labeling of both ends of the cable in an organized, consistent manner allows data center
personnel to know the origin of the opposite end.
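The two-ended labeling scheme described above might be sketched as follows. The label formats, PDU and circuit names, and the grid address are illustrative, following the transcript's examples (PDU1, circuit B, "2 west by 30 east").

```python
def source_label(pdu: str, circuit: str) -> str:
    """Label for the source end: which PDU and which circuit feed the cable."""
    return f"{pdu}-{circuit}"

def load_label(load: str, grid: str) -> str:
    """Label for the load end: what the cable feeds, and its floor-grid square."""
    return f"{load} @ {grid}"

print(source_label("PDU1", "B"))                         # PDU1-B
print(load_label("Processor D23", "2 west by 30 east"))  # Processor D23 @ 2 west by 30 east
```

With both ends labeled this way, staff at either end of a run can identify the opposite end without tracing the cable.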
Slide 28
With network data cabling, once the backbone is installed and tested it should be fairly stable. Infrequently,
a cable may become exposed or damaged and therefore need to be repaired or replaced. But once in place,
the backbone of a network should remain secure. Occasionally, patch cables can be jarred and damaged;
this occurs most commonly on the user end.
Since the backbone is fairly stable except for occasional repair, almost all changes are initiated simply by
disconnecting a patch cable and reconnecting it somewhere else. The modularity of a well designed cabling
system allows users to disconnect from one wall plate, connect to another and be back up and running
immediately. In the data center, adds, moves and changes should be as simple as connecting and
disconnecting patch cables.
So what are some of the challenges associated with cabling in the data center? We’ll talk about three of the
more common challenges.
Slide 29
The first challenge is associated with useful life.
The initial design and cabling choices can determine the useful life of a data cabling plant. One of the most
important decisions to make when designing a network is choosing the medium: copper, fiber or both?
Every few years newer-faster-better copper cables are introduced into the marketplace, but fiber seems to
remain relatively unchanged. If an organization chose to install FDDI grade 62.5/125 fiber 15 years ago, that
organization may still be using the same cable today. By contrast, if the same organization had installed Cat
5, it more than likely would have replaced that cable by now. In the early days, few large installations
were done in fiber because of the cost. The fiber was more expensive and so was the hardware that it
plugged into. Now the costs of fiber and copper are much closer. Fiber cabling is also starting to change.
The current state of the art is 50/125 laser optimized for 10 Gig Ethernet.
There are a few issues with cables and cabling in the data center that affect airflow and cooling. Cables
inside of an enclosed cabinet need to be managed so that they allow for maximum airflow, which helps
reduce heat. When cooling is provided through a raised floor it is best to keep that space as cable free as
possible. For this reason, expect to see more and more cables being run across the tops of cabinets as
opposed to at the bottom or underneath the raised floor.
Many manufacturers offer labeling products for wall plates, patch panels and cables. Also software
packages exist that help keep track of cable management. In a large installation, these tools can be
invaluable.
Let’s take a look at some expenses associated with cabling in the data center.
Slide 30
For data cabling, the initial installation of a cabling plant, and the future replacement of that plant, are the
two greatest expenses. Beyond installation and replacement costs, the only other expense is adding patch
cables as the network grows. The cost of patch cables is minimal considering the other costs in an IT
budget. Cabling costs are, for the most part, up front costs.
Regarding power cables, the design of the data center, and the location of the PDUs, will have a significant
impact on costs. Dual cord power supplies are driving up the cost because double the power cabling is
required. Design decisions are critical. Where will the loads be located? How far from the power distribution?
What if PDUs are fed from different circuits? If not planned properly, unnecessarily long power cable runs
will be required and will drive up overall data center infrastructure costs.
Slide 31
How are cables replaced?
Patch cables are replaced by simply unplugging both ends and connecting the new one. However, cables
do not normally wear out. Most often, if a cable shorts, it is due to misuse or abuse. Cable assemblies have
a lifetime far beyond the equipment to which they are connected.
Slide 32
The equipment changes quite frequently in the data center; on average, a server changes every 2-3
years. It is important to note that power cabling only fails at the termination points. The maintenance occurs
at the connections. Data center managers need to scan those connections and look for hot spots. It is also
prudent to scan the large PDU and its connection off the bus for hot spots.
Heat indicates that there is either a loose connection or an overload. By doing the infrared scan, data center
operators can sense that failure before it happens. In the dual cord environment, it becomes easy to switch
to the alternate source, unplug the connector, and check its connection.
Slide 33
Every few feet, the power cable is labeled with a voltage rating, an amperage rating, and the number of
conductors that can be found inside the cable. This information is stamped on the cable. American Wire
Gauge (AWG) is a rating of the size of the copper; the cable stamp also identifies the number of conductors.
Inside a whip there are a minimum of 3 wires: one hot, one neutral, and one ground. It is also possible to
have 5 wires (3 hot, 1 neutral, 1 ground) inside the whip. Feeder cables which feed the Uninterruptible
Power Supply (UPS) and feed the PDU are thicker, heavier cables. Single conductor cables (insulated
cables with multiple strands of uninsulated copper wires inside) are usually placed in groups within metal
conduit to feed power hungry data center infrastructure components such as large UPSs and Computer
Room Air Conditioners (CRACs). Multiple conductor cables, (cables inside the larger insulated cable that
are each separately insulated) are most often found on the load side of the PDU. Single conductors are
most often placed within conduit, while multiple conductor cables are generally distributed outside of the
conduit. Whips are multiple conductor cables.
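The conductor counts described above can be summarized in a small table. The transcript does not name the five-wire case; labeling it "three-phase" is my assumption, and the dictionary structure is illustrative.

```python
# Conductor make-up of a whip: minimum of three wires (hot, neutral,
# ground), or five wires (3 hot, 1 neutral, 1 ground).
WHIP_CONFIGS = {
    "single-phase": {"hot": 1, "neutral": 1, "ground": 1},
    "three-phase":  {"hot": 3, "neutral": 1, "ground": 1},
}

for name, wires in WHIP_CONFIGS.items():
    total = sum(wires.values())
    print(f"{name}: {total} conductors")  # totals of 3 and 5, matching the transcript
```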
Slide 34
To summarize, let’s review some of the information that we have covered throughout the course.
• A modular, scalable approach to data center cabling is more energy efficient and cost effective
• Copper and fiber data cables running over Ethernet networks are considered the standard for data
centers
• American Wire Gauge copper cable is a common means of transporting power in the data center
• Cabling to support dual cord power supplies helps to minimize single points of failure in a data
center
• To minimize cabling costs, it is important for data center managers to take a proactive approach to
the design, build, and operation of the data center of today
Slide 35
Thank you for participating in this course.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the Notes tab to read a transcript of the narration.
Slide 3: Objectives
At the completion of this course, you will be able to:
• Explain why cooling in the data center is so critical to high availability
• Distinguish between Precision and Comfort Cooling
• Recognize how heat is generated and transferred
• Define basic terms like Pressure, Volume and Temperature as well as their units of measurement
• Describe how these terms are related to the Refrigeration Cycle
• Describe the Refrigeration Cycle and its components
Slide 4: Introduction
Every Information Technology professional who is involved with the operation of computing equipment
needs to understand the function of air conditioning in the data center or network room. This course
explains the function of basic components of an air conditioning system for a computer room.
Slide 5: Introduction
Whenever electrical power is being consumed in an Information Technology (IT) room or data center,
heat is being generated. We will talk more about how heat is generated a little later in this course. In
the Data Center Environment, heat has the potential to create significant downtime, and therefore must
be removed from the space. Data Center and IT room heat removal is one of the most essential yet
least understood of all critical IT environment processes. Improper or inadequate cooling significantly
detracts from the lifespan and availability of IT equipment. A general understanding of the fundamental
principles of air conditioning and the basic arrangement of precision cooling systems facilitates more
precise communication among IT and cooling professionals when specifying, operating, or maintaining
a cooling solution. The purpose of precision air-conditioning equipment is the precise control of both
temperature and humidity.
Slide 6: Evolution
Despite revolutionary changes in IT technology and products over the past decades, the design of cooling
infrastructure for data centers has changed very little since 1965. Although IT equipment has always
required cooling, the requirements of today’s IT systems, combined with the way that those IT systems are
deployed, have created the need for new cooling-related systems and strategies which were not foreseen
when the cooling principles for the modern data center were developed over 30 years ago.
As damaging as the wrong ambient conditions can be, rapid temperature swings can also have a
negative effect on hardware operation. This is one of the reasons hardware is left powered up, even
when not processing data. According to ASHRAE, the recommended upper temperature limit for data
center environments is 80.6°F (27°C). Precision air conditioning is designed to constantly maintain
temperature within 1°F (0.56°C). In contrast, comfort systems are unable to provide such precise
temperature and humidity control.
Low Humidity – Low humidity increases the possibility of static electric discharges. Such static discharges
can corrupt data and damage hardware.
What is Temperature?
Temperature is most commonly thought of as how hot or cold something is. It is a measure of heat
intensity, commonly expressed on one of three scales: Celsius, Fahrenheit, and Kelvin.
What is Pressure?
Pressure is a basic physical property of a gas. It is measured as the force exerted by the gas per unit area
on its surroundings.
What is Volume?
Volume is the amount of space taken up by matter. A balloon illustrates the relationship between pressure
and volume: when the pressure inside the balloon exceeds the pressure outside, the balloon expands, and
its volume increases until the pressures balance.
We will talk more about the relationship between pressure, volume and temperature a little later in this
course.
A second property of heat is that heat energy cannot be destroyed. The third property is that heat
energy can be transferred from one object to another. Considering again the ice cube placed on a hot
surface, the heat from the surface is not destroyed; rather, it is transferred to the ice cube, which
causes it to melt.
Conduction is the process of transferring heat through a solid material. Some substances conduct heat
more easily than others. Solids are better conductors than liquids and liquids are better conductors than
gases. Metals are very good conductors of heat, while air is a very poor conductor of heat.
Radiation related to heat transfer is the process of transferring heat by means of electromagnetic waves,
emitted due to the temperature difference between two objects.
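The difference in conductivity between metals and air can be made concrete with Fourier's law of conduction. The law and the conductivity values below are standard textbook material, not figures given in this course:

```python
# Fourier's law for steady conduction through a flat slab: Q = k * A * dT / d
# (k = thermal conductivity, A = area, dT = temperature difference, d = thickness)
def conduction_watts(k_w_per_m_k, area_m2, delta_t_k, thickness_m):
    """Heat flow in watts through a slab of the given conductivity."""
    return k_w_per_m_k * area_m2 * delta_t_k / thickness_m

# Copper (k ~ 400 W/m*K) versus still air (k ~ 0.026 W/m*K), same geometry:
# a 1 m^2 slab, 1 cm thick, with a 10 K temperature difference across it.
print(round(conduction_watts(400.0, 1.0, 10.0, 0.01)))    # 400000 W
print(round(conduction_watts(0.026, 1.0, 10.0, 0.01)))    # 26 W
```

The enormous gap is why metal heat sinks move heat away from chips effectively, while trapped air acts as an insulator.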
Let’s take a closer look at heat generation at the server level. Approximately 50% of the heat energy
released by servers originates in the microprocessor. A fan moves a stream of cold air across the chip
assembly. The server or rack-mounted blade assembly containing the microprocessors usually draws cold
air into the front of the chassis and exhausts it out of the rear. The amount of heat generated by servers is
on a rising trend. A single blade server chassis can release 4 Kilowatts (kW) or more of heat energy into
the IT room or data center. Such a heat output is equivalent to the heat released by forty 100-Watt light
bulbs and is actually more heat energy than the capacity of the heating element in many residential cooking
ovens.
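A quick sanity check of those figures, plus the conversion to BTU/hr in which cooling equipment is often rated. The 3.412 BTU/hr-per-watt factor is a standard unit conversion, not course data:

```python
# One 4 kW blade server chassis, compared against 100 W bulbs and
# expressed as a cooling load in BTU/hr (1 W is about 3.412 BTU/hr).
BTU_PER_HR_PER_WATT = 3.412

chassis_w = 4000
bulbs = chassis_w / 100
cooling_load_btu_hr = chassis_w * BTU_PER_HR_PER_WATT
print(bulbs)                          # 40.0 -> forty 100 W bulbs
print(round(cooling_load_btu_hr))     # 13648 BTU/hr the cooling system must remove
```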
Now that we have learned about the physics and properties of heat, we will talk next about the Ideal Gas
Law.
The relation between pressure (P), volume (V) and temperature (T) is known as the Ideal Gas Law,
which states PV/T = constant. In this equation, P is the pressure of the gas, V the volume it occupies, and
T its absolute temperature. In simpler terms, if pressure is constant, an increase in temperature results in a
proportional increase in volume. If volume is constant, an increase in temperature results in a
proportional increase in pressure. Inversely, if volume is decreased and pressure remains constant,
the temperature must decrease. Basically, pressure and volume are directly proportional to
temperature and inversely proportional to each other.
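These proportionalities can be sketched numerically. The starting values below are arbitrary; note that T must be an absolute temperature (kelvin) for the law to hold:

```python
# PV/T = constant for a fixed quantity of gas.
# Arbitrary starting state: P in kPa, V in liters, T in kelvin.
P1, V1, T1 = 100.0, 10.0, 300.0
k = P1 * V1 / T1                      # the constant for this gas sample

# Constant pressure: raising T by 10% raises V by 10%.
T2 = 330.0
V2 = k * T2 / P1
print(round(V2, 3))                   # 11.0 liters

# Constant volume: the same temperature rise raises P by 10% instead.
P2 = k * T2 / V1
print(round(P2, 3))                   # 110.0 kPa
```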
The working fluid used in the refrigeration cycle is known as the refrigerant. Modern systems primarily
use fluorinated hydrocarbons that are nonflammable, non-corrosive, nontoxic, and non-explosive.
Refrigerants are commonly referred to by their ASHRAE numerical designation. Environmental
concerns over ozone depletion have led to legislation encouraging or requiring the use of alternate
refrigerants such as R-134a. Additional legislation related to the use of alternate refrigerants is under
consideration.
Refrigerant changes its physical state from liquid to gas and back to liquid again each time it traverses the
various components of the refrigeration cycle. As the refrigerant changes state from liquid to gas, heat
energy flows into the refrigerant from the area to be cooled (the IT environment for example). Conversely,
as the refrigerant changes state from gas to liquid, heat energy flows away from the refrigerant to a
different environment (outdoors or to a water source).
To summarize, let’s review some of the information that we have covered throughout this course.
• When IT equipment is operating, heat is generated, and the removal of this heat is critical to the
proper functioning of data center environments
• Precision Cooling systems are required to provide adequate cooling conditions for IT spaces
• Heat, Pressure, Temperature and Volume are interrelated for gases
• Heat is transferred via Conduction, Convection and Radiation, and it only moves naturally from
areas of high heat to areas of low heat
• The Refrigeration Cycle is a closed cycle of evaporation, compression, condensation and expansion
that serves to remove heat from the data center
Fundamentals of Cooling II Humidity in the Data Center
Transcript
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls allow you to
navigate through the eLearning experience. Using your browser controls may disrupt the normal play of the course.
Click the attachments link to download supplemental information for this course. Click the Notes tab to read a transcript
of the narration.
Slide 4: Introduction
Every Information Technology professional who is involved with the operation of computing equipment needs to
understand the importance of air conditioning in the data center or network room. Data center and IT room heat
removal and humidity management is one of the most essential yet least understood of all critical IT environment
functions. Improper or inadequate cooling and humidity management significantly detracts from the lifespan and
availability of IT equipment. A general understanding of these principles facilitates more precise communication among
IT and cooling professionals when specifying, operating, or maintaining a cooling solution. This course explains the role
humidity plays in data center cooling.
Slide 5: Introduction
A data center must continuously operate at peak efficiency in order to maintain the business functions it supports and to
decrease operational expenses. In this environment, heat has the potential to create significant downtime, and
therefore must be removed from the space. In addition to heat, the control of humidity in Information Technology
environments is essential to achieving high availability. Humidity can affect sensitive electronic equipment in adverse
ways, and therefore strict humidity controls are required. IT and cooling professionals need a general understanding of
the effects of humidity on their mission-critical systems in order to achieve peak performance.
carbon dioxide, and water vapor. The water vapor in air is known as humidity. Air in the IT environment must contain the
proper amount of humidity in order to maximize the availability of computing equipment. Too much or too little humidity
directly contributes to reduced productivity and equipment downtime.
present. If the relative humidity is 100%, then the air is holding all the water vapor it possibly can.
The amount of water that can be contained in this volume of air is not fixed however. As the temperature of air
increases it has the ability to hold more and more water vapor. As air temperature decreases, its ability to hold water
also decreases.
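This temperature dependence can be sketched with the Magnus approximation for saturation vapor pressure. The formula and its coefficients are a common engineering choice, not something given in this course:

```python
import math

# Magnus approximation: saturation vapor pressure of water in air, in hPa.
# (Coefficients are one common published choice; an illustrative assumption.)
def saturation_vapor_pressure_hpa(temp_c):
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Warmer air can hold more water vapor:
print(round(saturation_vapor_pressure_hpa(20), 1))   # about 23.3 hPa
print(round(saturation_vapor_pressure_hpa(30), 1))   # about 42.3 hPa

# Relative humidity is the ratio of actual to saturation vapor pressure.
actual_hpa = 14.0
rh_at_20c = 100 * actual_hpa / saturation_vapor_pressure_hpa(20)
print(round(rh_at_20c))                              # about 60 (percent)
```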
center do not work together in a coordinated fashion, they are likely to fall short of their cooling capacity and incur a
higher operating cost. CRAC units normally operate in four modes: Cooling, Heating, Humidification and
Dehumidification. While two of these conditions may occur at the same time (i.e., cooling and dehumidification), all
systems within a defined area should always be operating in the same mode. Demand fighting can have drastic effects
on the efficiency of the CRAC system leading to a reduction in the cooling capacity, and is one of the primary causes of
excessive energy consumption in IT environments. If not addressed, this problem can result in a 20-30%
reduction in efficiency which, in the best case, results in wasted operating costs and, in the worst case,
downtime due to insufficient cooling capacity.
Slide 16: Humidification Systems
There are three types of humidification systems commonly installed in computer room air conditioners and air handlers:
Steam canister humidifiers, Infrared humidifiers and Ultrasonic humidifiers. All three designs effectively humidify the
IT environment.
the IT environment against uncontrolled humidity gain or loss from outside the room. A vapor barrier could
simply involve sealing doorways, or it could mean retrofitting the structure of the data center to seal the entire space. It
is important to consider certain conditions when utilizing a vapor barrier. These include:
Sealing perimeter infiltrations – This involves blocking and sealing all entrance points that lead to uncontrolled
environments
Sealing doorways – Doors and doorways should be sealed with high efficiency gaskets and sweeps to guard
against air and vapor leaks
Painting perimeter walls – All perimeter walls from the structural deck to the ceiling should be treated with a paint
impenetrable to moisture in order to minimize the amount of moisture infiltration.
Avoiding unnecessary openings – This becomes particularly relevant in spaces that have been converted to IT
rooms. Open access windows, mail slots, and too-large cable openings should all be blocked or sealed.
Note the exhaust air exiting the server has a higher temperature and lower humidity but the dew point is unchanged.
This is because the nature of the heat a server generates raises the temperature of the entering air but does not change
the amount of moisture in the air.
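This behavior (temperature up, relative humidity down, dew point unchanged) can be sketched with a Magnus-style saturation curve; the inlet conditions and coefficients below are illustrative assumptions, not course data:

```python
import math

def svp_hpa(temp_c):
    """Saturation vapor pressure in hPa (Magnus approximation; assumed coefficients)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# A server adds sensible heat only: the moisture content (and therefore the
# vapor pressure and dew point) of the air is unchanged from inlet to exhaust.
inlet_c, inlet_rh = 22.0, 50.0
vapor_hpa = inlet_rh / 100 * svp_hpa(inlet_c)    # fixed across the server

exhaust_c = 35.0                                  # air heated by the server
exhaust_rh = 100 * vapor_hpa / svp_hpa(exhaust_c)
print(round(exhaust_rh, 1))   # well below the 50% inlet value
```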
Slide 27: Operational Set Points
CRAC units should be tested to ensure that measured temperatures (supply & return) and humidity readings are
consistent with design values. Set points for temperature and humidity should be consistent on all CRAC units in the
data center. Unequal set points will lead to demand fighting and fluctuations in the room. Heat loads and moisture
content are relatively constant in an area and CRAC unit operation should be set in groups by locking out competing
modes through either a building management system (BMS) or a communications cable between the CRACs in the
group. No two units should be operating in competing modes during a recorded interval, unless they are
part of a separate group. When grouped, all units in a specific group will operate together for a distinct zone.
Set point parameters should be within the following ranges to ensure system longevity and peak performance.
Temperature of 68-80.6°F (20-27°C), and
Humidity – 40-60% Relative Humidity
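A minimal sketch of the grouping rule described above: every CRAC in a group shares identical set points, and those set points fall inside the recommended ranges. The data layout is hypothetical, not a real CRAC or BMS API:

```python
# Recommended set point ranges from the slide above.
TEMP_RANGE_F = (68.0, 80.6)      # 20-27 degrees C
RH_RANGE_PCT = (40.0, 60.0)

def set_points_ok(units):
    """units: list of (temp_set_f, rh_set_pct), one tuple per CRAC in a group."""
    same = len(set(units)) == 1                       # identical set points
    in_range = all(TEMP_RANGE_F[0] <= t <= TEMP_RANGE_F[1]
                   and RH_RANGE_PCT[0] <= rh <= RH_RANGE_PCT[1]
                   for t, rh in units)
    return same and in_range

print(set_points_ok([(72.0, 45.0), (72.0, 45.0), (72.0, 45.0)]))  # True
print(set_points_ok([(72.0, 45.0), (70.0, 45.0), (72.0, 45.0)]))  # False: demand fighting
```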
Slide 28: Short Cycling
Despite proper operational set points, a common cooling challenge occurs when the cool supply air from the CRAC unit
bypasses the IT equipment and flows directly into the CRAC unit air return duct. This is known as short cycling and is a
leading cause of poor cooling performance in a data center. Temperature measurement is one way to determine if
short cycling is occurring. Measurements should be taken at the CRAC supply duct, CRAC return duct, and at the
server inlet. Return air temperatures lower than the server inlet temperature indicate short cycling
inefficiencies. For example, if the CRAC supply AND return temperatures are 70°F, but the server inlet
temperature is measuring 75°F, this would be an indication of short cycling.
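The measurement rule in this slide reduces to a single comparison. The function below is a sketch of that check, not a real monitoring API:

```python
# Short cycling indicator: return air cooler than the server inlet suggests
# supply air is bypassing the IT load and flowing straight back to the CRAC.
def short_cycling_suspected(return_f, server_inlet_f):
    return return_f < server_inlet_f

# The transcript's example: return at 70 F, server inlet at 75 F.
print(short_cycling_suspected(70.0, 75.0))   # True -> check airflow paths
print(short_cycling_suspected(78.0, 75.0))   # False -> return warmer than inlet, as expected
```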
Fundamentals of Physical Security
Transcript
Slide 1
Welcome to the Data Center University™ course: Fundamentals of Physical Security.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Several companies offer Certified Information Systems Security Professional (CISSP) training, which covers
these areas of physical security.
Physical security reduces data center downtime by reducing human presence in a data center to essential
personnel only.
Slide 8: What needs Protection?
To identify what needs protecting, create a conceptual map of the physical facility. Then, locate the areas
that need to be secured, and classify them by the strength or level of security.
The areas that need to be secured might have concentric boundaries with security strongest at the core…
Slide 9: What needs Protection?
Or the areas that need to be secured might have side-by-side boundaries that require comparable security
levels.
The darkest shading indicates the deepest, strongest security. With depth of security, an inner area is
protected both by its own access methods and by those of the areas that enclose it. In addition, any breach
of an outer area can be met with another access challenge at a perimeter further in. Computer racks stand
at the innermost depth of security, because they house critical IT equipment.
It is important to include in the security map not only areas containing the functional IT equipment of the
facility, but also areas containing elements of the physical infrastructure which, if compromised, could result
in downtime. For example, someone could accidentally shut down the HVAC equipment or deliberately steal
generator starting batteries. A system management console could be fooled into thinking the fire sprinklers
should be activated. (Image on next page)
To summarize, successful physical security needs to consider any form of physical access that can
adversely impact business-critical equipment.
The first question, “Who are you?” establishes or verifies personal identity. The second question, “Why are
you here?” provides justification for physical access, a “reason to be there”.
Slide 12: Access Criteria
Certain individuals who are known to the facility need access to the areas relevant to their position. For
example, the security director will have access to most of the facility but not to client data stored at the
installation. The head of computer operations might have access to computer rooms and operating systems,
but not the mechanical rooms that house power and HVAC facilities. The CEO of the company might have
access to the offices of the security director and IT staff and the public areas, but not the computer rooms or
mechanical rooms.
A reason for access to extremely sensitive areas can be granted to specific people for a specific purpose —
that is, if they “need to know,” and only for as long as they have that need.
Because a person’s organizational role typically implies the reason for access, security focuses on identity
verification. The next section discusses physical security identification methods and devices.
• Amount of data carried
• Computational ability
• Cost of cards
• Cost of reader
The infrared shadow card improves upon the poor security of the bar-code card by placing the bar code
between layers of PVC plastic. The reader passes infrared light through the card, and the shadow of the bar
code is read by sensors on the other side.
The proximity card, sometimes called a “prox card”, is a step up in convenience from cards that must be
swiped or touched to the reader. As the name implies, the card only needs to be in "proximity" with the
reader.
It is a card with a built-in silicon chip for onboard data storage and/or computation. The general term for
objects that carry such a chip is smart media.
Smart cards offer a wide range of flexibility in access control. For example, the chip can be attached to older
types of cards to upgrade and integrate with pre-existing systems, or the cardholder’s fingerprint or iris scan
can be stored on the chip for biometric verification at the card reader — thereby elevating the level of
identification from “what you have” to “who you are.”
Contactless smart cards with “vicinity” range offer nearly ultimate user convenience: a half-second
transaction time with the card never leaving the wallet.
Slide 19: Physical Security Identification Methods and Devices
Who you are refers to identification by recognition of unique physical characteristics. Physical identification
is the natural way people identify one another with nearly total certainty. When accomplished (or attempted)
by technological means, it is called biometrics.
Slide 21: Physical Security Identification Methods and Devices
This chart shows the relative reliability of each of the three basic security identification approaches.
“What you have” is the least reliable form of identification, because there is no guarantee that the correct
person will use it. It can be shared, stolen, or lost and found.
“What you know” is more reliable than “What you have”, but passwords and codes can still be shared, and if
they are written down, they carry the risk of discovery.
“Who you are” is the most reliable security identification approach, because it is based on something
physically unique to you. Biometric devices are generally very reliable, if recognition is achieved — that is, if
the device thinks it recognizes you, then it almost certainly is you. The main source of unreliability for
biometrics is not incorrect recognition or spoofing by an imposter, but the possibility that a legitimate user
may fail to be recognized (“false rejection”).
Slide 22: Physical Security Identification Methods and Devices
A typical security scheme uses methods of increasing reliability. For example, entry into the building might
require a combination of swipe card plus PIN; entry to the computer room might require a keypad code plus
a biometric. Combining methods at an entry point increases reliability at that point; using different methods
for each level significantly increases security at inner levels, since each is secured by its own methods plus
those of outer levels that must be entered first.
Slide 23: Other Physical Security Methods
A safe location enhances physical security. Best practices include keeping a data center in its own building,
away from urban areas, airports, high voltage power lines, flood areas, highways, railways, or hazardous
manufacturing plants. If a data center has to be in the same building as the business it supports, the data
center should be located toward the center of the building, away from the roof, exterior walls, or basement.
Data center location within a building is further discussed in the next section, Building Design.
Slide 26: Other Physical Security Methods
• Concealment -- Make use of barriers to obstruct views of the entrances and other areas of concern
from the outside world. This prevents visual inspection by people who wish to study the building
layout or its security measures.
• Ducts -- Be aware of the placement of ventilation ducts, service hatches, vents, service elevators
and other possible openings that could be used to gain access.
• Avoid Clutter -- Avoid creating spaces that can be used to hide people or things.
• Locks -- Install locks and door alarms to all roof access points so that security is notified
immediately upon attempted access. Avoid points of entry on the roof whenever possible. The keys
should be of the type that cannot be casually duplicated.
• Plumbing -- Take note of all external plumbing, wiring, HVAC, etc., and provide appropriate
protection.
lock and an alert to be issued indicating that an intruder has been caught. A footstep detecting floor can be
added to confirm there is only one person passing through. A new technology for solving this problem uses
an overhead camera for optical tracking and tagging of individuals as they pass, issuing an alert if it detects
more than one person per authorized entry.
the surveillance capability of all the human senses, plus the ability to respond with mobility and intelligence
to suspicious, unusual, or disastrous events. The International Foundation for Protection Officers (IFPO) is a
non-profit organization founded for the purpose of facilitating standardized training and certification of
protection officers. Their Security Supervisor Training Manual is a reference guide for protection officers and
their employers.
Staff and visitors should be required to wear badges that are visible at all times while in a facility.
Visitors should be required to sign in through a central reception area, and be made aware of all visitor
policies prior to facility access.
part of the security problem, they are also part of the solution — the abilities and fallibilities of people
uniquely qualify them to be not only the weakest link, but also the strongest backup.
An asset removal policy should be in place for company materials. Security guards should be trained and
authorized to monitor, document, and restrict the removal of company assets, such as computer media
and computer equipment like laptops.
Beyond an alert staff, the incomparable value of human eyes, ears, brains, and mobility also qualifies
people for consideration as a dedicated element in a security plan — the old-fashioned security guard. The
presence of guards at entry points and roving guards on the grounds and inside the building, while
expensive, can save the day when there is failure or hacking of technological security. The quick response
of an alert guard when something “isn’t right” may be the last defense against a potentially disastrous
security breach.
In protecting against both accidental and deliberate harm, the human contribution is the same: strict
adherence to protocols.
The first objective identifies what needs protecting, and identifies who is permitted access.
The second objective selects a set of access control methods for authorized personnel, and adds other
elements to back up the overall security strategy.
Slide 39: Summary
• Physical security means keeping unauthorized or ill-intentioned people out of places that they do
not belong, such as a data center or other locations that may contain Network Critical Physical
Infrastructure
• Physical security screens people who want to enter a data center -- Network Security screens data
that comes to a data center
• Human error accounts for 60% of data center downtime
• Create a map of the physical facility to identify what needs protecting, then identify the areas that
need to be secured
• The two main questions that security access asks are
• “Who are you?”
• “Why are you here?”
The Fundamentals of Power
Transcript
Slide 1
Welcome to the Fundamentals of Power.
Slide 2
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the Notes tab to read a transcript of the narration.
Slide 3
At the completion of this course, you will be able to:
• Identify basic electricity concepts
• Describe electrical power and its generation
• Differentiate between various power usages in a data center
• Define power factor
• Recognize the importance of electrical safety measures in a data center, and
• Identify potential problem areas in the data center
Slide 4
Power is a primary resource within the data center. Many instances of equipment failure, downtime, and
software and data corruption are the result of power problems. Sensitive components within today’s servers
require power that is free of interruption or distortion. Fortunately, the consequences of large-scale power
incidents are well documented. Across all business sectors, an estimated $104 billion to $164 billion per
year is lost due to power disruptions, with another $15 billion to $24 billion per year in losses attributed to
secondary power quality problems.
It is imperative that servers are isolated from utility power failures, surges, and other potential electrical
problems. The building in which a data center is located could have a mixture of power requirements: air
conditioners, elevators, office equipment, desktop computers, and kitchen area microwaves and
refrigerators. It is important to provide a separate, dedicated power source and power infrastructure for the
data center.
This course will explore the topic of power, and how it is utilized within the data center. Let’s begin by
refreshing ourselves with definitions of some basic electrical terms.
Slide 5
The Volt is a unit of measurement of potential difference or electrical pressure between two points. If the two
points are connected together, they form a circuit and current will flow.
The Ampere is the unit of measurement of electrical current: the rate at which electric charge flows
through a circuit.
The Ohm is the unit of measurement which describes the amount of resistance electricity encounters as it
flows through a circuit.
Slide 6
Hertz is the unit of frequency measurement. One complete cycle of change in voltage direction per second
is equal to one Hertz (Hz).
Alternating Current, or AC, is electrical current that constantly reverses direction, flowing back and forth
through an electrical circuit. Power supplied to a building by a nearby utility is an example of AC power.
Direct Current, or DC, is electrical current that only flows in one direction. The power supplied by a battery is
one example of a DC power source.
To fully demonstrate how all of these terms relate to one another, let’s compare the flow of electricity
through a power cable to the flow of water through a garden hose.
Slide 7
Let’s use a typical garden hose as an illustration for how electricity can work. Water will flow through the
hose at a slow rate, or a fast rate, depending on how far the faucet is opened. Water pressure (equivalent to
voltage) usually remains constant whether the faucet is opened or closed. Current is controlled by the faucet
position (resistance). The faucet is either more open or less open at any given time. The current can also be
controlled by an increase or loss of water pressure (voltage). The amount of water that moves through a
hose in gallons, or liters, per second can be compared to the quantity of electrons that flow per second
through a conductor as measured in amperes.
Slide 8
Our garden hose analogy can also help to explain resistance. Consider a garden hose which is partially
blocked by a large rock. The rock restricts and slows the flow of water in the garden hose. We can say
that the restricted garden hose has more resistance to water flow than does an unrestricted garden hose. If
we want to get more water out of the hose, we would need to turn up the water pressure at the faucet. The
same is true of electricity. Materials with low resistance let electricity flow easily. Materials with higher
resistance require more voltage to make the electricity flow.
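The hose analogy maps directly onto Ohm's law, V = I × R. As a minimal illustration (added here for clarity; the helper name is ours, not part of any course material):

```python
def current(voltage_v: float, resistance_ohm: float) -> float:
    """Ohm's law: current (amperes) = voltage (volts) / resistance (ohms)."""
    return voltage_v / resistance_ohm

# A 120 V source across a 60-ohm load draws 2 A.
print(current(120, 60))   # 2.0
# Doubling the resistance (a "rock in the hose") halves the current;
# more voltage (water pressure) would be needed to restore the flow.
print(current(120, 120))  # 1.0
```

Just as the partially blocked hose needs more water pressure for the same flow, a higher-resistance circuit needs more voltage to drive the same current.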
Slide 9
When discussing the concept of power, it is important to understand the term, electrical load. The load is the
sum of the various pieces of equipment in a data center which consume and are supplied with electrical
power. A typical data center load would consist of computers, networking equipment, cooling equipment,
power distribution equipment and all equipment supported by the electrical infrastructure.
Slide 10
As mentioned in our section on key terms, Alternating Current (AC) and Direct Current (DC) are two forms
of power. Let’s begin to explore the ways in which each is utilized.
When the direction of current flowing in a circuit constantly reverses direction, it is known as Alternating
Current (AC). The electrical current coming into your home is an example of alternating current. Alternating
Current, which comes from the utility company, is switched back and forth approximately 60 times each
second, measured as 60 Hertz. This measurement is called the frequency. The utility determines the
frequency for the AC power that reaches the data center. In the US, frequency is set at 60 Hertz (Hz). In
other countries, 50 Hz is more common.
AC power is a combination of voltage and current. AC voltage at a generating station is stepped up via high
voltage transformers to voltage levels that enable power to be distributed over long distances with minimal
loss of energy.
Slide 11
Direct Current (DC) has several applications in the typical data center, most commonly in telecom
equipment where banks of batteries supply power at 48 Volts DC or in battery systems supporting
uninterruptible power supplies, which can be at potentials over 500 Volts DC. However, whether the supply
is available from banks of batteries, or from DC generators, DC systems are not practical in data centers
because of heavy resistive losses and the large cable sizes required to power information technology
equipment. Almost all data center equipment is designed for the local nominal AC supply voltages.
Now that we have discussed the forms of current, let’s compare single-phase and 3-phase power.
Slide 12
Two common forms of AC power provided to data centers are single phase and 3-phase power. Single-
phase power has only one basic power waveform, while 3-phase power has three basic power waveforms
that are offset from each other by 120º.
When AC power comes into a building as a single voltage source, it is referred to as single phase. If the
power comes into the building utilizing three voltage sources, or three phases, or three hot wires, with
accompanying neutrals and grounds, it is referred to as 3-phase power.
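The 120º offset between the three waveforms can be checked numerically. The sketch below is an illustration added here (not course material): three balanced sine waves offset by 120º sum to zero at every instant, which is part of why 3-phase distribution is so efficient.

```python
import math

def phase_voltages(t: float, v_peak: float = 1.0, freq_hz: float = 60.0):
    """Instantaneous voltages of the three phases, offset 120 degrees apart."""
    w = 2 * math.pi * freq_hz  # angular frequency in radians per second
    return tuple(v_peak * math.sin(w * t - k * 2 * math.pi / 3) for k in range(3))

# At any instant the three balanced phase voltages cancel.
a, b, c = phase_voltages(0.003)
print(abs(a + b + c) < 1e-9)  # True
```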
Slide 13
Single phase electricity is usually distributed to residential and small commercial customers. The single
phase implies that power comes in with only one hot wire, along with accompanying neutral and ground.
Generating and distributing 3-phase power is more economical than distributing single phase power. Since
the size of the wire affects the amount of current that can pass, it also determines the amount of power that
can be delivered. If a large amount of power were distributed as a single phase, huge heavy transmission
wires would be needed and it would be nearly impossible to suspend them from a pole. It is much more
economical to distribute AC power using 3-phase voltage sources.
Slide 14
120 Volts and 240 Volts AC are the most common single phase voltages supplied to residential customers.
Single phase 240 Volts tends to supply larger domestic appliances, such as clothes dryers, electric cooking
stoves, and water heaters. Single phase 120 Volts is also available in some data centers. Many IT devices,
including computer monitors and individual desktop computers accept 120 Volts. 3-phase 208 Volts power
usually supports commercial environments, including most data centers.
(Please note: In many countries, such as in parts of Europe and Asia, voltages such as 220-240V
and 400V are also common.)
Slide 15
The Watt measures the real power drawn by the load equipment, and is used as a measurement of both
power and heat generated by the equipment. Wattage rating is typically stamped on the nameplate of the
load equipment. However, the nameplate rating is rarely the same as the measured wattage in IT
equipment. Many data centers have metering available on UPS or power distribution units (PDU), or even
on rack mounted power strips all of which allow accurate recording of power at the site.
Slide 16
The Volt-Amps (VA) rating, or apparent power, represents the maximum load that the device in question can
draw. It is the product of the applied AC voltage times the current drawn by the device. VA is used in sizing
and specifying wire sizes, circuit breakers, switchgear, transformers and general power distribution
equipment. VA ratings represent the maximum power capable of being drawn by the equipment. VA ratings
are always greater than or equal to the watt rating of the equipment.
The significance of the difference between Watts and Volt-Amps is that power supplies, wiring, and circuit
breakers may need to be rated to handle more current and more power than what may be expected.
Slide 17
The terms Watts (W) and Volt-Amps (VA) are often used interchangeably when discussing load sizing for
power infrastructure components, such as UPS devices. These terms are however, not the same. The key
to understanding the relationship between Watts and VA is the Power Factor. Watts represent real power
and Volt Amps represent apparent power.
The power factor is the ratio of real power to apparent power. Power factor can be expressed as a number
between 0 and 1 or as a %. If a given UPS has a watts rating of 8 and a VA rating of 10, then its power
factor is .8 (or 80%). A UPS with a power factor of .8 is more efficient than a UPS with a power factor of .7.
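The ratio just described can be sketched in a few lines; `power_factor` is a hypothetical helper written for this illustration, not part of any vendor API:

```python
def power_factor(real_w: float, apparent_va: float) -> float:
    """Power factor = real power (W) / apparent power (VA), between 0 and 1."""
    if apparent_va <= 0:
        raise ValueError("apparent power must be positive")
    return real_w / apparent_va

# The UPS example from the text: 8 W rating, 10 VA rating.
print(power_factor(8, 10))     # 0.8
# The same ratio holds at realistic magnitudes.
print(power_factor(800, 1000)) # 0.8
```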
Next, we will look at one type of electronic switching power supply: Power Factor Corrected.
Slide 18
Power Factor Corrected power supplies were introduced in the mid-1990s and have the characteristic that
Watt and VA ratings are equal. That is, they have a power factor of nearly 1. Power Factor Correction is
simply a method of offsetting inefficiencies created by electrical loads.
All large computing equipment such as servers, routers, switches, drive arrays made after 1996 use the
Power Factor Corrected power supply. Personal computers, small hubs and personal computer accessories
can have a power factor of less than 1.
For a small UPS designed for computer loads which only have a VA rating, it is appropriate to assume that
the Watt rating of the UPS is 60% of the published VA rating.
For larger UPS systems, it is becoming common to focus on the Watt rating of the UPS. State-of-the-art
larger UPS systems are rated for unity power factor. In other words they are designed so that their capacity
in kVA is the same as in kW.
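The sizing rules of thumb above can be expressed as a small sketch (the function name and defaults are ours, added for illustration):

```python
def estimated_watt_rating(va_rating: float, power_factor: float = 0.6) -> float:
    """Estimate the Watt rating of a UPS from its published VA rating.

    Rule of thumb from the course: for a small UPS that publishes only a
    VA rating, assume the Watt rating is 60% of the VA rating. State-of-
    the-art large UPS systems are unity power factor (power_factor=1.0),
    so their capacity in kW equals their capacity in kVA.
    """
    return va_rating * power_factor

print(estimated_watt_rating(1000))          # 600.0 (small 1000 VA UPS)
print(estimated_watt_rating(500_000, 1.0))  # 500000.0 (large unity-PF UPS)
```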
Slide 19
Many different types of power plugs are used throughout the world. Two of the more common plug
standards in data centers are: the International Electrotechnical Commission (IEC) standard, which is based
in Switzerland, but used globally; and the National Electric Manufacturers Association (NEMA) standard,
which is commonly used in North America.
Most plugs in the data center have three prongs and the receptacles are designed to accept these three
prong configurations. In the US, a typical 3-prong plug consists of two flat prongs and one rounded prong.
The larger of the flat prongs is the neutral, the smaller of the two flat prongs is the hot, and the rounded
prong on the bottom is the ground.
The most common plug/receptacle combination for IT equipment is of an IEC design. These receptacles are
often designed in a recessed fashion for safety reasons. The design helps to prevent a person from
touching the pins when they are live.
Also common are plugs and receptacles of the twist lock variety. The plug is twisted to lock into the
receptacle. This is particularly useful if you choose to deploy overhead cabling rather than below the raised
floor cabling. With twist lock, the receptacle is less likely to allow gravity and vibration to dislodge it from its
plug.
Slide 20
Among the most common IEC plugs found in data centers are: the IEC-320-C13 and IEC-320-C14, which
are rated over a range from 100 to 240 Volts AC, and a current of about 10 Amps; the IEC-320-C19 and
IEC-320-C20, which are rated over a range from 100 to 240 Volts AC, and a current range of about 16 to 20
Amps.
Also common are the IEC 309 series of 208 Volt single phase Russell Stoll connectors. The IEC 309 2P3W
208V, 20A for example, is rated at 20 Amps, and the IEC 309 2P3W 208V, 30A is rated at 30 Amps. Clues
to the makeup of the plug can be determined by analyzing the name of the plug. In the case of the IEC 309
2P3W 208V, 30A, for example, the letter “P” identifies the number of poles, the letter “W” identifies the
number of wires. “V” identifies volts and “A” designates the current in amperes.
Receptacles are installed in rack-mounted power strips as well as on power whips, and those plugs are
most commonly attached to power cords on IT equipment.
(Please note: In many countries, such as in parts of Europe and Asia, voltages such as 220-240V and
400V are also common.)
Slide 21
There are many examples of NEMA standard plug types. Each NEMA plug and receptacle type follows a
naming convention. For example, a common plug type may read “L5-15P”.
If the code begins with the letter L, the plug or receptacle locks. If the code does not begin with a letter, the
plug or receptacle does not lock. In this example, the plug locks. The first number can be a digit between 1
and 24, where 3 and 4 are never used. That number represents a certain combination of voltage, number of
poles, number of wires, and whether it is a grounding type plug or not. In this example, the plug is a Number
5 plug. The number after the hyphen indicates the
amperage rating. In this example, the number after the hyphen is 15, which means the plug is rated to
handle 15 Amps. The final letter, being a “P”, indicates that the device is, indeed, a plug. If the device were a
receptacle, the final letter would be an “R”.
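The naming convention just described can be decoded mechanically. The following parser is an illustrative sketch based only on the rules stated above; real NEMA configuration tables carry more detail (such as the voltage and wire count behind each configuration number):

```python
import re

def parse_nema(code: str) -> dict:
    """Decode a NEMA plug/receptacle code such as 'L5-15P'.

    Convention from the course: an optional leading 'L' means locking,
    the first number identifies the voltage/pole/wire configuration,
    the number after the hyphen is the amperage rating, and the final
    letter is 'P' for plug or 'R' for receptacle.
    """
    m = re.fullmatch(r"(L?)(\d{1,2})-(\d+)([PR])", code.upper())
    if not m:
        raise ValueError(f"not a recognizable NEMA code: {code}")
    locking, config, amps, kind = m.groups()
    return {
        "locking": locking == "L",
        "configuration": int(config),
        "amps": int(amps),
        "device": "plug" if kind == "P" else "receptacle",
    }

print(parse_nema("L5-15P"))
# {'locking': True, 'configuration': 5, 'amps': 15, 'device': 'plug'}
```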
Now that we have learned what we need to know about plugs and receptacles, let’s explore some common
areas where power failures can occur.
Slide 22
According to M Technology, Inc., an expert in the field of Probabilistic Risk Assessment, the most common
areas of power system failure in data center electrical infrastructure are: the power distribution unit (PDU)
and its respective circuit breakers at 30%, all other circuit breakers at 40%, UPS failure at 20%, and balance
of system at 10%.
We will now discuss the topic of circuit breakers and their importance in the data center.
Slide 23
A circuit breaker is a piece of equipment, or a type of switch, that is designed to protect electrical equipment
from damage caused by overload or short circuit. Circuit breakers are designed to trip at a given current
level. Unlike fuses and switches, circuit breakers can be reset. Large circuit breakers have adjustable trip
mechanisms, while smaller circuit breakers, designed for branch circuits, have their trip levels internally
preset according to their electrical current rating.
As mentioned earlier, in the data center’s electrical infrastructure, most failures can be traced back to the
circuit breaker. Circuit breakers can fail in a number of ways: failure to close; failure to open under fault
conditions; spurious trip, where a breaker opens with no fault; and failure to operate with the time-current
specifications of the unit.
Slide 24
Circuit breakers are designed to interrupt excessive current flow and come in a wide range of sizes. The
number of times they trip or switch should be monitored as most have a rated lifetime of 1-10 fault current
interruptions.
Slide 25
If you trace the path of power into your data center, from the utility through the transformer and UPS down
to the load, you will see that there are multiple breaker types all along the way. Some are bigger breakers
(600 amps or greater) and some are the commodity type of breakers, such as branch circuit and PDU
breakers. Circuit breaker coordination is important. The breaker closest to the fault should open faster than
the circuit breakers upstream. Since the bigger breakers are often located upstream, the fault could
potentially affect most of the building instead of just part of the building, if the breakers are not properly
coordinated.
Coordination of breakers is complicated and must be done carefully. Both the rating and speed of breakers
must be considered. It is recommended that data center staffs consult with electricians who are well versed
in this area.
Let’s discuss two popular circuit breaker types that may be found in IT equipment: thermal circuit breakers
and magnetic circuit breakers.
Slide 26
Increasing current raises the temperature inside a thermal circuit breaker. If the current is too high, the
thermal circuit breaker gets hot enough to trip the circuit breaker. A common thermal circuit breaker uses a
bimetallic strip to trip the breaker. A bimetallic strip sandwiches two different metals together. Current flows
through the bimetallic strip, and causes it to heat. Because one metal expands faster than the other metal
as the temperature rises, the strip bends. If the current is too high, the metal strip bends enough to break
the contact in the electric circuit.
Slide 27
A magnetic circuit breaker uses an electromagnetic coil to pull a switch when a circuit carries too much
current. As current increases, the electromagnetic coil pulls with greater force against the spring that keeps
the switch closed. When the current is too high for the circuit, the force from the electromagnetic coil
overcomes the force of the spring, and forces the switch contact to break the circuit.
These two breaker types can also be combined into another type of breaker, called a thermal-magnetic circuit
breaker.
Slide 28
Circuit breakers are designed to be either fast acting or slow acting. A circuit breaker may need to switch
short circuit currents as high as 15 times its rated current. A 30 Amp breaker, for example, may need to
switch, in an emergency, 450 or more Amps of current.
Slide 29
Circuit breakers are designed to trip at 110% of their rated current. This allows for normal short-term
overloads such as the start up currents in electrical motors. For example, a 20 Amp circuit breaker is not
guaranteed to trip until the current exceeds 22 Amps. Circuit breaker tripping thresholds may vary according
to design specification or safety code requirements. To avoid downtime and unnecessary circuit breaker
tripping, a circuit breaker needs to be sized according to both its rated current and its tripping current.
Trip settings are adjusted so that the circuit breaker in question will trip in a timely fashion on overload and
before the upstream breaker trips.
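The 110% rule and the 15x short-circuit figure from the previous slides can be sketched as a quick calculation (the helper name is hypothetical, added here for illustration):

```python
def guaranteed_trip_current(rated_amps: float, threshold_percent: float = 110) -> float:
    """Current above which a breaker is guaranteed to trip.

    Per the course, breakers are designed to trip at 110% of their rated
    current, which tolerates short-term overloads such as motor start-up.
    """
    return rated_amps * threshold_percent / 100

# A 20 A breaker is not guaranteed to trip until current exceeds 22 A.
print(guaranteed_trip_current(20))  # 22.0
# From Slide 28: a 30 A breaker may have to interrupt a short circuit
# of up to 15x its rating.
print(30 * 15)  # 450
```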
It is advisable to choose a breaker designed for the characteristics of the load. For example, some breakers
have an “HCAR” rating, which is a rating for heating, cooling and air conditioning applications. Breakers
without this particular rating should not be used for the HVAC systems.
Circuit breakers with delayed action may be needed for heavy electrical loads, such as motors, transformers,
and air conditioners that draw temporarily high surge currents. The circuit breaker needs to be rated high
enough to prevent an electric arc from forming that could jump over the contacts of the switch.
Slide 30
Certain types of circuit breakers are designed to trip a circuit if they detect a small amount of ground current.
These breakers are known as Ground Fault Circuit Interrupters (GFCI), Earth Leakage Circuit Breakers
(ELCI), or Residual-Current Devices (RCD). Because they are highly sensitive to small ground currents, and
therefore pose a risk to availability, GFCI units are not used in data centers; however, they are commonly placed in damp
environments such as swimming pools, bathrooms, kitchens and on construction sites, to protect personnel
from electric shock. Larger data centers use resistor banks to limit possible ground currents to safer levels,
and protect personnel from electric shock.
Next, we’ll discuss why convenience outlets are so important in the data center environment.
Slide 31
A convenience outlet is an outlet which is used for non-computer devices. It is important to provide this
additional resource outlet which can be used for electronic devices that may be necessary for the data
center environment; data center personnel need a place to plug in office equipment or lighting without the
worry of tripping a circuit breaker or taxing the power supply. Installing convenience outlets is a way to
ensure that enough power is provided to supply not only the critical load, but also any additional power that
may be required.
Next, we’ll discuss safety issues such as electrical grounding and ground loops.
Slide 32
Grounding is principally a safety measure to protect against electric shock. The grounded wire is connected
to the exterior of metal cases on appliances to protect against a hot-wire short inside the appliance. If a
short occurs, the ground wire will limit the touch voltage to less than 30 volts and will also provide a return
path for the excessive current to trip the branch circuit breaker. Some wires are considered hot, because
they are not grounded.
Slide 33
Ground loops occur when there is a varying quality of connections to the earth at different points in an
electrical installation. The result is that current may flow in unexpected loops between ground connections.
Ground loops are a potentially hazardous situation. The solution to stopping ground loops is to confirm the
quality of ground connections at all points in an electrical installation.
Now, let’s discuss seven categories of common power problems and their solutions.
Slide 34
Impulsive transients are sudden high peak events that raise the voltage and/or current levels in either a
positive or a negative direction. Electrostatic discharge (ESD) and lightning strikes are both examples of
impulsive disturbances. Impulsive transients can be very fast, happening as quickly as 5 nanoseconds and
lasting less than 50 nanoseconds.
For example, an ESD may have a peak of over 8000 volts, but last less than 4 billionths of a second. The
transient, however, may still be strong enough to damage sensitive electronic equipment.
An approach to solve the problem of impulsive transients is the utilization of a Transient Voltage Surge
Suppressor (TVSS). A TVSS is a device that either absorbs the transient energy, or short circuits the energy
to ground, before it can reach sensitive equipment.
Slide 35
Motors turning on or off commonly cause oscillatory transients for power systems. The voltage quickly rises
above its normal level, and then gradually fades back to its normal level over several wave cycles.
Slide 36
Interruptions occur when there is a temporary break in the power supplied. There are four types of
interruptions: Instantaneous (0.5 cycles to 30 cycles), Momentary (30 cycles to 2 seconds), Temporary (2
seconds to 2 minutes), and Sustained (longer than 2 minutes). An uninterruptible power supply (UPS) can
provide short-term backup power during an interruption.
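The four duration bands above can be turned into a small classifier. This is an illustrative sketch, assuming a 60 Hz supply for the cycle-to-seconds conversion (a 60 Hz cycle lasts 1/60 s):

```python
def classify_interruption(duration_s: float, freq_hz: float = 60.0) -> str:
    """Classify a power interruption by duration, per the course's four types."""
    cycle = 1.0 / freq_hz
    if duration_s < 0.5 * cycle:
        return "sub-cycle event (not classified as an interruption)"
    if duration_s <= 30 * cycle:      # 0.5 to 30 cycles
        return "instantaneous"
    if duration_s <= 2:               # 30 cycles to 2 seconds
        return "momentary"
    if duration_s <= 120:             # 2 seconds to 2 minutes
        return "temporary"
    return "sustained"                # longer than 2 minutes

print(classify_interruption(0.1))  # instantaneous (6 cycles at 60 Hz)
print(classify_interruption(1.0))  # momentary
print(classify_interruption(60))   # temporary
print(classify_interruption(300))  # sustained
```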
Slide 37
A sag or dip is a reduction of AC voltage at a given frequency for a duration of 0.5 cycles to 1 minute’s time.
Sags are usually caused by system faults, and are also often the result of switching on loads with heavy
startup currents. Common causes of sags include starting large loads, such as one might see when they
first start up a large air conditioning unit, and remote fault clearing performed by utility equipment. Power
line conditioners and UPSs can compensate for sags or dips.
Slide 38
According to the IEEE, Undervoltage is “… a Root Mean Square (RMS) decrease in the AC voltage, at the
power frequency, for a period of time greater than one minute”. An undervoltage is the result of long-term
problems that create sags. The term “brownout” has been in common usage in describing this problem, but
has been superseded because the term is ambiguous in that it also refers to commercial power delivery
strategy during periods of extended high demand. Undervoltages can overheat motors and can lead to the
failure of non-linear loads such as computer power supplies. Power line conditioners and UPSs can
compensate for undervoltages.
Slide 39
A swell, or surge, is the reverse form of a sag, having an increase in AC voltage for a duration of 0.5 cycles
to 1 minute’s time. For swells, high-impedance neutral connections, sudden load reductions, and a single-
phase fault on a 3-phase system are common sources. A swell is also prevalent when large loads are
switched out of a system. Power line conditioners and UPSs can compensate for swells.
Slide 40
According to the IEEE, overvoltage is “an RMS increase in the AC voltage, at the power frequency, for
durations greater than a few seconds.” An overvoltage is common in areas where supply transformer tap
settings are incorrectly set, and where loads have been reduced and commercial power systems continue to
compensate for load changes that are no longer necessary. This is common in seasonal regions where
communities diminish during off-season. Overvoltage conditions can create high current draw and
unnecessary tripping of downstream circuit breakers, as well as overheating and stress on equipment.
Power line conditioners and UPSs can compensate for overvoltage.
Slide 41
Many different causes of waveform distortion exist. DC Offset happens when direct current is added to an
AC power source. DC Offset can damage electrical equipment, such as motors and transformers, by
overheating them.
Harmonic waveforms are another form of waveform distortion. Harmonics appear on the power distribution
system as distorted current. Keep in mind that all equipment that does not have the advantage of modern
harmonic-correction features should be isolated on separate circuits.
Slide 42
Voltage fluctuation is a systematic variation of the voltage waveform or a series of random voltage changes
of small dimensions, namely 95 to 105% of nominal at a low frequency, and generally below 25 Hz. Power
line conditioners and UPSs can compensate for voltage fluctuations.
Slide 43
Frequency variation is extremely rare in stable, utility power systems, especially systems interconnected
through a power grid. Where sites have dedicated standby generators or poor power infrastructure,
frequency variation is more common especially if the generator is heavily loaded. IT equipment is frequency
tolerant, and generally not affected by minor shifts in local generator frequency.
Next, we will follow the path of power distribution in the data center.
Slide 44
Standby Power can be defined as any power source available to the data center that takes over the function
of supplying power when utility power is unavailable.
Two common forms of standby power are mechanical generators that use electromagnetism to produce
electricity, and electrochemical systems which use batteries and fuel cells to generate electrical current.
Mechanical generator systems provide power on large and small scales, for entire cities or for individual use.
Electrochemical generation is typically for smaller or temporary use.
So, how is power distributed in the data center? Let’s explore this concept next.
Slide 45
Electricians often refer to one-line diagrams. A one-line diagram can range from very simple to very complex. At a minimum, it
should illustrate the primary electrical components of the electrical system and illustrate how they link and
interact with each other.
This one-line diagram lets us see how electrical power is distributed in the data center from a server plug to outlet
strips to Power Distribution Units (PDU) to UPS and bypass to Automatic Transfer Switch to the primary
power source (Utility) to the emergency power source (Generator).
Slide 46
The utility provides the primary electrical power source for the data center. Ideally, multiple utility feeds
should be provided from separate sub-stations or power grids. While not essential, this action will provide
back-up and redundancy.
An emergency, back-up power source, in the form of a generator, can be positioned to bear the load of both
data center components, as well as all essential support equipment, such as air conditioners, in case of
power disruption.
Slide 47
A circuit is a path for electrical current to flow. A branch circuit is one of two or more circuits whose main
power is connected through the same main switch. Each branch circuit should have its own grounding wire.
All wires must be of the same gauge.
An uninterruptible power supply, or UPS, is a device or system that maintains a continuous supply of electric
power to certain essential equipment that must not be shut down unexpectedly. The UPS equipment is
inserted between a primary power source, such as a commercial utility, and the primary power input of
equipment to be protected, for the purpose of eliminating the effects of a temporary power outage and
transient anomalies.
An automatic transfer switch is a switch that will automatically switch the power supply from one power
source to another, in case of power disruption or bypass mode. For example, if the utility fails, the automatic
transfer switch would immediately switch to UPS or generator power.
Slide 48
A Power Distribution Unit (PDU) is a device that distributes electric power, usually by taking a higher voltage
and amperage and reducing it to more common and useful levels, for example from 220V 30A single phase to
multiple 110V 15A or 110V 20A plugs. It is used in computer data centers and sometimes has features like
remote monitoring, and control, down to plug level. (Please note: In many countries, such as in parts of
Europe and Asia, voltages such as 220-240V and 400V are also common.)
An outlet strip is a strip of sockets which allows multiple devices to be plugged in at one time, and usually
includes a switch to turn all devices on and off. In a few cases, they may even have all outlets individually
switched. Outlet strips are often used when many electrical devices are in close proximity, especially with
audio/video and computer systems.
A server plug is the power plug or other type of electrical connector which mates with a socket or jack, and
in particular, is used with electrical or electronic equipment in the data center.
Slide 49
To summarize, let’s review some of the information that we have covered throughout the course.
Slide 50
Thank you for participating in this course.
Generator Fundamentals
Transcript
Slide 1
Welcome to the Data Center University™ course on Generator Fundamentals.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the Notes tab to read a transcript of the narration.
Slide 4: Introduction
Consider these statistics. According to Contingency Planning Research, power-related events such as
blackouts and surges account for 31% of computer downtime episodes lasting more than 12 hours, and
power failures and surges account for 45.3% of data loss; according to IDC, power disturbances account for
about 33% of all server failures. A standby generator is one critical equipment component that will keep you
from becoming one of these statistics. Understanding the basic functions and concepts of standby
generator systems helps provide a solid foundation allowing IT professionals to successfully specify, install,
and operate critical facilities. This course is an introduction to standby generators and the subsystems that
power a facility’s critical electrical loads when the utility cannot.
Slide 5: Standby Generators
A standby generator system is a combination of an electrical generator and a mechanical engine mounted
together to form a single piece of equipment. The components of a generator include the prime mover, the
alternator, the governor, and the distribution system. The distribution system is made up of several
subcomponents which include the Automatic Transfer Switch (ATS) and associated switchgear and
distribution. In many instances, generators also include a fuel tank and are equipped with a battery and
electric starter.
As we begin the course let us first focus on the internal combustion engine or the prime mover.
It is referred to as a 4-stroke engine because of the four distinct stages that occur in the combustion cycle.
These stages include intake of the air/fuel mixture, compression of that mixture, combustion or explosion,
and exhaust. When discussing generators, the 4-stroke engine is generally referred to as the prime mover.
The following slide describes the core attributes of the prime mover.
Slide 8: Fuel
There are four main fuels used to power generators. These include diesel, natural gas, liquid petroleum
(LP), and gasoline. The selection of a fuel type depends on variables such as storage, cost, and
accessibility. Generator systems with diesel or natural gas engines are the most common standby power
generators utilized to support data centers. Fuel availability generally dictates the type of standby generator
selected. For example, if a generator is located in an isolated area where public utilities are not available, LP
or diesel fuel are logical choices. Additionally, the generator’s fuel type, as well as the magnitude of potential
step-load changes, or whether the generator will be expected to support an instantaneous change in load
current, from zero to full load for example, will influence the selection of the governor. Because these
factors contribute to the accuracy and stability of the prime mover’s speed, they must be considered in the
overall design.
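The fuel-availability logic described above can be reduced to a simple decision. This is an illustrative Python sketch only; the function name and fuel lists are ours, not part of the course material:

```python
def candidate_fuels(utility_gas_available):
    """Sketch of the fuel-selection logic in the text: where public
    utilities (piped natural gas) are unavailable, on-site storable
    fuels such as LP or diesel become the logical choices."""
    if utility_gas_available:
        return ["natural gas", "diesel", "LP", "gasoline"]
    return ["diesel", "LP"]

print(candidate_fuels(False))  # isolated site -> ['diesel', 'LP']
```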
Let’s review some of the advantages and disadvantages of the different fuel types.
Slide 9: Diesel
Diesel fuel is often chosen in many applications because of easy onsite storage, fewer problems with long
term storage, reduced fire hazard, and more operating hours between overhauls. Disadvantages to using
diesel fuel are its low volatility at low ambient temperatures and the fact that it does not burn as cleanly as
natural gas or liquid petroleum, which gives it potentially harmful effects on the environment.
Slide 12: Gasoline
Gasoline is often used in smaller engine generator sets because it is readily available and gasoline-powered
engines start more easily than diesel engines in cold temperatures. The disadvantages are that stored
gasoline is a fire hazard and that long-term storage and usage of old gasoline can be detrimental to the
performance of the generator engine. Lastly, there is the option of using dual-fuel engines. For example, a
generator capable of using either natural gas or liquid petroleum offers that much more flexibility when
considering environmental safety needs and the need for redundant power.
Now that we have examined fuel types, let’s look at another important aspect of generator function: cooling.
long runtime must be supported. This is because fuel lines and filters can be isolated and changed while
the engine remains running. Not having spare parts for filters and other "consumables" can result in
downtime. Proactive monitoring of these filters is done with Differential Pressure Indicators. They show the
pressure difference across a filter or between two fuel lines during engine operation. When applied to air
filters, these proactive monitoring devices are known as Air Restriction Indicators. These provide a visual
indication of the need to replace a dry-type intake air filter while the generator engine runs.
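The go/no-go decision such an indicator supports can be modeled in a few lines. This is an illustrative sketch; the threshold value is hypothetical, and real service limits come from the engine and filter manufacturer's documentation:

```python
def filter_needs_service(dp_inches_h2o, limit_inches_h2o=20.0):
    """Flag a dry-type intake air filter for replacement when the
    measured differential pressure across it reaches a service limit.
    The 20 in. H2O default is purely illustrative."""
    return dp_inches_h2o >= limit_inches_h2o

# A clean filter shows low restriction; a loading filter shows rising DP.
print(filter_needs_service(6.5))   # clean filter -> False
print(filter_needs_service(22.0))  # restricted filter -> True
```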
The majority of generator systems use a battery-operated starter motor, as in automotive applications,
although pneumatic or hydraulic alternatives are sometimes found on the heaviest prime movers. The
critical element in the conventional starter is clearly the battery system. Note, however, that the battery-charging
alternator present on some engines does nothing to prevent battery discharge during idle periods.
Providing a separate, automatic charging system with a remote alarm is considered a best practice. It is
also essential to keep the battery warm and corrosion-free.
Engine block heaters also contribute to the startup success rate by reducing the frictional forces that the
starter motor must work against when energized. Numerous studies have found startup failures to be the
leading cause of generator system failures.
Conscientious maintenance and design of the starter motor are absolutely critical to achieving a respectable
success rate for generator startups.
Let’s start by looking at some of these improvements used in today’s data center generators.
The brushless designation refers to the fact that this design requires no contacts be placed against any
revolving parts to transfer electrical energy to or from the components. Brushes in motors and very small
generators may still be an acceptable design, but predictably the brushes wear out with use, and are
impossible to inspect in a proactive manner. A large generator design that relies on brushes is not up to the
reliability standards needed for mission-critical operation. When a generator is described as self-excited, it
means that the electricity used to create the electromagnetic field is generated within the alternator itself,
thereby allowing the alternator to produce large amounts of electricity with no other energy than what is
provided by the prime mover.
Slide 22: The Governor
An isochronous (same-speed) governor design maintains constant speed regardless of load level. Small
variations in the speed of the prime mover still occur, and their extent is a measure of the governor’s
stability. Today, governor technology exists to maintain frequency regulation to within ± 0.25% with
response times to changing loads on the order of 1 to 3 seconds. Modern electronic solid-state designs
deliver high reliability and the needed frequency regulation for sensitive loads.
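To make the ±0.25% figure concrete, here is a small illustrative Python helper (ours, not from the course) that computes the frequency window such a governor must hold:

```python
def frequency_band(nominal_hz, regulation_pct):
    """Return the (low, high) frequency window an isochronous governor
    must hold, given its regulation spec as a percentage."""
    delta = nominal_hz * regulation_pct / 100.0
    return nominal_hz - delta, nominal_hz + delta

# +/-0.25% regulation on a 60 Hz system keeps output within 59.85-60.15 Hz.
low, high = frequency_band(60.0, 0.25)
print(round(low, 2), round(high, 2))  # 59.85 60.15
```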
Slide 24: Voltage Regulator
The basic function of a voltage regulator is simply to control the voltage produced at the output of the
alternator. The operation of the voltage regulator is vital to critical loads dependent on computer grade
power. The goal is to configure a system with an appropriate response time to minimize sags and surges
that occur as the load changes. Another issue to be aware of is the behavior of the regulator when
subjected to non-linear loads such as older switch-mode power supplies. Non-linear loads draw current in a
manner that is inconsistent with the voltage waveform, while resistive loads (like a light bulb) draw current in
sync with the voltage waveform. Non-linear loads can interact negatively with a generator system, thereby
jeopardizing the availability of the critical load during standby operation.
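One way to see the difference between linear and non-linear current draw is to compare crest factors (the peak-to-RMS ratio of the current). The sketch below is illustrative only; the "pulsed" waveform is a crude stand-in for a rectifier-style load, not a model of any specific power supply:

```python
import math

def crest_factor(samples):
    """Peak-to-RMS ratio of a sampled waveform."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

n = 1000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
# Crude stand-in for a non-linear load: current flows only near the
# voltage peaks (|v| > 0.9 of peak), otherwise zero.
pulsed = [s if abs(s) > 0.9 else 0.0 for s in sine]

print(round(crest_factor(sine), 2))    # ~1.41 for a linear (resistive) load
print(round(crest_factor(pulsed), 2))  # noticeably higher for the pulsed draw
```

The higher crest factor of the pulsed waveform is one reason non-linear loads stress a generator's voltage regulator more than resistive loads of the same RMS current.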
The ATS also re-transfers the load to utility when normal conditions are restored. Other common features of
ATS-related equipment include automatic generator test scheduling and monitoring of important cool-down
cycles for the generator after the utility is restored. Traditionally this hardware is sourced from a variety of
vendors, including the generator manufacturers, distribution switchgear manufacturers, and specialists in
ATS design.
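The transfer and re-transfer behavior just described can be reduced to a simple state choice. This is an illustrative sketch of ATS logic, not vendor firmware:

```python
def ats_position(utility_ok, generator_ready):
    """Simplified automatic transfer switch (ATS) logic: prefer utility,
    transfer to generator when utility fails and the set is ready, and
    re-transfer back once normal utility conditions are restored."""
    if utility_ok:
        return "utility"
    return "generator" if generator_ready else "open (no source)"

print(ats_position(True, False))   # utility
print(ats_position(False, True))   # generator
print(ats_position(False, False))  # open (no source)
```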
Now that we have explored the topic of environmental pollution, let’s take a closer look at a few
organizations that set the regulatory standards for standby generators.
When you consider that malfunctions can be almost completely eliminated with proper preventative
maintenance, the importance of such preventative steps hits home.
Here are a few steps to take to ensure that your generator is ready when you need it.
• It’s important to note that a good preventative maintenance program is going to require shutting off
all power for approximately 10-20 hours once a year. This allows the tightness of all
medium voltage (MV) and switchgear terminations to be checked
• If a total shutdown is not a possibility, thermal imaging can be used to detect any hot spots caused
by electrical terminations and connections
• The trip settings of major circuit breakers should be tested
• Transformers and cables should be tested, and lastly
• Coolant samples should be taken for signs of insulation deterioration
Slide 33: Summary
To summarize, let’s review some of the information that we have covered throughout the course:
• A standby generator system is a combination of an electrical generator and an engine mounted
together to form a single piece of equipment.
• The most commonly used engine for a standby generator system is the four stroke engine also
known as the prime mover.
• There are four main fuels used to power generators: diesel, natural gas, liquid
petroleum (LP), and gasoline.
• To ensure the reliability of your generator, it is advised that a detailed maintenance schedule be
maintained.
Optimizing Cooling Layouts for the Data Center
Transcript
Slide 1
Welcome to the Data Center University™ course: Optimizing Cooling Layouts for the Data Center.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 4: Introduction
Whenever electricity is being consumed, heat is being generated. Excess heat in the data center can pose
a significant threat to availability, therefore maintaining appropriate temperature and humidity levels is
critical. A typical data center has six to eight times the heat density of a normal office space, and the
temperature can differ dramatically throughout that space. In addition, while a normal office space may
require two air changes per hour to maintain temperature, a high density data center may require up to thirty
air changes per hour. Striking a balance between the complex and drastic heat output of IT equipment and
the strict temperature-range thresholds for optimal system performance creates a significant cooling
challenge for IT and Facility Managers. Humidity and temperature levels outside the recommended range
can rapidly deteriorate sensitive components inside the computers, making them vulnerable to future failures.
As high-density equipment becomes more prevalent in IT rooms, proper configuration of cooling equipment
becomes paramount to the availability of the data center. The concept of proper cooling not only includes
the supply of adequate cool air, but also should take into account the distribution of that cool air, and the
removal of hot air. Data center cooling has emerged as a predominant area of focus for today’s IT and
Facility Managers.
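The air-change figures quoted above translate directly into airflow requirements. Here is a hypothetical worked example in Python (the room dimensions are made up for illustration):

```python
def required_airflow_cfm(floor_area_ft2, ceiling_height_ft, air_changes_per_hour):
    """Cubic feet per minute of supply air needed to achieve a target
    number of air changes per hour (ACH) in a room of the given size."""
    room_volume_ft3 = floor_area_ft2 * ceiling_height_ft
    return room_volume_ft3 * air_changes_per_hour / 60.0

# Hypothetical 2,000 ft^2 room with a 10 ft ceiling:
office = required_airflow_cfm(2000, 10, 2)        # ~667 CFM at 2 ACH
data_center = required_airflow_cfm(2000, 10, 30)  # 10,000 CFM at 30 ACH
print(round(office), round(data_center))
```

The fifteen-fold jump in airflow for the same room is why cooling layout, and not just cooling capacity, dominates high-density data center design.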
Slide 5: Introduction
There are many facets of data center cooling. Types of cooling equipment, humidity levels, raised floor
designs, floor tile cooling configurations, and cabling arrangements all affect data center cooling. In this
course, we will specifically address how the layout of cooling equipment affects data center cooling.
However if you wish to learn more about these additional cooling factors, please refer back to the Data
Center University Course catalog, where these topics are addressed in more detail.
1. Flooded Air Distribution System
2. Locally Ducted System
3. Fully Ducted System
The first heat removal approach involves the use of air cooled computer room air conditioners, which are
widely used in IT environments of all sizes and are considered the “staple” for small and medium rooms.
This type of system is often referred to as a DX (Direct Expansion) or split system. In an air
cooled system, half the components of the refrigeration cycle are in the computer room air conditioner (also
known as a CRAC unit) and the rest are outdoors in the air cooled condenser. Refrigerant, typically R-22,
circulates between the indoor and outdoor components in pipes called refrigerant lines. Heat from the IT
environment is “pumped” to the outdoor environment using this circulating flow of refrigerant.
The advantages of this system include low overall cost, and ease of maintenance. One of the prime
disadvantages revolves around the need for refrigerant piping to be installed in the field. Only properly
engineered piping systems that carefully consider the distance and change in height between the IT and
outdoor environments will deliver reliable performance. In addition, refrigerant piping cannot be run long
distances reliably and economically. Finally, multiple computer room air conditioners cannot be attached to
a single air cooled condenser.
The Air Cooled DX system is commonly used in wiring closets, computer rooms and small-to-medium data
centers with moderate availability requirements.
One of the key advantages of air cooled self-contained systems is that they have the lowest installation cost.
No components need to be installed on the roof or outside of the building. In addition, all refrigeration cycle
components are contained inside one unit as a factory-sealed and tested system for highest reliability.
Disadvantages of this system include less heat removal capacity per unit compared to other configurations.
Also, air routed into and out of the IT environment for the condensing coil usually requires ductwork and/or
dropped ceiling.
Air cooled self-contained systems are commonly used in wiring closets, laboratory environments and
computer rooms with moderate availability requirements. They are sometimes utilized to address hot spots
in data centers.
Slide 12: Glycol Cooled Systems
Glycol cooled systems contain all refrigeration cycle components in one enclosure (like a self-contained
system) but replace the bulky condensing coil with a much smaller heat exchanger. The heat exchanger
uses flowing glycol (a mixture of water and ethylene glycol, similar to automobile anti-freeze) to collect heat
from the refrigerant and transport it away from the IT environment. Heat exchangers and glycol pipes are
always smaller than condensing coils (2-piece air cooled systems) and condenser air ducts (self-contained
air cooled systems) because the glycol mixture has the capability to collect and transport much more heat
than air does. The glycol flows via pipes to an outdoor-mounted device called a fluid cooler. Heat is
rejected to the outside atmosphere as fans force outdoor air through the warm glycol-filled coil in the fluid
cooler. A pump package (pump, motor and protective enclosure) is used to circulate the glycol in its loop to
and from the computer room air conditioner and fluid cooler.
Advantages:
• The entire refrigeration cycle is contained inside the computer room air conditioning unit as a
factory-sealed and tested system for highest reliability with the same floor space requirement as a
two piece air cooled system.
• Glycol pipes can run much longer distances than refrigerant lines (air cooled system) and can
service several computer room air conditioning units from one fluid cooler and pump package.
• In cold locations, the glycol within the fluid cooler can be cooled so much (below 50°F [10°C]) that
it can bypass the heat exchanger in the CRAC unit and flow directly to a specially installed
economizer coil. Under these conditions, the refrigeration cycle is turned off and the air that flows
through the economizer coil, now filled with cold flowing glycol, cools the IT environment. This
process is known as “free cooling” and provides excellent operating cost reductions when utilized.
Disadvantages:
• Additional components are required (pump package, valves) and this increases capital and
installation costs when compared with air cooled DX systems.
• Maintenance of glycol volume and quality within the system is required.
• This system introduces an additional source of liquid into the IT environment.
Commonly Used:
• In computer rooms and small-to-medium data centers with moderate availability requirements
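The free-cooling condition noted in the advantages list can be expressed as a one-line check. The 10°C (50°F) threshold comes from the course text; the function itself is an illustrative sketch, not a real controller:

```python
def free_cooling_available(glycol_temp_c, threshold_c=10.0):
    """True when fluid-cooler glycol is cold enough (below ~10 C / 50 F)
    to bypass the refrigeration cycle and feed an economizer coil
    directly, per the 'free cooling' mode described in the course."""
    return glycol_temp_c < threshold_c

print(free_cooling_available(4.0))   # cold winter day -> True
print(free_cooling_available(18.0))  # warm day -> False
```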
However, there are two important differences between a glycol cooled system and a water cooled system:
1. A water (also called condenser water) loop is used instead of glycol to collect and transport heat
away from the IT environment
2. Heat is rejected to the outside atmosphere via a cooling tower instead of a fluid cooler.
A cooling tower rejects heat from the IT room to the outdoor environment by spraying warm
condenser water onto sponge-like material (called fill) at the top of the tower. The water spreads
out and some of it evaporates away as it drips and flows to the bottom of the cooling tower (a fan is
used to help speed up the evaporation by drawing air through the fill material). In the same
manner as the human body is cooled by the evaporation of sweat, the small amount of water that
evaporates from the cooling tower serves to lower the temperature of the remaining water. The
cooler water at the bottom of the tower is collected and sent back into the condenser water loop via
a pump package.
• Condenser water loops and cooling towers are usually not installed solely for the use of
water cooled computer room air conditioning systems. They are usually part of a larger
system and may also be used to reject heat from the building’s comfort air conditioning
system (for cooling people) and water chillers (water chillers are explained in the next
section).
Advantages:
• All refrigeration cycle components are contained inside the computer room air conditioning unit as
a factory-sealed and tested system for highest reliability.
• Condenser water piping loops can easily run long distances and almost always service many
computer room air conditioning units and other devices from one cooling tower.
• In leased IT environments, usage of the building’s condenser water is generally less expensive
than chilled water (chilled water is explained in the next section).
Disadvantages:
• High initial cost for cooling tower, pump, and piping systems.
• Very high maintenance costs due to frequent cleaning and water treatment requirements.
• Introduces an additional source of liquid into the IT environment.
• A non-dedicated cooling tower (one used to cool the entire building) may be less reliable than a
cooling tower dedicated to the computer room air conditioner.
Commonly Used:
• In conjunction with other building systems in small, medium and large data centers with moderate-
to-high availability requirements.
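The evaporation mechanism described earlier can be quantified with a rough energy balance, Q = ṁ·h_fg. This sketch uses a textbook approximation for the latent heat of vaporization of water; it is an illustration of the principle, not a tower-sizing calculation:

```python
def evaporation_rate_kg_per_hr(heat_rejected_kw, h_fg_kj_per_kg=2400.0):
    """Approximate water evaporated by a cooling tower to reject a given
    heat load, using the latent heat of vaporization (~2,400 kJ/kg near
    typical condenser-water temperatures; an engineering approximation)."""
    return heat_rejected_kw / h_fg_kj_per_kg * 3600.0

# Rejecting 100 kW of IT heat evaporates roughly 150 kg (about 150 L)
# of water per hour -- one reason towers need make-up water and treatment.
print(round(evaporation_rate_kg_per_hr(100)))  # 150
```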
In a chilled water system the components of the refrigeration cycle are relocated from the computer room air
conditioning systems to a device called a water chiller.
The function of a chiller is to produce chilled water (water refrigerated to about 46°F [8°C]). Chilled water is
pumped in pipes from the chiller to computer room air handlers (also known as CRAH units) located in the
IT environment. Computer room air handlers are similar to computer room air conditioners in appearance
but work differently. They cool the air (remove heat) by drawing in warm air from the computer room
through chilled water coils filled with circulating chilled water. Heat removed from the IT environment flows
out with the (now warmer) chilled water exiting the CRAH and returning to the chiller. At the chiller, heat
removed from the returning chilled water is usually rejected to a condenser water loop (the same condenser
water that water cooled computer room air conditioners use) for transport to the outside atmosphere.
Chilled water systems are usually shared among many computer room air handlers and are often used to
cool entire buildings.
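The heat a CRAH removes is tied to chilled-water flow by the relation Q = ṁ·cp·ΔT. Here is an illustrative Python sketch; the 100 kW load and 6°C coil temperature rise are made-up example values:

```python
def chilled_water_flow_l_per_s(heat_load_kw, delta_t_c):
    """Chilled-water flow needed for a CRAH to remove a heat load,
    from Q = m_dot * cp * dT with cp of water ~= 4.19 kJ/(kg*K)
    and roughly 1 kg of water per litre."""
    cp_kj_per_kg_k = 4.19
    return heat_load_kw / (cp_kj_per_kg_k * delta_t_c)

# Illustrative: a 100 kW load with a 6 C water temperature rise
# needs roughly 4 L/s of chilled water.
print(round(chilled_water_flow_l_per_s(100, 6), 1))  # ~4.0
```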
Advantages:
• Computer room air handlers generally cost less, contain fewer parts, and have greater heat
removal capacity than computer room air conditioners with the same footprint.
• Chilled water piping loops are easily run very long distances and can service many IT
environments (or the whole building) from one chiller plant.
• Chilled water systems can be engineered to be extremely reliable.
• Chilled water systems have the lowest cost per kW for large installations.
Disadvantages:
• Chilled water systems generally have the highest capital costs for installations below 100kW of
electrical IT loads.
• CRAHs generally remove more moisture from data center air than their CRAC counterparts,
requiring more money be spent on humidifying the room in many climates.
• Introduces an additional source of liquid into the IT environment.
Commonly Used:
• In conjunction with other systems in medium and large data centers with moderate-to-high
availability requirements or as a high availability dedicated solution in large data centers.
Ceiling mounted systems are small (300-500 lb [136-227 kg]) precision cooling devices suspended
from the IT room’s structural ceiling. They cool 3-17 kW of computer equipment and utilize any of the 5 IT
environment heat removal methodologies. One of the key benefits of ceiling mounted systems is that they
do not require floor space in the IT environment. A drawback however, is that installation and maintenance
activities are more complicated due to their overhead placement. As a result, it is recommended that IT
professionals, facilities personnel and manufacturer’s representatives or mechanical contractors handle the
specification, installation and maintenance of ceiling mounted precision cooling systems.
Floor mounted precision cooling systems usually offer the greatest range of features and capabilities. They
are increasingly being used to cool or to assist in the cooling of smaller IT environments as power
consumption of computer equipment continues to increase.
cooling systems is highly dependent on the existing electrical, mechanical and structural capabilities of the
building they are to be operated in. For this reason it is important for IT professionals to work closely with
facilities management and manufacturer’s representatives during the specification process. Often the
services of a State-Registered Professional Engineer are required to design and certify the solution. Most
mechanical contracting firms familiar with IT environments can install and if desired, maintain the solution.
Recent developments in large floor mounted systems have reduced their energy consumption and the
overall space they require in the computer room or data center. Their outer dimensions and appearance
have changed so they fit in spaces sized for IT rack enclosures. This allows for operational cost savings
and more flexibility in IT environment planning.
The Air Cooled DX System requires roof access, an air cooled condenser and refrigerant piping for both the
ceiling and floor mounted configurations. The floor and ceiling configurations are typically found in computer
rooms, and the floor mounted arrangement is also found in medium data center applications.
The Air Cooled Self-Contained System must have a dropped ceiling or ducts installed for ceiling mounted
configurations. The floor mounted system also requires a dropped ceiling for condenser air tubes. In
addition, large floor mounted systems require outdoor heat rejection components. This system is typically
found in wiring closets, computer rooms and small data centers.
With Glycol Cooled Systems, both ceiling and floor mounted arrangements require the building to have roof
access and a 10’ (3 m) floor-to-structural-ceiling height. A fluid cooler, pump package, and glycol piping are also
required. Ceiling mounted systems are found in computer rooms and small data centers; while floor
mounted versions can be used in medium and large data center installations.
Water cooled systems require the building to have a 10’ (3 m) floor-to-structural-ceiling height for ceiling
arrangements, with a further requirement of a hookup to the building’s condenser water. In floor mounted
configurations, the building must have a condenser water system with adequate capacity. Ceiling mounted
installations are not commonly seen; however, floor mounted configurations can be found in medium and
large data center applications.
In Chilled Water Systems, the ceiling mounted and floor mounted arrangements require a reliable chilled
water system and hookup, with the ceiling mounted configuration additionally requiring a 10’ (3 m) floor-to-structural-ceiling
height. Ceiling mounted chilled water systems can be found in wiring closets, computer rooms and
small data centers, while the floor mounted version can be used in computer rooms and small, medium
and large data center installations.
racks and large amounts of equipment are a factor, careful planning of room layouts and continued
diligence in maintaining those layouts are imperative for maximum performance.
Before we can understand how rack placement affects performance, let’s first discuss how racks and
equipment components housed within the racks are affected by airflow in the data center.
Slide 28: Hot Aisle/ Cold Aisle
In a hot aisle/cold aisle configuration, in addition to racks being placed face to face, CRAC units must also
be strategically placed to create a cold aisle by properly distributing the cold air to the face of the racks, and
to maximize the return of hot exhaust air out of the back of the racks and into the hot aisle.
In a large data center application, this would mean every other row would be forward facing to create the hot
aisle/cold aisle arrangement.
Slide 32: Thank You!
Thank you for participating in this course.
Power Redundancy in the Data Center
Transcript
Slide 1
Welcome to the Data Center University™ course: Power Redundancy in the Data Center.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 4: Introduction
A key element relative to all data centers is the need for power. In most countries, the public power
distribution system is fairly reliable. However, studies have shown that even the best utility systems are
inadequate to meet the strict operating needs of critical nonstop data processing functions. Most companies,
when faced with the likelihood of downtime and data processing errors caused by faulty utility power,
choose to implement a back-up strategy for their mission-critical equipment.
Slide 5: Introduction
These strategies may involve the inclusion of additional hardware such as Uninterruptible Power Supplies
(or UPSs) and generators, and system designs such as N+1 configurations, and dual-corded equipment.
This course will address various strategies to consider when planning for redundancy in the data center.
Slide 6: Introduction
In our rapidly changing global marketplace, the demand for faster, more robust technologies in a smaller
footprint is ever-increasing. In addition, there is a further requirement that these technologies be highly
available as well.
Slide 7: Introduction
Availability is the primary goal of all data centers and networks. Five 9’s of availability (99.999%) is a
standard most IT professionals strive to achieve. Availability is the estimated percentage of time that
electrical power will be online and functioning properly to support the critical load. It is of critical importance,
and is the foundation upon which successful businesses rely. According to the National Archives and
Records Administration in Washington, D.C., 93% of businesses that have lost availability in their data
center for 10 days or more have filed for bankruptcy within one year. The cost of one episode of downtime
can cripple an organization. The availability of the public power distribution, while sufficient for many
organizations, is ill-equipped to support mission-critical functions. Therefore, planning for redundancy or the
introduction of alternate or additional means of support is a necessity. Redundancy can be thought of as a safety net, or Plan B, should utility power fail or be inadequate. One of the ways to increase data center power availability is through a UPS.
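As a quick illustration of what "five 9's" implies, the allowed annual downtime at a given availability percentage can be sketched with simple arithmetic (the snippet below is illustrative, not course material):

```python
# Annual downtime permitted at a given availability level.
# Assumes a 365-day year; purely illustrative arithmetic.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability_pct):
    """Minutes of downtime per year at the given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three 9's", 99.9), ("four 9's", 99.99), ("five 9's", 99.999)]:
    print(f"{label} ({pct}%): {downtime_minutes(pct):.2f} minutes of downtime per year")
```

Five 9's allows only about five minutes of downtime per year, which is why planning beyond utility power alone is necessary.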
Slide 8: Uninterruptible Power Supplies
An uninterruptible power supply, or UPS, in simple terms, is a device which provides battery back-up power to IT equipment should utility power be unavailable or inadequate. UPSs provide power in such a
way that the transition from utility power to battery power is seamless and uninterrupted. UPSs can range in
size and capacity to provide power to small individual desktop computers, all the way up to large megawatt
data centers. Many UPS systems incorporate software management capabilities which allow for data
saving and unattended shutdown should the need or application warrant it.
Understanding how these various UPS designs work is critical to choosing the best UPS for a particular
application.
The Standby UPS is the most common design configuration used for personal computers. The operating principle behind the standby UPS is that it contains a transfer switch which, by default, uses filtered AC power as the primary power source. When AC power fails, the UPS switches to the battery by way of the transfer switch. The battery-to-AC power converter, also known as the inverter, is not always on, hence the name 'standby.'
The primary benefits of this type of UPS are high efficiency, small size, and low cost. With proper filter and surge circuitry, some models are also able to provide adequate noise filtration and surge suppression. The limitations are that this type of UPS uses its battery during brownouts, which degrades overall battery life. Also, it is an impractical solution over 2kVA.
When the input power fails, the transfer switch opens and the power flows from the battery to the UPS
output. With the inverter always on and connected to the output, this design provides additional filtering and
reduces switching transients when compared with the Standby UPS topology.
In addition, the Line Interactive design usually incorporates a transformer which adds voltage regulation as the input voltage varies. Voltage regulation is an important feature when variable voltage conditions exist; otherwise the UPS would frequently transfer to battery and eventually drop the load. This more frequent battery usage can cause premature battery failure.
The primary benefits of the Line-interactive UPS topology include high efficiency, small size, low cost and
high reliability. Additionally, the ability to correct low or high line voltage conditions make this the dominant
type of UPS in the 0.5-5kVA power range. The Line-interactive UPS is ideal for rack or distributed servers
and/or harsh power environments. Over 5kVA, the use of a line-interactive UPS becomes impractical.
Slide 12: Standby-Ferro UPS
The Standby-Ferro UPS was once the dominant form of UPS in the 3-15kVA range. This design depends
on a special saturating transformer that has three windings (power connections). The primary power path is
from AC input, through a transfer switch, through the transformer, and to the output.
In the case of a power failure, the transfer switch is opened, and the inverter picks up the output load.
In the Standby-Ferro design, the inverter is in the standby mode, and is energized when the input power
fails and the transfer switch is opened. The transformer has a special "Ferro-resonant" capability, which
provides limited voltage regulation and output waveform "shaping". The isolation from AC power transients
provided by the Ferro transformer is as good as or better than any filter available. But the Ferro transformer
itself creates severe output voltage distortion and transients, which can be worse than a poor AC connection.
Even though it is a standby UPS by design, the
Standby-Ferro generates a great deal of heat because the Ferro-resonant transformer is inherently
inefficient. These transformers are also large relative to regular isolation transformers, so Standby-Ferro UPSs are generally quite large and heavy.
Standby-Ferro UPS systems are frequently represented as On-Line units, even though they have a transfer
switch, the inverter operates in the standby mode, and they exhibit a transfer characteristic during an AC
power failure.
The primary benefits of this design are high reliability and excellent line filtering. The limitations include very low efficiency combined with instability when used with some generators and newer power-factor-corrected computers, causing the popularity of this design to decrease significantly.
The Double Conversion On-Line UPS converts AC power to DC and then converts the DC back to AC to
power the connected equipment. The batteries are directly connected to the DC level. This effectively filters
out line noise and all other anomalies from the AC power.
Failure of the AC Power does not cause activation of the transfer switch, because the input AC is charging
the backup battery source which provides power to the output inverter. Therefore, during an AC power
failure, on-line operation results in no transfer time.
There are certainly benefits and limitations of this UPS. A benefit is that it provides nearly ideal electrical
output performance, with no transfer time. But the constant wear on the power components reduces
reliability over other designs. Additionally, both the battery charger and the inverter convert the entire load power flow, resulting in reduced efficiency and increased heat generation. This inefficient use of electrical energy is a significant part of the life-cycle cost of the UPS.
Like the Double Conversion On-Line design, the Delta Conversion On-Line UPS always has the inverter supplying the load voltage. However, in this configuration the primary power source is blended with power from the additional Delta Converter. As the primary power varies away from its normal value, the inverter makes up the difference. The Double Conversion On-Line UPS converts the power to the battery and back again, whereas the Delta Converter moves components of the power from input to output.
In the Delta Conversion On-Line design, the Delta Converter acts with dual purposes. The first is to control
the input power characteristics. This active front end draws power in a sinusoidal manner, minimizing
harmonics reflected onto the utility. This ensures optimal utility and generator system compatibility,
reducing heating and system wear in the power distribution system. The second function of the Delta
Converter is to control input current in order to regulate charging of the battery system.
This input power control makes the Delta Conversion On-Line UPS compatible with generators and reduces
the need for wiring and generator oversizing. Delta Conversion On-Line technology is the only core UPS
technology today protected by patents and is therefore not likely to be available from a broad range of UPS
suppliers.
The benefits of Delta Conversion On-Line UPS include high efficiency, excellent voltage regulation, and
overall reduction in life-cycle costs of energy in large installations. It is impractical in installations under
5kVA.
1. Capacity or “N” System
2. Isolated Redundant
3. Parallel Redundant or “N+1” System
4. Distributed Redundant
5. System plus System Redundant
Before we can fully explain these redundancy configurations, we must first talk about the concept of 'N.' Consider a RAID disk array. If 4 disks are required for storage capacity and exactly 4 disks are present, this is an 'N' design. On the other hand, if there are 5 disks and only 4 are needed for storage capacity, this is an example of an N+1 design.
If 4 disks are required for storage capacity, and there are 2 RAID systems, each with 4 disks, this is
considered 2N.
Occasionally, N+2 designs are seen, but often these are arrived at by accident rather than by purposeful design.
For example, a 6 disk RAID may be deployed with the expectation that all of its storage capacity will be used; however, if the capacity never grows beyond that of 4 disks, the result is actually an N+2 design. N+1 and 2N designs are those that have some form of built-in redundancy.
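The 'N' terminology above can be captured in a small sketch. The helper below is a hypothetical illustration, not part of the course material:

```python
# Labeling a design's redundancy level from the units required ("N") and
# the units actually provisioned, following the RAID examples above.

def redundancy_label(required, provisioned):
    """Return 'N', 'N+k', or 'short by k' for a single system."""
    spare = provisioned - required
    if spare < 0:
        return f"short by {-spare}"
    if spare == 0:
        return "N"
    return f"N+{spare}"

print(redundancy_label(4, 4))  # 'N'   : 4 disks needed, 4 installed
print(redundancy_label(4, 5))  # 'N+1' : one spare disk
print(redundancy_label(4, 6))  # 'N+2' : the accidental case described above
```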
Slide 18: Capacity or “N” System
Now let’s get back to UPS Redundancy designs. The first is the Capacity or “N” System. An N system
simply stated is a system comprised of a single UPS, or a paralleled set of UPSs whose capacity is equal to
the load.
This type of system is by far the most common of the configurations in the UPS industry. The small UPS
under an office desk protecting one desktop is an N configuration. Likewise, a very large 400 kW computer
room is an N configuration whether it has a single 400 kW UPS, or two paralleled 200 kW UPSs. An N
configuration can be looked at as the minimum requirement to provide protection for the critical load.
Advantages:
• Optimal efficiency of the UPS, because the UPS is used to full capacity
• Provides availability over that of the utility power
• Expandable if the power requirement grows (It is possible to configure multiple units in the same
installation. Depending on the vendor or manufacturer, you can have up to 8 UPS modules of the
same rating in parallel.)
Disadvantages:
• Limited availability in the event of a UPS module break down, as the load will be transferred to
bypass operation, exposing it to unprotected power
• During maintenance of the UPS, batteries or down-stream equipment, load is exposed to
unprotected power (usually takes place at least once a year with a typical duration of 2-4 hours)
• Lack of redundancy limits the load’s protection against UPS failures
• Many single points of failure, which means the system is only as reliable as its weakest point
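The last point, that the system is only as reliable as its weakest point, can be quantified: for components in series, availabilities multiply, so the overall figure can never exceed the worst component's. A minimal sketch, with purely illustrative component figures (not vendor data):

```python
# Availability of components connected in series is the product of the
# individual availabilities, so single points of failure dominate.

def series_availability(availabilities):
    """Overall availability of a chain of series components."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

chain = [0.9999, 0.9995, 0.999]  # e.g. utility feed, UPS, downstream panel
overall = series_availability(chain)
print(f"overall availability: {overall:.6f}")  # below the weakest link (0.999)
```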
In a normal operating scenario the primary UPS module will be carrying the full critical load, and the
isolation module will be completely unloaded. Upon any event where the primary module(s) load is
transferred to static bypass, the isolation module would accept the full load of the primary module
instantaneously. The isolation module has to be chosen carefully to ensure that it is capable of assuming
the load this rapidly. If it is not, it may, itself, transfer to static bypass and thus defeat the additional
protection provided by this configuration.
Advantages:
• Relatively cost effective for a two-module system
Disadvantages:
• Reliance on the proper operation of the primary module's static bypass to receive power from the
reserve module
• Requires that both UPS modules' static bypasses operate properly to supply currents in excess of the inverter's capability
• Complex and costly switchgear and associated controls
• Higher operating cost due to a 0% load on the secondary UPS, which draws power to keep it
running
• A two module system (one primary, one secondary) requires at least one additional circuit breaker
to permit choosing between the utility and the other UPS as the bypass source. This is more
complex than a system with a common load bus and further increases the risk of human error.
• Two or more primary modules need a special circuit to enable selection of the reserve module or
the utility as the bypass source (Static Transfer Switch)
• Single load bus per system, a single point of failure
The system is N+1 redundant if the “spare” amount of power is at least equal to the capacity of one system
module; the system would be N+2 redundant if the spare power is equal to two system modules; and so on.
Parallel redundant systems require UPS modules of the same capacity from the same manufacturer. The
UPS module manufacturer also provides the paralleling board for the system. The paralleling board may
contain logic that communicates with the individual UPS modules, and the UPS modules will communicate
with each other to create an output voltage that is completely synchronized. The number of UPS modules
that can be paralleled onto a common bus is different for different UPS manufacturers. The UPS modules in
a parallel redundant design share the critical load evenly in normal operating situations. When one of the
modules is removed from the parallel bus for service (or if it were to remove itself due to an internal failure),
the remaining UPS modules are required to immediately accept the load of the failed UPS module. This
capability allows any one module to be removed from the bus and be repaired without requiring the critical
load to be connected to straight utility.
Advantages:
• Higher level of availability than capacity configurations because of the extra capacity that can be
utilized if one of the UPS modules breaks down
• Lower probability of failure compared to isolated redundant because there are fewer breakers and because modules are online all the time (no step loads)
• Expandable if the power requirement grows. It is possible to configure multiple units in the same
installation
• The hardware arrangement is conceptually simple, and cost effective
Disadvantages:
• Both modules must be of the same design, same manufacturer, same rating, same technology and
configuration
• Still single points of failure upstream and downstream of the UPS system
• The load is exposed to unprotected power during maintenance of the UPS, batteries or down-
stream equipment, which usually takes place at least once a year with a typical duration of 2-4
hours
• Lower operating efficiencies because no single unit is being utilized 100%
• Single load bus per system, a single point of failure
• Most manufacturers need external static switches in order to load-share equally between the two
UPS modules; otherwise they will share within a wide window of 15%. This adds to the cost of the
equipment and makes it more complex
• Most manufacturers need a common external service bypass panel. This adds to the cost of the
equipment and makes it more complex
This design was developed in the late 1990s in an effort by an engineering firm to provide the capabilities of
complete redundancy without the cost associated with achieving it. The basis of this design uses three or
more UPS modules with independent input and output feeders. The independent output buses are
connected to the critical load via multiple Power Distribution Units and Static Transfer Switches. From the
utility service entrance to the UPS, a distributed redundant design and a system plus system design
(discussed in the next section) are quite similar. Both provide for concurrent maintenance, and minimize
single points of failure. The major difference is in the quantity of UPS modules that are required in order to
provide redundant power paths to the critical load, and the organization of the distribution from the UPS to
the critical load. As the load requirement, "N," grows, the savings in the quantity of UPS modules also increase.
Distributed redundant systems are usually chosen for large complex installations where concurrent
maintenance is a requirement and many or most loads are single corded. Savings over 2N also drive this
configuration.
Advantages:
• Allows for concurrent maintenance of all components if all loads are dual-corded
• Cost savings versus a 2(N+1) design due to fewer UPS modules
• Two separate power paths from any given dual-corded load’s perspective provide redundancy from
the service entrance
• UPS modules, switchgear, and other distribution equipment can be maintained without transferring
the load to bypass mode, which would expose the load to unconditioned power. Many distributed
redundant designs do not have a maintenance bypass circuit.
Disadvantages:
• Relatively high cost solution due to the extensive use of switchgear compared to previous
configurations
• Design relies on the proper operation of the STS equipment which represents single points of
failure and complex failure modes
• Complex configuration; in large installations that have many UPS modules and many static transfer
switches and PDUs, it can become a management challenge to keep systems evenly loaded and
know which systems are feeding which loads.
• Unexpected operating modes: the system has many operating modes and many possible
transitions between them. It is difficult to test all of these modes under anticipated and fault
conditions to verify the proper operation of the control strategy and of the fault clearing devices.
• UPS inefficiencies exist due to less than full load normal operation
Slide 22: System Plus System Redundant or “2N”
System plus System, Multiple Parallel Bus, Double-Ended, 2(N+1), 2N+2, [(N+1) + (N+1)], and 2N are all
nomenclatures that refer to variations of this configuration.
With this design, it now becomes possible to create UPS systems that may never require the load to be
transferred to the utility power source. These systems can be designed to eliminate every conceivable single point of failure. However, the more single points of failure that are eliminated, the more expensive the design will be to implement. Most large system plus system installations are located in standalone,
specially designed buildings. It is not uncommon for the infrastructure support spaces (UPS, battery,
cooling, generator, utility, and electrical distribution rooms) to be equal in size to the data center equipment
space.
This is the most reliable, and most expensive, design in the industry. It can be very simple or very complex
depending on the engineer’s vision and the requirements of the owner. Although a name has been given to
this configuration, the details of the design can vary greatly and this, again, is in the vision and knowledge of
the design engineer responsible for the job. The 2(N+1) variation of this configuration revolves around the
duplication of parallel redundant UPS systems. Optimally these UPS systems would be fed from separate
switchboards, and even from separate utility services and possibly separate generator systems. The
extreme cost of building this type of facility has been justified by the importance of what is happening within
the walls of the data center and the cost of downtime to operations. Many of the world’s largest
organizations have chosen this configuration to protect their critical load.
The fundamental concept behind this configuration requires that each piece of electrical equipment can fail
or be turned off manually without requiring that the critical load be transferred to utility power. Common in
2(N+1) design are bypass circuits that will allow sections of the system to be shut down and bypassed to an
alternate source that will maintain the redundant integrity of the installation.
For example if a critical load is 300 kW, the design requires that four 300 kW UPS modules be provided, two
each on two separate parallel buses. Each bus feeds the necessary distribution to feed two separate paths
directly to the dual-corded loads. The single-corded load, illustrated in Figure 6, shows how a transfer
switch can bring redundancy close to the load. However, Tier IV power architectures require that all loads be dual-corded.
Companies that choose system plus system configurations are generally more concerned about high
availability than the cost of achieving it. These companies also have a large percentage of dual-corded
loads.
Advantages:
• Two separate power paths allows for no single points of failure; Very fault tolerant
• The configuration offers complete redundancy from the service entrance all the way to the critical
loads
• In 2(N+1) designs, UPS redundancy still exists, even during concurrent maintenance
• UPS modules, switchgear, and other distribution equipment can be maintained without transferring
the load to bypass mode, which would expose the load to unconditioned power
• Easier to keep systems evenly loaded and know which systems are feeding which loads.
Disadvantages:
• Highest cost solution due to the amount of redundant components
• UPS inefficiencies exist due to less than full load normal operation
• Typical buildings are not well suited for large highly available system plus system installations that
require compartmentalizing of redundant components
The second factor is Risk Tolerance. Companies that have not experienced a major failure are typically more risk tolerant than companies that have. Smart companies will learn from what companies in their industry are doing. This is called "benchmarking," and it can be done in many ways. The more risk intolerant a company is, the more internal drive there will be to have more reliable operations and disaster recovery capabilities.
Other factors to consider are availability requirements – How much downtime can the company withstand in
a typical year? If the answer is none, then a high availability design should be in the budget. However, if
the business can shut down every night after 10 PM, and on most weekends, then the UPS configuration
wouldn’t need to go far beyond a parallel redundant design. Every UPS will, at some point, need
maintenance, and UPS systems do fail periodically, and somewhat unpredictably. The less time that can be
found in a yearly schedule to allow for maintenance, the more a system needs the elements of a redundant design.
Types of loads (single vs. dual-corded) are another area to examine. Dual-corded loads provide a real
opportunity for a design to leverage a redundant capability, but the system plus system design concept was
created before dual-corded equipment existed. The computer manufacturing industry was definitely
listening to their clients when they started making dual-corded loads. The nature of the loads within the data center will help guide a design effort, but is much less a driving force than the issues stated above.
Finally, budget. The cost of implementing a 2(N+1) design is significantly more, in every respect, than a
capacity design, a parallel redundant design, or even a distributed redundant. As an example of the cost
difference in a large data center, a 2(N+1) design may require thirty 800 kW modules (five modules per
parallel bus; six parallel busses). A distributed redundant design for this same facility requires only eighteen
800 kW modules, a huge cost savings.
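The module-count comparison above is simple arithmetic; a short sketch using the figures from the text:

```python
# Module counts for the large data center example: a 2(N+1) design with
# five 800 kW modules per parallel bus and six parallel buses, versus a
# distributed redundant design needing eighteen modules for the same
# facility (both counts taken from the text above).

modules_per_bus = 5
buses = 6
modules_2n_plus_1 = modules_per_bus * buses
modules_distributed = 18

print(modules_2n_plus_1)                        # 30 modules of 800 kW
print(modules_2n_plus_1 - modules_distributed)  # 12 fewer modules needed
```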
Most of this excess cost can be recovered by implementing a method and architecture that can adapt to
changing requirements in a cost-effective manner while at the same time providing high availability.
Slide 27: Dual Path Environments
However, equipment with a single power path (single-corded) introduces a weakness into an otherwise
highly available data center. Transfer switches are often used to enhance single-corded equipment
availability by providing the benefits of redundant utility paths. A transfer switch is a common component in
data centers and is used to perform the following functions:
1. Switching UPS and other loads from utility to generator during a utility power failure
2. Switching from a failed UPS module to utility or another UPS (depending on designs)
3. Switching critical IT loads from one UPS output bus to another in a dual path power system
2. Use a transfer switch at the point of use to select a preferred source, and when that source fails
switch to the second power path
3. Use a large centralized transfer switch fed from the two sources, to generate a new power bus to
supply a large group of single corded loads
The ATS is fed by two sources, the utility and the generator, with the utility as the preferred source. When the
preferred source is unacceptable, the ATS automatically switches to the generator. Standby generator
systems are often used in conjunction with UPS systems.
Power Distribution I
Transcript
Slide 1
Welcome to the Data Center University TM course on Power Distribution I.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 4: Introduction
Power distribution is the key to maintaining availability in the data center. Many instances of equipment
failure, downtime, software and data corruption, are the result of a failure to provide adequate power
distribution. Sensitive components require consistent power distribution as well as power that is free of
interruption or distortion.
The consequences of large-scale power incidents are well documented. Across all business sectors, an
estimated $104 billion to $164 billion per year are lost due to power disruptions, with another $15 billion to
$24 billion per year in losses attributed to secondary power quality problems.
It is imperative that critical components within the data center have an adequate and steady supply of power.
Slide 5: Introduction
It is important to provide a separate, dedicated power source and power infrastructure for the data center.
The building in which a data center is located could have a mixture of power requirements, such as air
conditioners, elevators, office equipment, desktop computers, and kitchen area microwaves and
refrigerators.
If the data center shares a common power source with the rest of the building, and power consumption is at
a high level, it could impact the data center’s air handlers, for example, and greatly increase the risk of
unanticipated downtime.
This course will explore the topic of power distribution within the data center. Let’s begin with a review of
how power is transmitted to the data center.
The power generation facility at the utility generates three phases. Thus, three wires are used to transmit
power. Generating and distributing 3-phase power is more economical than distributing single phase power.
Single phase power only has one “hot” wire.
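The economy of 3-phase transmission can be illustrated with the standard power formulas. The voltage and current values below are illustrative, not from the course:

```python
# Power carried at the same line voltage and line current, single-phase
# versus three-phase (P = V*I vs P = sqrt(3)*V*I for line-to-line values).
import math

V = 480.0   # line-to-line voltage (volts)
I = 100.0   # line current (amps)

single_phase_kw = V * I / 1000                # delivered over 2 conductors
three_phase_kw = math.sqrt(3) * V * I / 1000  # delivered over 3 conductors

print(f"single-phase: {single_phase_kw:.1f} kW over 2 conductors")
print(f"three-phase:  {three_phase_kw:.1f} kW over 3 conductors")
# Per conductor, the three-phase line delivers more power, which is the
# economy the narration refers to.
```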
Now that we’ve reviewed the basic concepts of power transmission, let’s move on to nominal versus normal
voltage.
The voltage received, therefore, can vary depending upon the consumers’ position along the power line and
depending upon the total load that the line is expected to supply.
By the time source power reaches the computer installation site, it can suffer voltage losses of up to 11%,
even under optimal conditions.
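As a simple illustration of that worst-case figure (the nominal voltage chosen here is illustrative):

```python
# Worst-case delivered voltage given the up-to-11% loss cited above.

nominal_v = 208.0      # illustrative nominal distribution voltage
loss_fraction = 0.11   # worst-case loss from the text
delivered_v = nominal_v * (1 - loss_fraction)
print(f"{delivered_v:.2f} V delivered in the worst case")
```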
If the voltage coming into the data center is either too high or too low, it can impact equipment by causing it
to run hot. This is corrected with the utilization of a transformer. Now let’s explore transformers.
Stepping up or down 3-phase power requires what is called a Delta transformer. It is called Delta because
its circuit diagram looks like the Greek letter Delta.
As the current flows through the primary coil, it induces current in the second wire (called the secondary
coil). This phenomenon is called the law of induction. The strength of the induced current depends upon the
number of times the second wire is wrapped around the iron core. By adjusting the number of turns on the
secondary coil, the transformer’s output current and voltage can be determined.
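The relationship just described is the ideal-transformer turns-ratio equation; a brief sketch (the turn counts and voltages are illustrative):

```python
# Ideal transformer: output voltage scales with the ratio of secondary
# turns to primary turns, Vs = Vp * (Ns / Np).

def secondary_voltage(v_primary, n_primary, n_secondary):
    """Output voltage of an ideal transformer."""
    return v_primary * n_secondary / n_primary

print(secondary_voltage(480, 100, 25))  # step-down: fewer secondary turns
print(secondary_voltage(120, 25, 100))  # step-up: more secondary turns
print(secondary_voltage(120, 50, 50))   # equal turns: voltage unchanged
```

The equal-turns case corresponds to the isolation transformer discussed later in this course.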
The step up transformer works in a similar manner. The only difference is that the primary coil has fewer
turns or windings than the secondary coil. In the case of the step up transformer, the voltage coming into the
transformer is less than the voltage going out of the transformer.
The Wye transformer is different from the Delta transformer because it outputs not just three phases but also a neutral wire.
Wye to Wye transformers are not as common as Delta to Delta transformers, but can sometimes be found supporting distribution in cases where the utility is not the primary power source. An example would be upstream of a UPS and downstream of a generator.
Slide 16: Isolation Transformer
A transformer that contains an equal number of turns or windings in both the Primary and Secondary coils is
called an isolation transformer.
The voltage coming into the transformer is equal to the voltage coming out of the transformer.
Remember that within the transformer, the law of induction dictates that transformation takes place without
any electrical connection between the input and output. The benefit of the isolation transformer is that it
filters out electrical spikes on the input, thereby providing better power quality on the output.
Now that we’ve covered transformers, let’s discuss the service entrance.
Beyond the Main Service Entrance, the power is distributed within the facility. Power distribution within the
facility can be broken down into six areas:
Main electrical service panel, transformers, feeders, subpanels, branch circuits and receptacles.
Let’s explore each of these items in more detail, beginning with the main electrical service panel.
Power Distribution I Page | 10
The service transformer is wired directly to the main Electrical Service Panel. This panel has several key
components, so let’s take a look at a typical diagram of the main electrical service panel, and examine each
of these components.
The first component is the neutral bus. The neutral bus is a bar to which all the neutral wires are connected.
This is done to keep all of the neutral wires referencing the same voltage.
Slide 21: Main Electrical Service Panel
The earth ground connection is the next component we’ll examine. This connection acts as the ground
reference for the entire electrical infrastructure. It is made by driving a grounding electrode into the earth.
This electrode is then connected to the Main Service Panel via the neutral bus which is bonded to the
ground bus using the neutral to ground bond.
Slide 23: Facility Transformer
Within the facility, transformers are used to provide either Delta or Wye power for isolation, for stepping voltage up or down, or to break out a single phase from a 3-phase source.
They are also useful in breaking down a facility’s power requirements into zones. Each zone can be
provided with a dedicated transformer with a specific VA rating. Typical ratings range from 30 kVA to 225
kVA. Transformers are ideal for this partitioning effect because they isolate loads from the Main Service
Panel. Thus, power problems such as harmonics and overloaded neutrals can be isolated from the main
electrical service.
However, whenever a transformer of 1000 VA or larger is used within the facility, the secondary winding
must be grounded to building steel. In this case the transformer is considered a separately derived power
source and must be grounded as such.
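As a back-of-the-envelope check when zoning, the full-load current of a 3-phase zone transformer follows from its kVA rating. The rating and voltage below are assumed example values.

```python
import math

def full_load_amps_3ph(kva_rating, line_to_line_v):
    """Full-load current of a 3-phase transformer: I = VA / (sqrt(3) * V_LL)."""
    return kva_rating * 1000.0 / (math.sqrt(3) * line_to_line_v)

# A 75 kVA zone transformer serving a 208 V line-to-line zone
amps = full_load_amps_3ph(75, 208)  # roughly 208 A
```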
Slide 24: Subpanels
Subpanels are metal boxes that contain all the buses and breakers for distribution to receptacles and loads.
They are sized by the number of circuit breakers and bus configurations.
Typical subpanels include 240/120V single phase with three wires and 208/120V 3-phase with four wires.
Subpanels are constructed and configured to ensure that all phases are equally loaded.
Branch circuits consist of conductors and conduit. The size of the conductor cables in both the feeder and branch circuits is outlined in National Electrical Code (NEC) Article 310.
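The 208/120 V figure quoted for 3-phase subpanels follows directly from the wye geometry: line-to-neutral voltage is line-to-line voltage divided by the square root of three. A quick check:

```python
import math

def line_to_neutral(v_line_to_line):
    """In a 3-phase wye system, V_LN = V_LL / sqrt(3)."""
    return v_line_to_line / math.sqrt(3)

# 208 V between any two phases yields about 120 V from any phase to neutral
v_ln = line_to_neutral(208)  # ~120 V
```

(The 240/120 V single-phase panel is different: its 120 V legs come from a center-tapped winding, so each leg is simply half of 240 V.)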
The degree to which an electrician can achieve a noise-free circuit for sensitive equipment is usually
dependent upon a number of factors:
• The quality of power delivered by the utility
• The age and design of the building
• The integrity of the grounding system throughout the building
• The amount of electrical noise generated within the building
• The degree to which electrical loads are balanced throughout the building
In some cases, it is necessary to isolate the circuit all the way back to the main distribution panel at the
service entrance.
(For more information on plugs, please refer to the Data Center University Course entitled “Fundamentals of
Power”.)
Now that we’ve addressed each of the components in the service entrance, let’s move on to the different
methods of power distribution, beginning with direct connect.
One approach is to run conduits from large wall mounted or floor mounted PDUs to each cabinet location.
This works moderately well for a small server environment with a limited number of conduits, but it does not work well for larger data centers, where cabinet locations require multiple power receptacles.
Slide 33: Optimized Power Distribution System
Data center power distribution systems have evolved in response to the needs of the modern data center.
Improvements to power distribution systems have been introduced over time.
Today, an updated power distribution system could have several enhanced features, most notably:
• Branch circuit power metering
• Overhead cable tray with flexible power cords
• Overhead fixed busway with removable power taps
• High power, pluggable rack power distribution units
• Transformerless Power Distribution Units, and
• Power capacity management software
Slide 35: Optimized Power Distribution System
Here is another example of a similar power distribution system that distributes to IT rows using one or more
overhead busways. The busways are installed up front and traverse the entire planned IT rack layout. When
a group of racks is to be installed, a low-footprint modular PDU is installed at the same time and plugged
into the overhead busway.
The connection to the busway is also shown here. Instead of traditional circuit breaker panels with raw wire terminations, the modular PDU has a “backplane” into which pre-terminated, shock-safe circuit breaker modules are installed. This arrangement allows the face of the PDU to be much narrower and eliminates on-site termination of wires.
The modular PDU initially has no branch circuit modules installed. The power circuits from the modular PDU to the IT racks are flexible cables that are plugged into the front of the modular PDU on site to meet the requirements of each specific rack as needed. The branch circuit cables to the IT enclosures are pre-terminated with breaker modules that plug into the shock-safe backplane of the modular PDU.
In this system, a PDU for a new row of IT enclosures, along with all of the associated branch circuit wiring
and rack outlet strips, can be installed in an hour, without any wire cutting or terminations.
Options also exist for the deployment of transformerless, rack-based distribution units. An example of such a deployment would include a 415 V line-to-line UPS that directly feeds the transformerless PDUs that
distribute to the racks. In the case of North America, a 480 V to 415 V step-down transformer could be installed upstream of the UPS.
Other parts of the world typically receive 400V from the utility and convert it to 220V at the service entrance.
Slide 38: Summary
Let’s conclude with a brief summary.
• It is imperative that critical components within the data center have an adequate and steady supply of power. The delivery of power is the key to maintaining availability in the data center. Avoiding equipment failure, downtime, and software and data corruption lies in the management of power distribution.
• As power is distributed across long distances over power lines, losses in voltage caused by resistance and inductance can occur as the power works its way through various transformers. Voltage is either stepped up or stepped down by a series of these transformers.
• Transformers are essential to transmit and distribute power, because if the voltage coming into the
data center is either too high or too low, it can impact the equipment by causing it to run hot.
• The wide range of receptacles used throughout the world today is due to the wide range of power requirements of the electrical loads currently in existence.
• The degree to which an electrician can achieve a noise-free circuit for sensitive equipment is
dependent on a number of factors, including: quality of power; building age/design; grounding
system integrity; electrical noise amounts; and the degree of balanced electrical loads.
• Distributed power designs are emerging as the preferred configuration for larger server environments because they are easier to manage, less expensive to install, and more resistant to physical accidents than direct-connect power distribution.
This ends Power Distribution Part I. Part II will explore the issue of power distribution in new high density
data center environments.
Physical Infrastructure Management Basics
Transcript
Slide 1
Welcome to the Data Center University™ course on Physical Infrastructure Management Basics.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 3: Objectives
At the completion of this course, you will be able to:
• Identify physical infrastructure challenges for incident, availability, capacity, and change management
• Summarize physical infrastructure management strategies for Enterprise Management Systems (EMS) and Building Management Systems (BMS)
• Recognize physical infrastructure management standards
• Provide examples of physical infrastructure management solutions
Slide 4: Introduction
The key to managing physical infrastructure is to employ the same strategies used in the management of
servers, storage, switches, and printers. The core issues of maintaining system availability as well as
managing problems and change are similar, although each device may have specific problems based on its
unique characteristics.
Essential categories of management for physical infrastructure include Incident Management, Change Management, Capacity Management, and Availability Management. Implementing the strategies suggested in this course will contribute to a successful application of the ITIL (Information Technology Infrastructure Library) framework to all aspects of data center operations.
The purpose of this course is to demonstrate a systematic approach to identifying, classifying, and solving
the management challenges of next-generation data centers.
For more information about physical infrastructure, please participate in the DCU course An Overview of Physical Infrastructure.
ITIL is a set of guidebooks defining models for the planning, delivery, and management of IT services, created by the UK government’s Central Computer and Telecommunications Agency (CCTA) and owned by the UK Office of Government Commerce. ITIL is not a standard but a framework whose purpose is to provide IT organizations with tools, techniques, and best practices that help them align their IT services with their business objectives.
IT organizations typically select and implement the pieces that are most relevant to solving their business
problems. The categories and guidelines defined by ITIL can be extremely helpful in determining and
achieving IT service management objectives, and many IT vendors such as HP, IBM, and Microsoft have
used ITIL as a model for their operations framework.
Although the ITIL processes are all related in one way or another, it is not necessary to analyze the entire
spectrum of processes and flows. Identifying which ones are critical and relevant to managing physical
infrastructure is a helpful aid in achieving success in the “Zero Layer” of the data center hierarchy. ITIL is a
wide-encompassing framework, and a complete explanation of it is outside the scope of this course.
Throughout this course, we will identify the most critical management processes as defined by ITIL for
management of physical infrastructure, and outline key problems as well as requirements for effective
physical infrastructure management in each area.
(You are encouraged to visit www.itil.co.uk for further information on ITIL itself.)
The remainder of this course addresses the key management challenges that each of these processes
presents.
Slide 11: Physical Infrastructure Management Challenges
Using the ITIL process model, the challenges and underlying problems in the physical infrastructure layer
are presented in four charts corresponding to the four key ITIL management processes.
Physical infrastructure, like any other IT equipment, should be monitored, with events fed into an incident
management process, either via a physical infrastructure incident management system or a general-
purpose incident management tool such as a network management or building management system.
Specific Incident Management challenges include identifying the problem location, identifying the resolution owner, prioritizing incident urgency, and executing proper corrective action. Let’s begin with the challenge of identifying the problem location.
Different people can be responsible for different locations at different times of the day or week. A solution is to establish a management system that provides the ability to set and assign owner roles.
The next challenge is prioritizing incident urgency. A solution is to have a management tool that alerts the user to the impact, urgency, and priority of individual events that threaten system availability.
The final challenge of incident management is to execute proper corrective actions. Because it can be
difficult for one person to have all the expertise necessary to troubleshoot all issues, a system that provides
recommended actions and guidance can help ensure the proper corrective action is executed.
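A tool of the kind described might rank events with a simple impact/urgency matrix. The levels and mapping below are a hypothetical sketch; real incident-management tools define their own scales.

```python
# (impact, urgency) -> priority, 1 being most critical (assumed example mapping)
PRIORITY_MATRIX = {
    ("high", "high"): 1,
    ("high", "low"): 2,
    ("low", "high"): 2,
    ("low", "low"): 3,
}

def prioritize(impact, urgency):
    """Look up the priority an event should receive."""
    return PRIORITY_MATRIX[(impact, urgency)]

p = prioritize("high", "high")  # 1: act immediately
```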
The next challenge deals with availability management. Availability management is concerned with systematically measuring availability and reliability requirements against actual performance, and, when necessary, introducing improvements that allow the organization to achieve and sustain optimum quality IT services at a justifiable cost.
Once physical infrastructure requirements have been established, service levels must be monitored, with particular care given to understanding the potential downtime that can result from individual components failing and their impact on the entire system.
Availability metrics are necessary in order to track achievement against service levels agreed upon between
IT and the internal business customer. A solution is to provide a tool that reports uptime and downtime,
physical infrastructure versus non-physical infrastructure downtime summaries, causes of downtime,
incident timestamp and duration, as well as time to recovery.
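The uptime and recovery figures such a reporting tool produces reduce to a couple of standard formulas, sketched below.

```python
def availability(uptime_hours, downtime_hours):
    """Measured availability: fraction of total time the service was up."""
    return uptime_hours / (uptime_hours + downtime_hours)

def availability_from_mtbf(mtbf_hours, mttr_hours):
    """Steady-state availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# 8.76 hours of downtime in an 8,760-hour year is "three nines"
a = availability(8760 - 8.76, 8.76)  # ~0.999
```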
Easily correctable problems with physical infrastructure often go unnoticed until a failure occurs. A solution
is to use a system that does not require training or expert knowledge, which provides alerting and global
thresholds for UPS runtime, power distribution unit load by phase, battery health, as well as rack
temperature and humidity.
Moving on, we find the third availability management challenge: planned downtime.
Planned downtime is necessary for many data centers, but tools that do not account for planned downtime can have two negative effects: first, false alerts leading to incorrect actions by personnel; second, maintenance modes left uncorrected after maintenance is complete, such as a UPS left in bypass or a cooling unit left offline. A solution is to provide a system that allows scheduled maintenance windows, both suppressing alerts during the window and alerting the user to any maintenance conditions left uncorrected after the window has closed.
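The scheduled-window behavior can be sketched as a simple suppression check. The window timestamps are assumed example values.

```python
from datetime import datetime

def should_alert(event_time, maintenance_windows):
    """Suppress alerts whose timestamp falls inside any scheduled window."""
    return not any(start <= event_time <= end
                   for start, end in maintenance_windows)

windows = [(datetime(2013, 6, 1, 22, 0), datetime(2013, 6, 2, 2, 0))]
should_alert(datetime(2013, 6, 1, 12, 0), windows)   # True: outside the window
should_alert(datetime(2013, 6, 1, 23, 30), windows)  # False: suppressed
```

A complete tool would also re-check for maintenance conditions, such as a UPS left in bypass, once the window closes.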
Power, cooling, rack space, and cabling are all IT resources that require capacity management. Product architectures that allow incremental purchases of these resources on short time frames are preferable to legacy architectures that require specifying, engineering, purchasing, and installing over yearlong timeframes, especially with regard to Total Cost of Ownership (TCO) considerations.
Specific Capacity Management challenges include: Monitoring and recording data center equipment and
infrastructure changes; providing physical infrastructure capacity; optimizing the physical layout of existing
and new equipment; as well as incrementally scaling the data center infrastructure. Let’s discuss each one,
beginning with the challenge to monitor and record data center equipment and infrastructure changes.
As additional equipment is added to the data center over time, existing power and cooling capacity may be
inadvertently exceeded, resulting in downtime. UPS batteries age and may need servicing. A solution is to
have a system that monitors current draw for each branch circuit or rack and alerts the appropriate person
to potential overload situations. The system could also monitor UPS runtimes and load thresholds.
Because IT refreshes tend to be dynamic and are difficult to predict, physical infrastructure capacity
requirements often go unnoticed until it is too late. Physical infrastructure capacity can be managed with a
system that provides trending analysis and threshold violation information on UPS load, runtime, power
distribution, cooling, rack space utilization, and patch panel port availability. Such a system can ensure
adequate advance notice and information necessary for procurement and deployment of additional capacity.
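Trending analysis of the sort described can be as simple as fitting a line through recent load samples and extrapolating to the threshold. The monthly samples below are assumed example data.

```python
def months_until_threshold(samples, threshold):
    """Least-squares line through equally spaced samples, extrapolated forward."""
    n = len(samples)
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # load is flat or shrinking; no crossing on this trend
    return (threshold - samples[-1]) / slope

# UPS load in kW over four months, with an alerting threshold of 80 kW
m = months_until_threshold([50, 55, 60, 65], 80)  # 3.0 months of headroom
```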
Sometimes, when a data center is updated, the new configuration is not efficient. A poorly configured data
center may use more space and cost more to operate than is necessary. A management tool can optimize
updates and reconfiguration by analyzing placement and layout of new IT equipment, to meet power, rack
space, cooling, and cabling needs.
And finally, data center managers need to solve the challenge to scale data center infrastructure
incrementally.
As infrastructure is added to the data center, it can be difficult to reconfigure tools to monitor the new objects.
To avoid these problems, use tools that leverage existing IT infrastructure investment and monitor additional
new physical infrastructure devices in an economical, simple and quick way.
Maximizing the ratio of planned to unplanned work in a data center requires a formalized change management process for all aspects of operation. Changes such as relocating a server, rewiring a patch panel, or moving equipment from a warmer area of the data center to a cooler area are examples of changes requiring preparation, planning, simulation, and an audit trail.
Moving servers can cause problems for power, cooling, rack space allocation, etc. These problems can be
avoided by using a physical infrastructure management tool that can recommend workflows for planning,
executing, and tracking changes.
Implementing firmware changes in individual physical infrastructure components is our next challenge.
Choices need to be made regarding when to perform physical infrastructure firmware upgrades. Some
businesses opt for scheduling upgrades during off hours and weekends. This approach, however, can tax
the personnel who need to work overtime hours and opens the door for more possible human error.
Other businesses opt for scheduling upgrades during normal operating hours. If the data center is not
properly equipped to operate physical infrastructure equipment in bypass mode, the risk of downtime during
peak data center demand could be increased. If, however, the data center is equipped with modern physical
infrastructure equipment and management support systems, that risk is greatly diminished.
Firmware upgrades are increasingly complex to manage. Using a system that notifies the administrator
whenever new bug fixes or feature enhancements to firmware are available can provide mass remote
upgrade capabilities.
Maintaining spares at compatible firmware revision levels is the last change management challenge we’ll discuss.
When spares are swapped into a modular architecture they may not be at a supported firmware revision or
combination, causing downtime. Resolve this challenge by using a physical infrastructure management
solution that ensures that spares match production equipment.
Now let’s put all the pieces together and develop a physical infrastructure management strategy.
• Implement an incident management system
• Set and measure availability targets
• Monitor and plan for long term changes in capacity
• Then, get change management processes in place
Organizations typically focus on fully implementing each management process for three to six months
before moving on to the next one.
Slide 32: Management Focus
As mentioned previously, physical infrastructure is the foundation upon which Information Technology and
Telecommunication Networks reside. This diagram shows how physical infrastructure supports Technology,
Process, and People to provide a Highly Available Network.
An EMS handles “device centric” information, based on individual network IP addresses. EMS information
may be the status of a single server, a networking device, or a storage device and is communicated over an
existing IT network.
A BMS, by contrast, handles “data point centric” information, such as the temperature that a sensor reports. A BMS typically uses its own serial-based network, using either proprietary communication protocols or some level of standard protocols, such as MODBUS.
Best practices dictate that the following list of devices be monitored at the rack level:
• A minimum of two temperature data points
• Individual branch circuits
• Transfer switches
• Cooling devices, and
• UPS systems
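A monitoring system implementing these best practices boils down to threshold checks on each monitored point. The device names and limits below are assumptions for illustration.

```python
# Hypothetical per-rack thresholds (two temperature points, one branch circuit)
THRESHOLDS = {
    "temp_top_c": 45.0,
    "temp_bottom_c": 40.0,
    "branch_circuit_a": 16.0,
}

def violations(readings):
    """Return the names of every monitored point above its threshold."""
    return [name for name, value in readings.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]]

alerts = violations({"temp_top_c": 47.0, "branch_circuit_a": 12.0})
# alerts == ["temp_top_c"]
```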
Branch circuit monitoring plays an especially important role in physical infrastructure management. Since branch circuit faults can occur from time to time, active branch management contributes to increased availability. Investing in a quality brand of circuit breaker also helps to minimize instances of downtime. As IT equipment density increases, monitoring cooling devices is also increasingly critical to availability.
Starting in the late 1990s, several organizations quickly installed IT systems to solve urgent business needs.
This quick effort created multiple point solutions. As a result, in many installations, IT departments tend to
manage equipment using “element managers” for different categories of equipment.
Slide 41: Element Managers
As shown in this illustration, it is common to utilize a ‘storage manager’ such as EMC ControlCenter, for
storage, a ‘network manager’ such as CiscoWorks, for the networking equipment, and a server manager,
such as HP Insight Manager, for servers.
The advantage of these ‘element managers’ is that they are generally easy to deploy and use since they are
focused on managing one category of devices – in many cases devices specific to an individual vendor. The
limitation of this strategy is that there is no coordination of the different element managers.
Similar to EMSs, BMSs frequently manage some of the data points of physical infrastructure, such as
building power, comfort air, building environment, or building security. However, in small and medium data
centers, BMS systems are not as sophisticated as physical infrastructure management systems when it
comes to monitoring and controlling devices such as InRow™ CRACs, PDUs, and UPSs.
Slide 45: Physical Infrastructure Element Manager
This diagram shows how a physical infrastructure element manager fits into the high-level management
system.
The physical infrastructure element manager provides detailed information through a direct connection, in the same way that server, storage, and network element managers do.
However, physical infrastructure element managers have the advantages of being easier and less
expensive to install. Physical infrastructure element managers automatically collect all individual device
information and are pre-programmed with select rules and policies to manage physical infrastructure.
Slide 47: Summary
Let’s conclude with a brief summary.
• Physical infrastructure is the foundation upon which Information Technology and
Telecommunication Networks reside. Physical infrastructure includes power, cooling, racks and
physical structure, cabling, physical security and fire protection, management systems and
services.
• The challenge for physical infrastructure incident management is to return systems to their normal
service level, with the smallest possible impact on the business activity of the organization and the
user.
• The challenge for physical infrastructure capacity management is to provide the required IT
resources at the right time, at the right cost, and to align those resources with the current and
future requirements of the internal business customer.
Rack Fundamentals
Transcript
Slide 1
Welcome to the Data Center University™ course on Rack Fundamentals.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 4: Introduction
As technology compaction has evolved from mainframes to blade servers, the need for power, cooling, and space optimization has dramatically increased. In their simplest form, racks and enclosures are the building blocks of a data center. Cutting-edge rack technology streamlines cable management and allows the vertical stacking of IT equipment, reducing server sprawl and maximizing IT real estate. The role of the rack has therefore become strategic to the availability of a given network. How these racks and enclosures are selected and configured affects a data center’s availability and agility for years after an installation is completed. This course will put you one step closer to understanding the importance and the impact racks have on a data center.
Slide 5: Industry Standards
Two types of standards for racks and enclosures are:
1. The 19 inch standard
2. Earthquake standards
The 19 Inch Standard defines important dimensions for racks, enclosures, and rack mounted equipment.
For example, EIA-310 defines minimum enclosure opening between rails to be 450 mm (17.72 inches), to
provide clearance for equipment chassis widths.
The width between the centers of the equipment mounting holes is 465 ± 1.6 mm (18.31 inches ± 0.063
inches).
The minimum enclosure width to provide clearance for equipment front panels/bezels/faceplates is 483.4 mm (19 inches).
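The three EIA-310 dimensions quoted above can be captured as a quick compatibility check, for example for a chassis that must clear the opening between the rails.

```python
# EIA-310 dimensions as quoted in this course (millimetres)
RAIL_OPENING_MM = 450.0   # minimum opening between rails
HOLE_SPAN_MM = 465.0      # mounting-hole centers, +/- 1.6 mm tolerance
PANEL_WIDTH_MM = 483.4    # minimum width cleared for front panels ("19 inch")

def chassis_fits(chassis_width_mm):
    """A chassis must clear the 450 mm opening between the rails."""
    return chassis_width_mm <= RAIL_OPENING_MM

chassis_fits(448.0)  # True: clears the rail opening
chassis_fits(455.0)  # False: too wide for the rails
```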
The trend for open frame racks is to have threaded holes. There are many thread sizes, but #12-24 is the
most common thread size. The main advantage of threaded holes placed directly into the rack is that
deployment is fast, since there are no cage nuts to install.
The Network Equipment Building System (NEBS) and European Telecommunications Standards Institute (ETSI) standards have more stringent requirements than the UBC and Eurocode, and specify floor anchoring and reinforced frame structures for enclosures.
Slide 11: Open Frame Rack
The Open Frame Rack comes in two basic types: Two Post and Four Post.
Depending upon the manufacturer, common rack accessories may include shelving, vertical cable
organizers, brackets for power distribution, and baying kits which permit several racks to be joined together.
Slide 13: Four Post Frames
The Four Post frame allows equipment to be supported from the front and back, making it a more versatile
option than the two post frame. It is typically used for server, networking, and telecom applications in IT
environments. The obvious advantage to the Four Post frame is that it is physically stronger than the Two
Post frame and can support heavier equipment. Depending upon the manufacturer, common rack
accessories may include light and heavy-duty shelves, vertical cable organizers, brackets for power
distribution, and baying kits.
The Open Frame rack typically relies on natural convection to dissipate heat from equipment. As the density
of rack mounted equipment increases, natural convection has a limited ability to remove the heat that needs
to be dissipated. Enclosures, discussed in the next section of this course, provide an improved means to
control and manage airflow.
Slide 17: Open Frame Racks vs. Enclosures
Compared to open frame racks, enclosures offer improved static load capacity, cooling, security, and multi-
vendor compatibility for rack mounted equipment.
Next, we will discuss some common enclosure types.
Server applications most commonly use enclosures that are 42U high x 600 mm wide x 1070 mm deep. Server
enclosures have been getting deeper to support higher densities of power and cabling. Some applications have
high cable density, combine network switches with server equipment, or use side-to-side cooling instead of
front-to-back cooling; these applications require enclosures that are wider than 600 mm.
Some rooms with high ceilings may permit enclosures as tall as 47U, and some 47U applications may also
require wide enclosures. When using tall enclosures, be cautious about safety regulations and overhead
fire-suppression sprinklers.
Slide 20: Networking Enclosure
This illustration shows a networking enclosure from behind. Networking applications require wider racks than
server applications, to give room for cabling. A fully loaded networking enclosure can require up to 2000
Category 5 or Category 6 network cables.
Slide 21: Seismic Enclosure
Here is an example of a seismic enclosure. Seismic enclosures are specially reinforced to protect
equipment from earthquakes. To ensure equipment and personnel safety, seismic enclosure installations
should conform to regional standards, such as NEBS or ETSI for Zone 4. Most commercial data centers and
telecom central offices that are not in high-risk zones utilize less stringent standards, such as the UBC or
Eurocode, rather than the stricter NEBS or ETSI standards.
Rack Fundamentals
Wall mount enclosures are useful when only a couple of pieces of rack equipment need to be enclosed. One
of the key features of the wall mount enclosure is its double-hinged frame construction, which allows easy
access to the rear of the rack mounted equipment.
Wall mount enclosures conserve floor space and provide a neat, clean installation for wiring closets.
The most common problems that pose a challenge to the optimization of lifecycle costs with regard to rack
systems are:
• Non-standardized racks. Non-standardized racks lead to a higher total cost of ownership, due to the
unique design features dictated by the IT equipment manufacturers. These non-standard design
features make moves and the integration of multi-vendor equipment difficult. A much better
solution is to purchase vendor-neutral racks with guaranteed universal compatibility. Vendor-neutral
racks allow greater flexibility when purchasing and mounting equipment, and more
standard processes for mounting and servicing equipment.
• Slow speed of deployment. The time and work involved in assembling non-standard racks, or even in
migrations and refreshes, are costly in both downtime and labor. Pre-engineered solutions save time
and simplify planning and installation.
Slide 25: Availability
The survey revealed that optimizing availability was also an important requirement. The most common
problems that pose a challenge to optimizing availability are:
1. Inadequate airflow to IT equipment damages hardware. This problem has increased over the last
few years with the dramatic increase in heat densities. And it is important to note that there is no
standard for measuring cooling effectiveness when comparing enclosures.
2. Inadequate power redundancy to the rack. The solution is to bring dual power paths to single or
dual-corded IT equipment.
3. Lack of physical security. Because of the increased demands to provide ample air, power, and data
to racks, the number of individuals accessing enclosures for service tasks has increased, leaving
the units more vulnerable to human error. Enclosures need to be physically secured with locking
doors and locking side panels to prevent unauthorized or accidental access.
4. Non-compliance with seismic requirements. The solution is for all racks located in Zone-4 regions to
comply with seismic building standards.
The following slides offer solutions for improving airflow as a means of increasing availability.
Slide 27: Improving Airflow: Blanking Panels
Blanking panels are covers that are placed over empty rack spaces. Keeping blanking panels snugly in
place prevents heated exhaust from being re-circulated and entering IT equipment intakes. The main reason
why blanking panels are not commonly used is that the benefits of blanking panels are not always
understood. People often fail to realize the cooling benefits that they provide, and mistakenly think that they
are for aesthetic purposes only or that they are difficult to install.
Slide 29: Improving Airflow: Air Distribution Unit (ADU)
This slide shows an Air Distribution Unit (ADU) installed in a rack system.
An ADU is a cooling device for raised floor applications that mounts at the bottom 2U of any EIA-310 19 inch
rack that has an open base. The blue lines represent cooling airflow. The ADU connects into the raised floor
and pulls supply air directly into the enclosure. This prevents the conditioned air from mixing with warmer
room air before reaching the equipment. The ADU minimizes temperature differences between the top and
bottom of the enclosure. It also prevents hot exhaust air from re-circulating to the inlet of the enclosure.
This is a detailed view of an ADU. An ADU is only recommended as a problem-solver for heat densities of
up to 3.5 kW per rack. An ADU is good for overcoming low ventilation pressure under raised floors.
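The applicability rule above can be reduced to a small sketch. The 3.5 kW limit and the raised-floor, open-base preconditions come from the transcript; the function itself and its parameter names are illustrative assumptions.

```python
# Illustrative applicability check for an Air Distribution Unit (ADU),
# based on the guideline quoted above: a raised-floor problem-solver for
# heat densities of up to 3.5 kW per rack, mounted in an open rack base.

ADU_MAX_KW_PER_RACK = 3.5

def adu_applicable(rack_density_kw, has_raised_floor, open_rack_base=True):
    """Return True when an ADU is a reasonable fix for a hot rack."""
    return (
        has_raised_floor
        and open_rack_base
        and rack_density_kw <= ADU_MAX_KW_PER_RACK
    )

print(adu_applicable(3.0, True))   # within the 3.5 kW guideline -> True
print(adu_applicable(5.0, True))   # too dense for an ADU alone -> False
```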
Slide 30: Improving Airflow: Side Air Distribution Unit
This slide shows a side ADU installed above a rack mounted device with side-to-side airflow.
The blue lines represent cooling airflow. The red lines represent warm airflow. The side ADU pulls air in
from the cold aisle, and redirects and distributes it to the equipment inlet, located on the right side.
The Air Removal Unit (ARU) is a scalable cooling solution: it can be added to an existing rack enclosure and requires
no internal rack space or raised floor connections to install. It replaces the rear door of an enclosure. This
example shows a unit with a redundant fan for improved availability.
Cool air enters the rack, exhausts out the rear of the rack equipment, is pulled through the Air Removal
Unit, and is released through the top.
The high-powered fans in the Air Removal Unit overcome the air resistance of cables in the rear of the
rack, and prevent exhaust air re-circulation. An optional ducted exhaust system delivers hot air to the space
above a drop ceiling or another type of enclosed overhead space, eliminating the possibility of
hot air mixing with room air.
• Racks and enclosures should comply with the Electronic Industries Alliance (EIA) 310-D standard for rack
mounting IT and networking equipment.
clustering is increasingly used in mission-critical environments. IT personnel want a solution to
centrally manage all equipment from one location.
• Lack of security at the rack level. A solution is to provide rack locks, as well as display screens and
automatic notification, to report and manage rack-level security breaches.
• Enclosures enhance rack system cooling by preventing hot and cold air from mixing
• Enclosures should be universal, modular, organized, and scalable
• Racks should be arranged to form alternating hot and cold aisles
Choosing Between Room, Row, & Rack Based Cooling For Data Centers I
Transcript
Welcome to Choosing Between Room, Row, and Rack-Based Cooling for Data Centers I.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen
controls allow you to navigate through the eLearning experience. Using your browser controls may
disrupt the normal play of the course. Click the Notes tab to read a transcript of the narration.
Slide 3: Objectives
Slide 4: Introduction
The conventional approach to data center cooling using uncontained room-based cooling has
technical and practical limitations in next-generation data centers. The need of next-generation
data centers to adapt to changing requirements, to reliably support high and variable power density,
and to reduce electrical power consumption and other operating costs has directly led to the
development of containment strategies for room, row, and rack-based cooling. These
developments make it possible to address operating densities of 3 kW per rack or greater.
© 2014 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
The conventional room-based approach has served the industry well, and remains an effective and
practical alternative for lower density installations and those applications where IT technology
changes are minimal. However, the latest generation of high-density and variable-density IT equipment
creates conditions that traditional data center cooling was never intended to address, resulting in
cooling systems that are oversized, inefficient, and unpredictable. Room, row, and rack-based
cooling methods have been developed to address these problems. This course describes these
improved cooling methods.
Slide 5: Introduction
Nearly all of the electrical power delivered to the IT loads in a data center ends up as waste heat
that must be removed to prevent over temperature conditions. Virtually all IT equipment is air-
cooled; that is, each piece of IT equipment takes in ambient air and ejects waste heat into its
exhaust air. Since a data center may contain thousands of IT devices, the result is that there are
thousands of hot airflow paths within the data center that together represent the total waste heat
output of the data center - waste heat that must be removed. The purpose of the air conditioning
system for the data center is to efficiently capture this complex flow of waste heat and eject it from
the room.
The historical method for data center cooling is to use perimeter cooling units that distribute cold air
under a raised floor with no form of containment. This is known as targeted supply and flooded
return air distribution. To learn more about these topics, please consider participating in our
Fundamentals of Cooling Architectures or our Optimizing Cooling Layouts courses.
In this approach, one or more air conditioning systems, working in parallel, push cool air into the
data center while drawing out warmer ambient air. The basic principle of this approach is that the
air conditioners not only provide raw cooling capacity, but they also serve as a large mixer,
constantly stirring and mixing the air in the room to bring it to a homogeneous average temperature,
preventing hot-spots from occurring. This approach is effective only as long as the power needed
to mix the air is a small fraction of the total data center power consumption. Simulation data and
experience show that this system is effective when the average power density in the data center is on the
order of 1-2 kW per rack, translating to 323-753 W/m² (30-70 W/ft²). Various measures can be
taken to increase power density of this traditional cooling approach, but there are still practical
limits. With power densities of modern IT equipment pushing peak power density to 20 kW per rack
or more, simulation data and experience show traditional cooling (no containment), dependent on
air mixing, no longer functions effectively.
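The two density figures above can be checked against each other with a short sketch. The conversion factor between W/ft² and W/m² is standard; the ~3.1 m² of floor area per rack (rack footprint plus aisle share) is an assumption introduced here for illustration, not a value from the transcript.

```python
# Illustrative reproduction of the density figures quoted above:
# 1-2 kW per rack corresponds to roughly 323-753 W/m² (30-70 W/ft²).

FT2_PER_M2 = 10.7639  # square feet per square metre

def w_per_m2(kw_per_rack, floor_area_per_rack_m2=3.1):
    """Convert an average rack power density to a floor-area density.

    The default floor area per rack is an assumed figure covering the
    rack footprint plus its share of aisle space.
    """
    return kw_per_rack * 1000.0 / floor_area_per_rack_m2

def w_per_ft2_to_w_per_m2(w_per_ft2):
    return w_per_ft2 * FT2_PER_M2

print(round(w_per_m2(1.0)))              # 1 kW per rack -> ~323 W/m²
print(round(w_per_ft2_to_w_per_m2(30)))  # 30 W/ft² -> ~323 W/m²
print(round(w_per_ft2_to_w_per_m2(70)))  # 70 W/ft² -> ~753 W/m²
```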
To address this problem, design approaches exist that focus on room, row, and rack-based cooling.
In these approaches the air conditioning systems are specifically integrated with the room, rows of
racks, or individual rack in order to minimize air mixing. This provides much better predictability,
higher density, higher efficiency, and a number of other benefits. In this course, the various
approaches are explained and contrasted. We will show that each of the three approaches has an
appropriate application, and in general a trend toward row-based cooling for smaller data centers
and high density zones and toward room-based cooling with containment for larger data centers
should be expected.
Every data center air conditioning system has two key functions: to provide the bulk cooling
capacity, and to distribute the air to the IT loads. The first function of providing bulk cooling
capacity is the same for room, row, and rack-based cooling, namely, that the bulk cooling capacity
of the air conditioning system in kilowatts must exhaust the total power load (kW) of the IT
equipment. The various technologies to provide this function are the same whether the cooling
system is designed at the room, row, or rack level. The major difference between room, row, and
rack-based cooling lies in how they perform the second critical function, distribution of air to the
loads. Unlike power distribution, where flow is constrained to wires and clearly visible as part of
the design, airflow is only crudely constrained by the room design; the actual airflow is not visible
in the implementation and varies considerably between installations. Controlling the
airflow is the main objective of the different cooling system design approaches.
Here we can see the three basic configurations depicted in generic floor plans. The green square
boxes represent racks arranged in rows, and the blue arrows represent the logical association of
the computer room air handler (CRAH) units to the loads in the IT racks. The actual physical
layout of the CRAH units may vary. With room-based cooling, the CRAH units are associated with
the room; with row-based cooling, the CRAH units are associated with rows or groups of racks; and with
rack-based cooling, the CRAH units are associated with individual racks.
With room-based cooling, the CRAH units are associated with the room and operate concurrently
to address the total heat load of the room. Room-based cooling may consist of one or more air
conditioners supplying cool air completely unrestricted by ducts, dampers, or vents, or the supply
and/or return may be partially constrained by a raised floor system or overhead return plenum. For
more information on these topics, please consider participating in our course: Fundamentals of
Cooling Architecture I: Heat Removal Methods.
During design, the attention paid to the airflow typically varies greatly. For smaller rooms, racks
are sometimes placed in an unplanned arrangement, with no specific planned constraints to the
airflow. For larger more sophisticated installations, raised floors may be used to distribute air into
well-planned hot-aisle / cold aisle layouts for the express purpose of directing and aligning the
airflow with the IT cabinets.
The room-based design is heavily affected by the unique constraints of the room, including the
ceiling height, the room shape, obstructions above and under the floor, rack layout, CRAH location,
and the distribution of power among the IT loads. When the supply and return paths are
uncontained, the result is that performance prediction and performance uniformity are poor,
particularly as power density is increased. Therefore, with traditional designs, complex computer
simulations called computational fluid dynamics (CFD) may be required to help understand the
design performance of specific installations. Furthermore, alterations such as IT equipment moves,
adds, and changes may invalidate the performance model and require further analysis and/or
testing. In particular, the assurance of CRAH redundancy becomes a very complicated analysis
that is difficult to validate. Here we can see an example of a traditional room-based cooling
configuration.
Another significant shortcoming of uncontained room-based cooling is that in many cases the full
rated capacity of the CRAH cannot be utilized. This condition occurs when a significant fraction of
the air distribution pathways from the CRAH units bypass the IT loads and return directly to the
CRAH. This bypass air represents CRAH airflow that is not assisting with cooling of the loads; in
essence a decrease in overall cooling capacity. The result is that the cooling requirements of the IT
layout can exceed the effective cooling capacity of the CRAH units, even though their combined
nameplate capacity meets the requirement.
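The bypass effect described above is easy to quantify. The load, nameplate, and bypass figures below are illustrative assumptions, not measured values from the transcript.

```python
# Illustrative model of bypass air: CRAH airflow that returns to the unit
# without passing through the IT loads does no useful cooling, so the
# effective capacity falls below the nameplate rating.

def effective_cooling_kw(nameplate_kw, bypass_fraction):
    """Cooling that actually reaches the IT loads after bypass losses."""
    return nameplate_kw * (1.0 - bypass_fraction)

it_load_kw = 300.0
crah_nameplate_kw = 350.0  # nominally more than the IT load

no_bypass = effective_cooling_kw(crah_nameplate_kw, 0.0)
with_bypass = effective_cooling_kw(crah_nameplate_kw, 0.25)

print(no_bypass)                      # 350.0 kW reaches the loads
print(with_bypass)                    # 262.5 kW reaches the loads
print(with_bypass >= it_load_kw)      # False: shortfall despite nameplate margin
```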
For new data centers greater than 200 kW, room-based cooling should be specified with hot-aisle
containment to prevent the issues we just discussed. This method is effective with or without a
raised floor and the cooling units can either be located inside the data center or outdoors. For
existing data centers with room-based raised-floor cooling, cold aisle containment is recommended,
since it is typically easier to implement. Both hot and cold aisle containment are being used to
minimize mixing in data centers. Each of these solutions has its own unique advantages that are
described in further detail in our course Optimizing Cooling Layouts for the Data Center. Here we
can see two examples of a next-generation room-based cooling.
With a row-based configuration, the CRAH units are associated with a row and are assumed to be
dedicated to a row for design purposes. The CRAH units may be located in between the IT racks
or they may be mounted overhead. Compared with the traditional uncontained room-based cooling,
the airflow paths are shorter and more clearly defined. In addition, airflows are much more
predictable, all of the rated capacity of the CRAH can be utilized, and higher power density can be
achieved.
Row-based cooling has a number of side benefits other than cooling performance. The reduction
in the airflow path length reduces the CRAH fan power required, increasing efficiency. This is not a
minor benefit, when we consider that in many lightly loaded data centers the CRAH fan power
losses alone exceed the total IT load power consumption.
A row-based design allows cooling capacity and redundancy to be targeted to the actual needs of
specific rows. For example, one row of racks can run high density applications such as blade
servers, while another row satisfies lower power density applications such as communication
enclosures. Furthermore, N+1 or 2N redundancy can be targeted at specific rows.
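Targeting redundancy at specific rows can be sketched as a simple sizing rule. The per-unit capacity and row loads below are assumed figures for illustration; only the N+1 and 2N definitions come from the text.

```python
import math

# Illustrative sizing of row-based CRAH units with targeted redundancy.
# N is the number of units needed to carry the row's load; N+1 adds one
# spare unit, and 2N doubles the count.

def crah_units_needed(row_load_kw, unit_capacity_kw, redundancy="N"):
    n = math.ceil(row_load_kw / unit_capacity_kw)  # N units to carry the load
    if redundancy == "N+1":
        return n + 1
    if redundancy == "2N":
        return 2 * n
    return n

# Assumed 25 kW per CRAH unit: a dense blade row vs. a light comms row.
print(crah_units_needed(60.0, 25.0, "N+1"))  # blade-server row -> 4 units
print(crah_units_needed(20.0, 25.0, "2N"))   # communications row -> 2 units
```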
For new data centers less than 200 kW, row-based cooling should be specified and can be
implemented without a raised floor. For existing data centers row-based cooling should be
considered when deploying higher density loads (5 kW per rack and above). Our course,
Deploying High-Density Pods in a Low Density Data Center, discusses the various approaches for
deploying high density zones in an existing data center. Here we can see examples of row-based
cooling.
Both of the cooling systems shown here can also be configured as a hot-aisle containment system
that extends the power density capability. This design further increases the performance
predictability by eliminating any chance of air mixing.
The simple and pre-defined layout geometries of row-based cooling give rise to predictable
performance that can be completely characterized by the manufacturer and is relatively immune
to the effects of room geometry or other room constraints. This simplifies both the specification
and the implementation of designs, particularly at densities over 5 kW per rack.
With rack-based cooling, the CRAH units are associated with a rack and are assumed to be
dedicated to a rack for design purposes. The CRAH units are directly mounted to or within the IT
racks. Compared with room-based or row-based cooling, the rack-based airflow paths are even
shorter and exactly defined, so that airflows are totally immune to any installation variation or room
constraints. All of the rated capacity of the CRAH can be utilized, and the highest power density
(up to 50 kW per rack) can be achieved. Here we can see an example of rack-based cooling.
Similar to row-based cooling, rack-based cooling has other unique characteristics in addition to
extreme density capability. The reduction in the airflow path length reduces the CRAH fan power
required, increasing efficiency. As we mentioned, this is not a minor benefit considering that in
many lightly loaded data centers the CRAH fan power losses alone exceed the total IT load power
consumption.
A rack-based design allows cooling capacity and redundancy to be targeted to the actual needs of
specific racks, for example, different power densities for blade servers vs. communication
enclosures. Furthermore, N+1 or 2N redundancy can be targeted to specific racks. By contrast,
row-based cooling only allows these characteristics to be specified at the row level, and room-
based cooling only allows these characteristics to be specified at the room level.
As with row-based cooling, the deterministic geometry of rack-based cooling gives rise to
predictable performance that can be completely characterized by the manufacturer. This allows
simple specification of power density and design to implement the specified density. Rack-based
cooling should be used in all data center sizes where cooling is required for stand-alone high-
density racks. The principal drawback of this approach is that it requires a large number of air
conditioning devices and associated piping when compared to the other approaches, particularly at
lower power density.
Nothing prevents the room, row, and rack-based cooling from being used together in the same
installation. In fact, there are many cases where mixed use is beneficial. Placing various cooling
units in different locations in the same data center is considered a hybrid approach, as we can see
here. This approach is beneficial to data centers operating with a broad spectrum of rack power
densities.
Another effective use of row and rack-based cooling is for density upgrades within an existing low
density room-based design. In this case, small groups of racks within an existing data center are
outfitted with row or rack-based cooling systems. The row or rack cooling equipment effectively
isolates the new high density racks, making them “thermally neutral” to the existing room-based
cooling system. In fact, it is quite likely that this will have a net positive effect by actually adding
cooling capacity to the rest of the room. In this way, high density loads can be added to an existing
low density data center without modifying the existing room-based cooling system. When deployed,
this approach results in the same hybrid cooling depicted here.
Another example of a hybrid approach is the use of a chimney rack cooling system to capture
exhaust air at the rack level and duct it directly back to a room-based cooling system. This system
has some of the benefits of a rack-based cooling system but can integrate into an existing or
planned room-based cooling system. Here we can see an example of this equipment.
Let’s review some of the information that we have covered in this course.
The historical method for data center cooling is to use perimeter cooling units that distribute cold air
under a raised floor with no form of containment. This is no longer useful or beneficial in today’s
high density data centers.
There are four basic cooling configurations for how air is distributed to the IT loads:
• Room cooling methods, which are beneficial to new data centers greater than 200 kW. It's
important to note that room-based cooling should be specified with hot-aisle containment to
prevent bypass and air-mixing issues.
• Row cooling methods, which are valuable to new data centers less than 200 kW. Row-based
cooling allows cooling capacity and redundancy to be targeted to the actual needs of specific rows.
• Rack-based cooling methods, which allow cooling capacity and redundancy to be targeted to the
actual needs of specific racks.
• Hybrid cooling methods, which use room, row, and rack cooling together in combination. This cooling
method is beneficial to data centers operating with a broad spectrum of rack power densities and
supports density upgrades within an existing low-density room-based design.
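The selection guidance summarized in this course can be reduced to a simple rule of thumb for new data centers. The 200 kW threshold and the stand-alone high-density case come from the transcript; the function itself is an illustrative sketch, not a substitute for a proper cooling design.

```python
# Illustrative rule of thumb for choosing a cooling architecture for a new
# data center, following the guidance summarized above.

def suggest_cooling(total_load_kw, standalone_high_density_racks=False):
    """Return a suggested cooling approach for a new data center."""
    if standalone_high_density_racks:
        # Rack-based cooling applies at any size for isolated dense racks.
        return "rack-based (stand-alone high-density racks)"
    if total_load_kw > 200:
        return "room-based with hot-aisle containment"
    return "row-based (no raised floor required)"

print(suggest_cooling(500))   # larger new data center
print(suggest_cooling(120))   # smaller new data center
print(suggest_cooling(80, standalone_high_density_racks=True))
```

Mixing these outcomes within one facility corresponds to the hybrid approach described above.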