
Volume 23 | May 2012

Designing a Mega
Data Center

Sign up to receive the DCJ Magazine by registering at www.datacenterjournal.com.
FEATURE

2   The Rise of the Mega Data Center
    by Jeffrey R. Clark, Ph.D.
    Mega data centers sprawl over hundreds of thousands of square feet and can exceed 10 megawatts of power, with some approaching a million square feet or 100 megawatts. But what drives companies to build these gargantuan facilities? Ultimately, as in almost any business trend, you need only follow the money.

DESIGN Corner

14  Designing a Mega Data Center
    by Don Beaty
    The optimal designs for mega data centers require a holistic approach, since there is so much to be potentially gained by considering a broader range of tradeoffs across all fields.

20  Steps to Moving Towards Building Systems Integration
    by Alan Dash

24  Top 10 Green Features of a Mega Data Center
    by Dan Prows

FACILITY Corner

6   How to Justify the Cost of New Backup and DR Solutions
    by Chris Poelker
    In order to understand whether outsourcing backup and DR to a cloud provider makes sense, you need to understand how much it costs you today to provide a similar service level. The same holds true for determining whether the cost of that new solution is internal or outsourced.

10  10 Power Protection Myths Debunked: An Uninterruptible Separation of Fact and Fiction
    by John Collins
    Budgeting for electricity, securing adequate supplies of it and finding ways to use less of it are all common topics of conversation among data center operators. Ensuring the power their IT resources rely on is both dependable and clean, sadly, can sometimes be an afterthought.

IT Corner

26  Some Data Centers Are Getting Smaller?
    by Jeffrey R. Clark, Ph.D.
    At the top of the data center world are the mega data centers, facilities whose massive scale dwarfs the computer closets (by comparison) of most companies. But the trend toward larger data centers among some large companies contrasts with another trend elsewhere in the industry: many data centers are getting smaller.

IT Business

32  How to Make Private Cloud Initiatives Matter to Your CEO
    by Jason Cowie
    Chief information officers understand private clouds and the benefits associated with the technology, but they still need to sell the initial investment to other executives: CEOs and chief financial officers. To C-level leaders, the CIO's ability to explain concrete return on investment and demonstrated value makes all the difference between embracing cloud computing and abandoning cloud initiatives that could have been hugely beneficial.

35  Make the Most of Application Delivery Controller Deployments
    by Jesse Rothstein
    Over the past decade, application delivery controllers (ADCs) have become an integral part of the IT infrastructure. I played a role in developing ADC technology at F5 Networks, where I led the design of the BIG-IP v9 product and TMOS platform. Based on my previous experience with ADCs and my current focus on application performance management (APM), I have distilled nine best practices for maximizing ADC performance.

37  Report Analytics: Complementing Business Intelligence for Data-Driven Business Decisions
    by Michael Morrison

IT Ops

30  Natural Disasters Are Not the Only Reason to Have a Disaster Recovery Plan
    by Robert Gast
    With a continuous stream of factors necessitating both planned and unplanned downtimes, having a plan in place for all IT disasters is essential to ensure quality service, regardless of the circumstance, in today's IT-dependent world.

Your Turn

40  Strategies for Data Center Consolidation, Part 2 of 3: White Space
    by Ed Spears

Calendar
Vendor Index

All rights reserved. No portion of DATA CENTER Journal may be reproduced without written permission from the Executive Editor. The management of DATA CENTER Journal is not responsible for opinions expressed by its writers or editors. We assume that all rights in communications sent to our editorial staff are unconditionally assigned for publication. All submissions are subject to the unrestricted right to edit and/or to comment editorially.

AN EDM2R ENTERPRISES, INC. PUBLICATION | ALPHARETTA, GA 30022
PHONE: 678-762-9366 | FAX: 866-708-3068 | WWW.DATACENTERJOURNAL.COM

DESIGN: NEATWORKS, INC. | ALPHARETTA, GA 30022
TEL: 678-392-2992 | WWW.NEATWORKSINC.COM
FEATURE
story

Mega data centers sprawl over hundreds of thousands of square feet and can exceed 10 megawatts of power, with some approaching a million square feet or 100 megawatts. But what drives companies to build these gargantuan facilities? Ultimately, as in almost any business trend, you need only follow the money.

The Rise of the MEGA Data Center
by Jeffrey R. Clark, Ph.D.



The Logical Result of Consolidation

Virtualization has taken the data center industry by storm, and it is closely followed by its cousin: consolidation. Companies have sought to reduce infrastructure (and thus capital and operating costs) by focusing IT operations onto fewer, more highly utilized machines. Naturally, then, this process leads to a broader view of data centers in general: large companies operating multiple data centers may choose to focus their facilities into fewer and ever larger implementations in hopes of reducing costs and complexity. Depending on their locations, implementing fewer mega data centers may enable a company to take advantage of certain local benefits, like (relatively) low energy prices, tax incentives, climate, or availability of alternative energy sources.

Ultimately, mega data centers are the result of a push to minimize cost and thereby maximize profit, although the reasons that companies like Apple, Facebook, Google, Microsoft, Oracle and other organizations (like the U.S. federal government) give for moving toward a small number of giant data centers instead of a hodgepodge of smaller facilities are not always expressed in terms of dollars. Motivations like a reduced environmental impact (say, from locating a data center in a lower-temperature climate to reduce energy spent on cooling), improved green reputation, availability of alternative power sources (like hydroelectric or geothermal) and others may be trotted out. And to some extent, certain companies may pursue these at the expense of some profit, but companies are ultimately in business to make money, and that motivation will underlie most of their actions, regardless of any stated reasons.

Thus, data center consolidation, which is driven by the desire to eliminate unneeded infrastructure and to reduce energy consumption, thereby lowering capital and operational costs, is generally motivated by the same or similar factors as consolidation at the server or rack level. The following are some specific reasons for this trend.

Reduced complexity

A distribution of IT resources across multiple locations has its benefits, but it also has concomitant complexities. The general trend of consolidation seeks to reduce complexity through centralization, maintaining a more singular management structure. And not surprisingly, a less complex infrastructure is generally less expensive to build and to operate. Simpler is (often) cheaper, but for large companies to meet their IT needs with just a few data centers, those facilities must be very large.

Local incentives

The operating costs of servers over their lifetimes now outweigh their associated capital costs, meaning that the price of energy is as great a concern as the cost of the equipment. So when an opportunity arises to tap into low energy prices or other cost incentives, it pays companies to do so at large scale. In particular, low energy prices are a tremendous draw, particularly when utilities provide long-term price guarantees. Another major enticement is local tax incentives. Many governments, thanks to profligate spending that has earned them crushing debt loads, are looking for any and all sources of revenue to keep operating. Mega data centers bring the potential for a flood of property tax revenue, as well as jobs and an impetus for further development, to such locations. Thus, these governments are often willing to offer breaks on property taxes or sales taxes for equipment. And companies are often ready to oblige by siting their mega data centers in such locations and taking advantage of these savings. Again, the larger the scale of the deployment, the greater the savings: companies can in effect stock up on data center capacity, just as any good shopper would.

Free cooling

Although ASHRAE guidelines enable free cooling year round for many locations, these benefits only apply to certain allowable temperature classes. For the recommended range (even though it was recently expanded), some locations offer greater potential for free cooling than others. These conditions thus push the trend toward centralization. By locating facilities in fewer deployments, preferably in cooler climates, companies can scale their savings. Why not build a single large data center in a cool climate with relatively low energy rates and tax incentives (if you can grab all of those in the same place!) instead of a number of data centers in a variety of locations having a variety of economic and other conditions?



Alternative energy

Locations with a solid source of alternative energy (like hydroelectric power) enable companies to expand their green image. The greater their percentage of reliance on an alternative energy source, the less their percentage of reliance on coal, and that pleases many environmental groups, in addition to being a PR bonus. As with the other factors above, scale is critical: a larger data center means a greater ability to exploit the advantages of a particular area.

The Cloud

The cloud is another major driver of mega data centers, perhaps the main driver. Indeed, although they are not always precisely the same thing, cloud data centers and mega data centers are two categories that overlap significantly. The cloud is essentially an outsourcing of IT resources to another (potentially undefined) location. From the service provider perspective, the cloud is a means of garnering value from a pool of resources by allocating those resources among some number of customers. In the case of a private cloud, there may be just one customer, but the approach still makes sense (in theory, anyway), since a central resource pool is divided among users, thus saving the idle time (and thus energy waste) associated with numerous distributed resource pools. The public cloud is the same, conceptually: it just expands the customer pool.

But with a larger potential base of customers comes greater demand, and hence the need for more resources. Public cloud infrastructure is therefore, almost by definition, necessarily larger in scale. Large companies thus seek to scale profits by scaling their data centers. One of the keys to success in this regard is commoditization of equipment: lower capital expenses mean less cost to amortize over the customer base.

The need for cloud data centers, which is leading some large companies to build their gargantuan mega data centers, is in part driven by the ever-expanding demand for mobility. Cell phones, tablets and other mobile devices are building in faster (and often multicore) processors, giving them greater capabilities and making them more like computers than, for example, just phones with a few nice extra features. But as consumers, both inside and beyond enterprises, rely more heavily on their mobile devices, they run into the problem of syncing data across their numerous electronic gadgets. The cloud posits itself as the logical solution: it can store data remotely, and users can access it anytime and from anywhere, as long as they have an Internet connection.

In addition to just data storage capability, however, bandwidth demands are increasing quickly, driven heavily by video content. Infrastructure is needed to support this growing demand, and large companies are in a position to capitalize on it through economies of scale.

Balancing Forces

Not everything favors the trend toward mega data centers. Although the factors listed above provide plenty of motivation, some difficulties may arise. Energy prices, for instance, are seldom low in urban areas, but cities are more likely to offer wide talent pools for staffing data centers, as well as broader infrastructure, including power and networking bandwidth. To some extent, the employee pool is not a tremendous problem: locating a data center in a rural area may pose some restrictions on hiring, but many professionals are willing to relocate for a good employment opportunity. Indeed, rural areas offer some benefits to workers, such as a lower cost of living. And the benefits of rural areas, including greater availability and lower cost of land, make these locations irresistible to many companies, hence the regular news updates about a new mega data center being built in a virtually unheard-of town.

In addition, data centers may have limited leverage when it comes to obtaining tax incentives. Much of the economic development factor may be limited to symbolism: less developed (rural) regions may be able to cite a new giant data center as proof of growth, but the gains in terms of jobs are often less than the size of the buildings would indicate. Data centers are heavily automated, so 100,000 square feet of data center space doesn't necessarily mean hundreds of new jobs in the area. As such, tax incentives and other gimmicks from local governments may be a passing fad, particularly as these governments become increasingly insolvent in a down economy that simply doesn't provide the tax revenues that it once did. Other drivers may still propel the growth of mega data centers, but tax incentives may become less of a factor.

Conclusions

Mega data centers are ultimately the result of consolidation in search of a more profitable operating structure, one that takes advantage, whenever possible, of local incentives at large scale. Beyond factors like local tax incentives, low energy prices, climate advantages (to enable free cooling) and alternative energy sources (for company PR purposes), the cloud is the major driver of the centralization of large companies' IT resources into mega data centers. Seeking to exploit commodity hardware at tremendous scales, these facilities target broader customer bases than data centers designed to serve just a single company internally. The public cloud enables these companies to essentially sell idle time on their resources to ensure maximum utilization, avoiding the operating costs (and wasted capital) of unused equipment. In some sense, then, the cloud enables a large company to pay for its data center operation, and maybe profit a little bit, too.

The cloud is in turn driven by increasing demand for mobile services. As such, the locations of these data centers are not relevant per se, enabling companies to seek areas that provide the best combination of incentives and conditions. Furthermore, the constantly growing demand for bandwidth leaves plenty of room for companies to implement ever-larger capacities in their data centers in an effort to meet this demand, and to profit from it. Perhaps the only question is how large these data centers will become. With the largest facilities operating in the 100-megawatt range, talk of gigawatts seems to be on the horizon. Already, facilities with millions of square feet are no longer overly surprising (although they are still impressive). And economic instability in western nations, which could potentially stem growth in the data center sector, seems counterbalanced by growth in the emerging nations of the east. In particular, Asian nations like China and India are seeing growth in business and consumer demand for resources, and they are also seeing their own crop of mega data centers. No doubt, the trend toward consolidation into huge data centers will continue, at least among some very large companies.





FACILITY
corner

How to Justify the Cost of New Backup and DR Solutions
by Chris Poelker

I feel your pain. Your current backup application came out when PONG was all the rage in computer games, and you are keeping it running with spit, glue, duct tape, baling wire, and many long nights and weekends. You have no real disaster recovery (DR) strategy to speak of other than shipping tapes off site, and the last time you really tested DR with any reasonable hope of success was when your kids were still in grammar school. The amount of data you need to protect is doubling, your tape drives work occasionally, and the person who wrote all the scripts to provide at least some level of automation just found another job. Happy New Year. Although this sounds like a comedy of errors, these are the cards that are dealt to many a backup administrator.

An organization's data is its lifeblood. Many corporate and government executives understand this but are still unwilling to allocate the right budget to the IT department to protect it. Why?

Backup and DR are insurance policies. They do nothing to enhance a business organization's bottom line or its mission. The process is a necessary evil, just like going to the bathroom. If IT managers could, they would enjoy the meal and pay someone else to go potty for them. Why do you think cloud computing is making an impact? It allows IT to outsource the hard stuff to someone else so the core team can focus on the mission at hand, which brings up the reason I am writing this.

In order to understand whether outsourcing backup and DR to a cloud provider makes sense, you need to understand how much it costs you today to provide a similar service level. The same holds true for determining whether the cost of that new solution is internal or outsourced. If the solution will replace your old one and make your quality of life much better, how do you justify the cost to accounting? The goal is to create a win/win message for finance: yes, the solution does cost $X, and it will enable me to stay home drinking beer at night, but I can prove it will save us $Y in the long run while providing a much more robust recovery paradigm for our applications.

Here are a few handy tips to validate any new technology solution and determine whether the cost of implementing it would be justified.

Top five issues in analyzing backup and DR costs:

1) The backup target:
Tape sucks for recovery, but it's great for archives. For every 20 terabytes of production data with a 2 percent change rate and 3 percent growth rate, using a traditional backup operation of daily incrementals and weekly full backups, you need to manage 110 terabytes of tape media. Therefore, a shop with 100 terabytes would need 550 terabytes of tape (five 20-terabyte increments x 110 terabytes = 550 terabytes). Your goal should be to include tape as an archive storage tier, so you can minimize the need to recover from tape.

2) Infrastructure and time:
The time required to back up your data center is equal to the current capacity of all the applications needing protection divided by the performance, in terabytes per hour, of your current backup solution. The performance metrics must include the entire data path as a whole: network, backup server, storage, and backup target (tape or disk). You could buy the fastest backup target on the planet, but if you can't feed it fast enough, it doesn't help a bit. Remember that the time to recover using traditional approaches is usually twice the time to back up.

3) The network:
Your wide area network (WAN) needs to be robust enough to handle the amount of data change you have on a daily basis. WAN bandwidth should be measured for peak loads so that critical batch processing periods, such as end-of-month or end-of-year processing, are not at risk. This is why WAN costs can be the most expensive part of a good DR strategy.

4) The building and the plan:
The DR target site must be able to handle peak workloads, even if in a degraded fashion, for all the critical applications the company requires; and it must be available in a timeframe that supports continuity of operations at minimal impact to the business.




This is a hard number to determine, so using readily available information from industry analysts can help you determine the hourly cost of downtime for your industry.

Table 1 shows an example of lost-revenue data based on publicly available information. It may be outdated, but it is useful for at least determining a baseline outage cost. More accurate and up-to-date data can be obtained from the major analyst firms.

Table 1: Example of Lost Revenue

Industry                    Lost Revenue per Hour (U.S. dollars)
Energy                      2,800,000
Telecom                     2,000,000
Manufacturing               1,600,000
Finance                     1,500,000
Information Technology      1,350,000
Insurance                   1,200,000
Retail                      1,100,000
Pharmaceutical              1,100,000
Chemical                    700,000
Transport                   670,000
Utilities                   640,000
Healthcare                  640,000
Media                       340,000

5) Intelligent recovery:
You need to have the right infrastructure at the DR site to handle operations. And you need to understand how to bring back up applications that have crashed so they actually work. There are typically many interdependencies between application servers and databases for most modern applications, and they need to be brought up on the right platform in the correct sequence. Clients need to be able to connect to the new servers as if they were connecting to the old servers, so infrastructure servers such as directory services, network routers, and phone lines, if required, need to be available.

Let's take each of these issues one at a time. We'll use a phased approach to optimize the process and reduce current costs, so you can justify a new solution to replace your existing systems.

Phases to optimize backup and DR:

1. Add data deduplication to your backup operations to reduce backup costs and optimize replication.
2. Implement snapshots and continuous data protection (CDP) to eliminate bulk data movement for backup and DR.
3. Leverage WAN optimization and delta versioning with encryption to reduce risk and WAN requirements by 90 percent.
4. Use CDP to reduce recovery times to a few minutes and eliminate data loss.
5. Virtualize your storage and servers to reduce infrastructure costs.

How to calculate the benefits:

Phase 1: Add dedupe
These calculations do not cover the cost of replacing backup software, as it is difficult to rip and replace. Just remember that, using standard backup software, for every 20 terabytes of production data you may need 110 terabytes of tape, based on your data retention requirements.

Assuming LTO-3 drives and 20 terabytes of production data:
110 TB / 400 GB (capacity of LTO-3) = 112,640 / 400 = 282 tapes
282 x $70 = $19,740
80 MB/s (speed of LTO-3) = 6.91 TB per day per drive
To back up 20 TB in a 12-hour window, you need 6 drives
6 drives x $4,500 = $27,000
Total = $46,740 for each 20 TB
Price to provide 40 TB of tape capacity = $46,740 (base costs) + $19,740 (media) = $66,480

Dedupe to the rescue:
Implement a single 4 TB dedupe appliance that will hold 40 TB at a 10:1 ratio = approx. $17,700
$66,480 - $17,700 = $48,780 in savings, or 375 percent!

Purchase another dedupe solution to replicate the data off site to eliminate off-site tape movement and recall costs, eliminate array-based replication licenses for that data, and reduce the WAN requirements to replicate that data by 90 percent (a 10:1 dedupe ratio). Deduped replication savings:
• Cost of the off-site tape storage contract - $17,700 = net savings
• Cost of array licenses and storage for replication = array license, storage, and maintenance savings
• WAN costs to replicate 20 TB of data versus 2 TB of data (10:1 dedupe ratio) = annual WAN savings

Phase 2: Implement snapshots and CDP
Let's say your current backup window is 12 hours for your 20 terabytes of production data to tape. Adding more tape drives may not fix the problem if the network is your bottleneck. Implementing snapshot-based backup will reduce the backup window from the current 12 hours to a couple of seconds. Moving from traditional bulk backup to snapshot backup eliminates the physical data movement from Point A to Point B to copy the data, which means the physics of the process changes. With snapshots, you can back up more often, and data recovery is typically much faster. CDP takes that paradigm even further by moving from more frequent periodic backup to always backed up, which means there is zero data loss.

Now let's calculate the benefits of implementing Phase 2. Assuming the cost of downtime is calculated using the numbers provided in my chart for a media company, and an average recovery time of only four hours to rebuild a critical server, the calculations are as follows:
$340,000 x 4 hours = $1,360,000, minus the cost of the new CDP solution (assume 2 sites at $100,000 per location = $200,000)
$1,360,000 - $200,000 = $1,160,000
Recovery time for the CDP solution = 30 minutes ($340,000 / 2 = $170,000)
$1,160,000 - $170,000 = $990,000 savings per outage
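If you want to rerun this arithmetic against your own environment, it fits in a few lines of Python. The sketch below simply reproduces the example figures above (LTO-3 capacity and throughput, $70 tapes, $4,500 drives, a roughly $17,700 dedupe appliance, and Table 1's $340,000 per hour for a media company); every price and rate is illustrative, so substitute your own capacities, windows, and costs.

```python
import math

# Back-of-envelope recap of the Phase 1 and Phase 2 examples above.
# All prices and rates here are the article's illustrative figures, not quotes.

def tapes_needed(media_tb, tape_gb=400):
    """Tapes required to hold the retained backup media (LTO-3 = 400 GB)."""
    return math.ceil(media_tb * 1024 / tape_gb)

def drives_needed(data_tb, window_h=12, drive_mb_s=80):
    """Drives required to move data_tb of production data within the backup window."""
    tb_per_drive = drive_mb_s * 3600 * window_h / 1_000_000  # MB -> TB
    return math.ceil(data_tb / tb_per_drive)

# Phase 1: 20 TB of production data retained as ~110 TB of tape media.
tapes = tapes_needed(110)                  # -> 282 tapes
drives = drives_needed(20)                 # -> 6 drives for a 12-hour window
base_cost = tapes * 70 + drives * 4500     # -> $46,740 per 20 TB
cost_40tb_tape = base_cost + tapes * 70    # second 20 TB reuses the same drives
dedupe_appliance = 17_700                  # 4 TB appliance at a 10:1 ratio
print(f"tape: ${cost_40tb_tape:,}  dedupe: ${dedupe_appliance:,}  "
      f"savings: ${cost_40tb_tape - dedupe_appliance:,}")        # ~ $48,780

# Phase 2: downtime avoided by cutting recovery from 4 hours to 30 minutes.
downtime_per_hour = 340_000                # Table 1, media industry
cdp_solution = 2 * 100_000                 # two sites
savings_per_outage = (downtime_per_hour * 4) - cdp_solution - (downtime_per_hour * 0.5)
print(f"CDP savings per outage: ${savings_per_outage:,.0f}")      # ~ $990,000
```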



Phase 3: WAN optimization
If you have already purchased the dedupe solution in Phase 1, you may not even need to purchase anything else to reduce WAN requirements. Many have found that disk-to-disk replication from a CDP or dedupe solution enables them to turn off their current expensive array-based replication systems. CDP solutions tend to replicate data very efficiently, and dedupe reduces WAN requirements by 90 percent or more. Let's say you can turn off just half of the WAN bandwidth you currently need to replicate your 20 terabytes: that's a 50 percent WAN savings over the current solution.

Phase 4: Use CDP to speed recovery and eliminate data loss
The benefits of Phase 4 are already outlined in the savings of Phase 2. The added benefit of reducing data loss to zero is more difficult to calculate, because the savings depend on the type of business. As an example, in a hospital, loss of data may mean loss of life. If you are running a trading application for a major stock exchange, a single second may include thousands of individual financial transactions worth millions of dollars. If your company writes software, an outage may mean hours of lost coding that may be irreplaceable. I recommend using an industry chart to determine the benefits of reducing the possibility of data loss and the improved recovery times provided by CDP.

Phase 5: Virtualize servers and storage
Implementing virtualization can have a huge impact in multiple areas:

• Server virtualization commoditizes servers and enables server consolidation.
• Storage virtualization commoditizes storage and enables complete data mobility.
• The ability to replicate data between unlike storage reduces storage costs by up to 50 percent.
• Consolidating multiple applications onto virtual servers reduces infrastructure costs by 90 percent.
• Dense virtual storage and servers reduce data center power, cooling, and floor space requirements.

For example, a single blade server may be able to run 50 to 100 applications versus 50 to 100 physical servers. Virtual storage simplifies provisioning and enables storage pooling and tiering.

With the right data, you can justify your purchase of innovative technology that can optimize your IT infrastructure to help the business and its mission. And an additional benefit of that purchase just might be reduced stress and some free time to take in a game over the weekend!

About the Author: FalconStor VP of Enterprise Solutions Christopher Poelker is a highly regarded storage expert with decades of experience, including positions as storage architect at HDS and lead storage architect at Compaq. Poelker also worked as an engineer and VMS consultant at Digital Equipment Corporation. Recently, Poelker served as deputy commissioner of the TechAmerica Foundation Commission on the Leadership Opportunity in U.S. Deployment of the Cloud (CLOUD).



FACILITY
corner

10 Power Protection
Myths Debunked
An Uninterruptible Separation of Fact and Fiction
by John Collins

Budgeting for electricity, securing adequate supplies of it and finding ways to use less of it are all common topics of conversation among data center operators. Ensuring the power their IT resources rely on is both dependable and clean, sadly, can sometimes be an afterthought.

The truth is, the problems and risks associated with power are intensifying as virtualization and movement to the cloud place an increasing amount of stress on storage systems, servers and network devices, which use components so miniaturized that they fault and fail under power conditions that earlier-generation equipment easily withstood.

At one point in time, IT merely played a supporting role in the enterprise. Now, however, it is absolutely central to how most companies compete, and when IT systems are down, core business processes quickly come to a standstill. The cost of power and cooling has spiraled out of control in recent years, and data center managers are typically held responsible for achieving high availability while simultaneously reducing power costs. These can be two very difficult and conflicting goals to manage; however, highly efficient UPS systems can help, and products are available today that were not an option even a few years ago.

In order to rationalize the costs associated with such upgrades, and to properly select a system for your facility's needs, common misconceptions associated with UPS systems must first be debunked. The resulting realities, shaped by industry trends and technology advances, are often complicated by marketing hype, which can make the decision very difficult.

The following addresses a few particular myths we've heard, and uses the subsequent truths to outline best practices for selecting a power protection solution for any IT application.

MYTH #1: Utility power is clean and reliable

Reality: Utility power isn't clean, nor is it 100 percent reliable.

By law, electrical power can vary widely enough to cause significant problems for IT equipment. According to current U.S. standards, for example, voltage can legally vary from +5.7 percent to -8.3 percent under absolute specifications. That means a utility service promising 208-volt power may actually deliver anywhere from 191 to 220 volts.

And, in the U.S., utility power is only 99.9 percent reliable, which translates into a likely nine hours of utility outages every year.

MYTH #2: A brief outage won't hurt my bottom line

Reality: Losing power for as little as a quarter second can trigger events that may keep IT equipment unavailable for anywhere from 15 minutes to many hours.

And downtime is costly. Some experts believe the U.S. economy loses between $200 billion and $570 billion a year due to power outages and other disturbances.

According to Price Waterhouse research, after a power outage disrupts IT systems:
• 33+ percent of companies take more than a day to recover
• 10 percent of companies take more than a week
• It can take up to 48 hours to reconfigure a network
• It can take days or weeks to re-enter lost data
• 90 percent of companies that experience a computer disaster and don't have a survival plan go out of business within 18 months

Financially, power outages can mean substantial losses for the company affected. According to the U.S. Department of Energy, when a power failure disrupts IT systems:
• 33 percent of companies lose $20,000 to $500,000
• 20 percent lose $500,000 to $2 million
• 15 percent lose more than $2 million
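The nine-hours figure follows directly from the 99.9 percent number. Here is that arithmetic as a minimal sketch; the 8,760-hour year is the only assumption added, and Python is used purely for illustration.

```python
# Expected annual downtime for a given availability level.
HOURS_PER_YEAR = 8760

def expected_outage_hours(availability):
    return (1.0 - availability) * HOURS_PER_YEAR

for availability in (0.999, 0.9999, 0.99999):
    print(f"{availability}: {expected_outage_hours(availability):.2f} hours of outage per year")
# 0.999 -> 8.76 hours, the "likely nine hours" cited above.
```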

MYTH #3: My business is too small for proactive power protection

Reality: Power problems are equal-opportunity threats.

They hit small businesses as often as big ones. Your PCs, servers and network are just as critical to your business as a data center is to a large enterprise. Chances are, you rely on more sophisticated and expensive data equipment than ever, and these systems run unattended much of the time.


Consider how much investment is at risk. Even a small server configuration and company LAN (local area network) represents an investment of tens of thousands of dollars. Even a basic server, such as a Compaq ProLiant DL320, costs more than $2,000. Figure on $7,000 for an IBM xSeries 360, $15,000 for a Compaq ProLiant DL590, $22,000 for an IBM xSeries 380, and $37,000 for a Sun Fire blade system. Add operational applications, management systems, critical databases and networking equipment. Clearly the IT infrastructure is a significant company asset that deserves adequate protection.

Downtime is also very costly. Your IT hardware may be insured, but what about the potential loss of goodwill, reputation and sales from downtime? Consider the number of transactions or processes handled per hour, and multiply that by the value of each one and by the duration of an anticipated power incident. Add the delays that inevitably occur when rebooting locked-up equipment, restoring damaged files, and re-running processes that were interrupted. Then add the cost of lost revenue from being disconnected from your suppliers, business partners, and customers.
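That estimate is easy to script once you have your own numbers. The sketch below simply multiplies transaction rate, transaction value and outage duration, then adds time lost to recovery; every figure in the example is a placeholder, not data from this article.

```python
def outage_cost(txns_per_hour, value_per_txn, outage_hours,
                recovery_hours=0.0, fixed_recovery_cost=0.0):
    """Rough cost of a power incident: revenue lost during the outage,
    revenue lost while rebooting and restoring, plus any fixed recovery cost."""
    hourly_revenue = txns_per_hour * value_per_txn
    return hourly_revenue * (outage_hours + recovery_hours) + fixed_recovery_cost

# Placeholder example: 500 orders/hour at $120 each, a 1-hour outage,
# 2 more hours spent rebooting equipment and re-entering data, $5,000 in cleanup.
print(f"${outage_cost(500, 120, 1, recovery_hours=2, fixed_recovery_cost=5000):,.0f}")
# -> $185,000
```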
MYTH #4: I have a generator and a surge suppressor; I don't need a UPS

Reality: Generators and surge suppressors are a good start, but they remain incomplete solutions for systemic problems.

Backup generators address power outages but provide no protection against the eight other power disturbances. Furthermore, critical systems can lock up in the 10-30 seconds it takes to switch to backup power, and generators themselves can create harmful power effects when switching between utility and generator power.

Surge suppressors address power surges, but they have no effect on the under-voltage and variance conditions that can erode equipment health over time or zap it in an instant.

Uninterruptible power systems (UPSs) go beyond these power-protection strategies while presenting a compelling business case in any commercial environment. UPSs protect your IT systems by conditioning incoming power to smooth out the sags and spikes that are all too common on the grid and other primary sources of power. UPSs also provide ride-through power to cover sags or short-term outages (30 to 60 minutes, typically) by selectively drawing power from batteries, backup generators and other available sources.

MYTH #5: All UPSs have the same battery service life

Reality: The design of a UPS dictates how frequently it uses battery power, which in turn affects battery runtime and service life.

Standby UPSs shift frequently to battery mode, which can reduce battery runtime and service life. Furthermore, the brief interruption of power during those frequent transfers could lock up IT systems, and their typically wide output voltage regulation window may cause IT power supplies to shut down.

Line-interactive UPSs provide a higher level of protection against power anomalies than standby systems, but they must typically resort to battery power when transferring between normal and regulated modes or when coping with the instabilities of generator startup.

Double-conversion UPSs are easy on batteries. Within broad tolerances for input voltage, the UPS rectifier/inverter combination can regulate output without resorting to batteries. Additionally, transfer from normal to battery mode is instant, so there's no risk of power interruptions freezing IT systems.

With new high-efficiency, multi-mode UPSs, battery usage duration and frequency are similar to those of a double-conversion UPS and in some instances may even be lower. Furthermore, these UPSs operate at up to 99 percent efficiency under normal use. Higher efficiency translates into longer battery runtime and cooler operating temperatures, both of which extend battery service life.

MYTH #6: UPS load has no effect on efficiency

Reality: Manufacturers usually state UPS efficiency ratings at full load, but most of today's UPSs are markedly less efficient under lighter loads, which is how they are likely to be used.

Since so many IT systems use dual-bus architecture for redundancy, the typical data center loads each power bus (and each corresponding UPS) at less than 50 percent of capacity, often as little as 20 to 40 percent. As a result, it is important to know UPS efficiency across the entire load range, not just under theoretically ideal operating conditions. While many UPSs drop off markedly in efficiency under light loads, others can perform at 99 percent efficiency even when lightly loaded, as much as 15 percentage points better than a traditional UPS.

MYTH #7: I don't need service if my UPS is working

Reality: The old adage "If it ain't broke, don't fix it" may be feasible in some circumstances, but applying it to the maintenance of a UPS can have devastating consequences.

Because a company relies on a UPS to deliver continuous power without any disruption to its business, proper service is a critical component of ensuring optimal performance from a UPS while minimizing the risks of downtime.

Research indicates that regular preventive maintenance, which affords the opportunity to detect and repair potential problems before they become significant and costly issues, is crucial in order to achieve maximum performance from your equipment. In fact, studies show that routine preventive maintenance appreciably reduces the likelihood that a UPS will succumb to downtime. A Study of Root Causes of Load Losses compiled by Eaton revealed that customers without preventive maintenance visits were almost four times more likely to experience a UPS failure than those who complete the recommended two preventive maintenance visits per year.


MYTH #8: I don't need to monitor my UPS

Reality: Even with a UPS, your IT system could still go down in the case of an extended power failure or if the UPS stays overloaded for too long.

Communication software can not only provide real-time notification of UPS status, but also let you assign automatic actions to perform in case of a power event. This is extremely useful if your system operates continuously without users being present to manually shut down affected equipment.

Additionally, virtualization is now bringing a new set of complexities, as the bond between operating system and physical hardware is no longer the standard. Some suppliers of UPS software must ensure that shutdown software agents are installed on each virtual machine as well as on the host machine. This can be quite tedious if the number of virtual machines is large, which is becoming the norm in many virtualized environments. Leading-edge UPS manufacturers have developed new software platforms that reduce this management complexity by integrating their software into virtualization management platforms like VMware's vCenter, Microsoft SCVMM or Citrix XenCenter. In these environments, a single software installation can control and shut down any cluster of servers. Another benefit is the enablement of automatic live migration of virtual machines in case of a power outage: you are no longer limited to shutting down the servers and stopping operations, so business continuity is now possible through this integration.
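To make the idea of assigning automatic actions concrete, here is a deliberately generic sketch of a shutdown policy: poll the UPS, and once it is on battery with little runtime left, stop the guests before the host. The three callables (get_ups_status, shut_down_guests, shut_down_host) are hypothetical stand-ins for whatever your UPS vendor's agent or virtualization manager actually exposes; they are not real APIs.

```python
import time

LOW_RUNTIME_MINUTES = 10   # begin an orderly shutdown below this battery runtime

def monitor_ups(get_ups_status, shut_down_guests, shut_down_host, poll_seconds=30):
    """Poll a UPS and run an orderly shutdown when battery runtime gets low.
    All three callables are placeholders for vendor-specific integrations."""
    while True:
        status = get_ups_status()   # e.g. {"on_battery": True, "runtime_min": 8}
        if status["on_battery"] and status["runtime_min"] <= LOW_RUNTIME_MINUTES:
            shut_down_guests()      # stop (or live-migrate) virtual machines first
            shut_down_host()        # then power off the physical host
            return
        time.sleep(poll_seconds)
```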
MYTH #9: Once I have a UPS, my power protection solution is complete

Reality: A complete power protection solution typically includes more than just a UPS.

Here are some accessories worth considering:

UPS energy storage: Today, most UPS products use lead-acid batteries to store emergency standby power. A proven technology with many decades of successful service in a variety of industrial settings, the lead-acid battery remains the most cost-effective energy storage solution as measured by dollars per minute of backup time.

Yet despite these merits, lead-acid batteries are unpopular among data center managers due to their size, weight, maintenance requirements, toxic contents and relatively short lifespan, among other issues. As a result, UPS makers have long been searching for an alternative standby power technology that is smaller, simpler and greener than lead-acid batteries, yet no more expensive to operate. Today, that hunt just may be nearing its end. Several exciting new standby power solutions, all rapidly approaching mainstream commercial viability, appear poised to give the lead-acid battery a run for its money, including flywheels, ultracapacitors, fuel cells and lithium-ion batteries.

Generator: During a utility failure, a UPS gives you the few minutes of time you need to shut down servers gracefully. These days, however, many companies can't afford to be without IT systems for the hours or even days that may elapse before electrical service is restored. Such organizations almost always include a generator in their power protection architecture. While UPSs provide brief periods of emergency power, generators draw on a supply of diesel fuel to keep IT systems operational for anywhere from 10 minutes to seven days or more.

PDUs: As an essential component of any power quality infrastructure, power distribution units (PDUs) distribute power to downstream IT equipment loads. Most companies use both floor-mount PDUs, which provide primary distribution to server racks, and rack-mount PDUs (also known as ePDUs), which distribute power to individual servers and other devices. PDUs can be equipped with optional features like surge suppression and individual breaker (branch) monitoring to track energy use.

MYTH #10: A few points in energy efficiency isn't going to save much

Reality: The latest UPS technology can yield immense utility savings.

Today's UPSs are helping to maximize uptime while being extremely energy-efficient and scalable. New UPSs maximize efficiency by operating in multiple modes, changing their operating characteristics to adapt to the electrical conditions of the moment. By engaging internal components only as necessary, these multi-mode UPSs can achieve exceptional efficiency, up to 99 percent across a broad load range. Replacing legacy UPSs with the latest energy-efficient technology is one way to reduce maintenance and energy costs and ensure power reliability to a greater degree.

For example, in a 1-MW data center, a 10-year-old UPS could be wasting 120 kW or more of utility power and dissipating a lot of added heat. Replacing that vintage equipment with new, high-efficiency UPSs can free up to 120 kW of power to support new IT equipment and reduce the burden on cooling systems. Replacing just one 550-kW UPS in a redundant UPS configuration with a high-efficiency model could save more than $40,000 in power and cooling costs each year, while eliminating 190 tons of carbon dioxide emissions and netting substantial utility company rebates.
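The 120-kW example translates into dollars in one line of arithmetic. The sketch below assumes a utility rate of $0.10 per kWh and a 30 percent cooling overhead on UPS losses, and the two efficiency values are round illustrative numbers; none of these figures come from the article.

```python
HOURS_PER_YEAR = 8760

def annual_ups_savings(it_load_kw, old_efficiency, new_efficiency,
                       rate_per_kwh=0.10, cooling_overhead=0.30):
    """Yearly cost difference between two UPS efficiencies at a constant IT load.
    UPS losses also have to be cooled, hence the overhead multiplier."""
    old_loss_kw = it_load_kw / old_efficiency - it_load_kw
    new_loss_kw = it_load_kw / new_efficiency - it_load_kw
    saved_kw = old_loss_kw - new_loss_kw
    return saved_kw * HOURS_PER_YEAR * rate_per_kwh * (1 + cooling_overhead)

# 1 MW of IT load: a legacy UPS around 88 percent efficient versus a
# multi-mode unit near 97 percent.
print(f"${annual_ups_savings(1000, 0.88, 0.97):,.0f} per year")   # roughly $120,000
```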
Conclusion

Businesses today invest large sums of money in their IT infrastructure, as well as in the power required to keep it functioning. They count on this investment to keep them productive and competitive. Leaving that infrastructure defenseless against electrical dips, spikes and interruptions, therefore, is a bad idea.

A well-built power protection solution, featuring high-quality, highly efficient UPS hardware, can help keep your business applications available, your power costs manageable and your data safe. By familiarizing yourself with the basics of power protection systems, and with how to choose the right one for your needs, you can ensure your mission-critical systems always have the clean, reliable electricity they need to drive long-term success.



DESIGN
corner

Designing a MEGA Data Center
by Don Beaty, P.E.



Introduction:
The first clarification required when considering the design of a mega data
center is what the definition of mega is. Large data centers can be thought of
in terms of physical footprint of a building or campus OR in terms of the amount
of critical power (i.e. power to IT equipment) that the data center provides. This
article is relevant to data centers as small as 15 MW of critical power and as
large as a data center campus consisting of multiple data center buildings
as well as other support structures such as security / administration buildings,
warehousing, training, housing, etc.

The optimal designs for mega data centers require a holistic approach since
there is so much to be potentially gained by considering a broader range of
tradeoffs across all fields. In our experience, we have always found it beneficial
to invest in a microclimate analysis due to the high concentration of heat
sources (cooling system heat rejection, generator heat rejection, electrical
equipment heat rejection, etc.). The microclimate analysis becomes even more
important and challenging as the density (e.g., kW per rack) increases.



Figure 1: Some of the Fields Associated with the Design of a Mega Data Center. (The figure shows twelve fields surrounding data center design: acoustics, architectural, civil, geotechnical/environmental, structural, space planning, electrical, mechanical, fire protection, telecom, IT, and security.)

Solving a Multi-Variable Equation

There are interdependencies and/or inter-relationships among many of the fields associated with the design of a mega data center (Figure 1). Sometimes there seems to be no correlation or influence, but it is very common for one to emerge at some point during the design evolution.

The fields shown in Figure 1 are not intended to be all-inclusive, but if you consider the fact that each of these 12 fields has various parts or attributes, it is not difficult to conclude that a typical mega data center could easily have hundreds, if not thousands, of variables to consider during the design tradeoffs. Some of the variables for one of the fields are shown in Figure 2.

Figure 2: Some Variables Associated with the Security Field.

The cost-centric industry trend continues to commoditize almost everything, including professional services. This in turn encourages design professionals to reduce their overall scope to considering only a few variables at most (e.g., PUE, tier level, density). This certainly is an approach that saves on professional design fees, but it is often a costly approach from a holistic perspective (e.g., total cost of ownership and/or return on investment).

It stands to reason that tradeoffs are valuable. Therefore, why would you not want to maximize the number of effective tradeoff opportunities? The industry defaults to a very prescriptive mode of thinking, but this effectively creates requirements that are too limiting, especially for mega-scale projects.

The Power of a Performance-Centric Approach

The opportunities for greater performance increase through the combination of a) considering more variables for tradeoffs and b) having those variables driven by one or more performance statements that are geared towards encouraging a more holistic approach. The following are some oversimplified examples of performance statements, just to provide a sense of the kind of design guidance and influence that is possible:

• Data center to be Tier 1 but easily upgradable at any time to Tier 3 with a fractional cost premium.
• Data center to be able to affordably handle 100 to 200 watts per square foot (upgradable or downgradable).

Mega data centers are interesting because MORE often than NOT, they intensify MOST decisions and MOST mistakes. On the surface, their scale appears to warrant an economy-of-scale-centric approach (e.g., using a design that can simply be replicated in a module or some megawatt increment). There is also an opportunity to simply use the purchasing power of buying larger quantities of any given product, equipment, material, system, etc. as a means of saving cost. Often, a MORE effective way to view this is to consider that the increase in quantity and scale makes each decision MORE important and MORE valuable. This once again ties into the notion of considering MORE significant variables rather than LESS.


What Is the Right Contracting Strategy?

On any project, there are a number of team members with different contracting arrangements (e.g., some with direct contracts with the owner/client and some with subcontracts with one of the companies that has a direct contract). On mega-scale projects, those team members with the largest scope, such as a construction manager, owner's rep, IT manufacturer/provider, or mechanical/electrical consulting engineer, are likely to have a greater influence over the design and therefore are more likely to have a direct contract with the owner/client. This can lead to an overreliance on the owner/client to play the role of project manager and coordinate the scopes and desires of the various team members.

Our experience has shown that there is NO RIGHT or wrong method regarding which companies have direct contracts with the owner/client versus which ones are subcontracted. The key is simply the following:

• A contracting strategy that results in a clear definition of roles and responsibilities
• A contracting strategy that avoids selfish or independent behavior in the decision-making process

In our data center experience, we have worked as everything from a prime professional covering EVERYTHING (architecture, engineering, IT infrastructure, commissioning, and construction) all the way to simply being the mechanical/electrical consulting engineer working either directly for the owner or for one of the other project participants. Both roles can be successful, and again, especially on mega data centers, it is not so much a matter of the contracting method; the question is how to get the team to operate in an unselfish, barrier-free way.

A Focus on Human Behavior

The added pressure of commoditized fees acts as a strong motivator to operate independently and selfishly. The good news is that the mega data center intensifies these situations and therefore can expose how costly these selfish behaviors are. As a result, in a number of situations, it is ultimately discovered that the smart play is to invest MORE time and money in assessment and design, because doing so is seen as such a great ROI (return on investment).

Just as the majority of ALL data center outages and failures can be traced, directly or indirectly, to human behavior and performance, the same can be said for the project team. As a result, on our projects we have emphasized, directly and indirectly, the great ROI associated with an increased focus on the human behavior of the project team, including the owner's internal team, the design team, the construction team, and the commissioning team.

Holistic Tradeoffs

The overall data center industry is actually made up of a wide variety of industries (e.g., facilities, IT, utility, construction, etc.), each of which has independent targets and significantly different emphases. A mega data center project exposes and escalates the points of interface between these industries, and it follows that success can therefore be measured by the degree of seamless integration of these traditionally isolated industries. An example of two of these isolated industries, and of how different their ways of thinking are, is shown in Figure 3.

Figure 3: Example of the Underlying Differences Between Two Data Center Industries

Topic: IT Equipment Manufacturing Industry vs. Facility Construction Industry
• End product: factory fabricated vs. field fabricated (typically custom, seldom a prototype)
• Environment where the product is produced: inside a factory (desired environmental control) vs. outdoors and indoors (limited environmental control, such as weather)
• Labor: factory workers (performance well measured) vs. construction workers (little performance measurement)
• Quality control: minimal defects vs. many more defects (than factory fabricated)
• Product based on a prototype: common vs. seldom
• Regulations, interpretation and enforcement: reasonably consistent vs. less predictable (building inspectors' views often differ)

All of these industries play a critical role and need to be considered holistically (silica to utility), including the assessment of tradeoffs in aspects such as:

• Cost (capital expenditure, operating expenditure, TCO and ROI)
• Schedule (speed to market / speed to deployment)
• Reliability / availability
• Future flexibility (avoiding premature obsolescence)
• Green design (energy efficiency and sustainability)

Some tradeoff examples that you may or may not be familiar with include:

• Ease of IT operations vs. facility operations: this can include consideration of staffing caliber and/or quantity, as well as the strategy for monitoring and reporting.
• Ease and impact of future adds, moves, and changes: this involves an inherent understanding of the drivers and trends behind the IT hardware and software, in order to provision the right level of future-proofing in the power, cooling, and network systems, as well as the logistical aspects of disruption, risk, etc.
• Ease and efficiency of handling drastically varying loads: this again involves an understanding of the IT hardware and software, but speaks to the flexibility of the power, cooling, and network infrastructure and architecture to deliver capacity in a fluid and unrestrictive manner.

The following are a couple of more detailed examples of great mega data center holistic tradeoffs that are relatively unknown in traditional data center design.



Holistic Tradeoff Detailed Example 1: Structural Impact

Structural engineers are often accused of overdesign. Often, some of this is caused by the structural engineer's obligation, and fear, to have a 100% success rate: the structural system has NO chance of failing and will NEVER harm anyone. It is hard to argue with that logic, but individuals all have different levels of competence, experience, and personal risk aversion. As a result, the approach to structural design is NOT absolute but rather is impacted by the human behavior and human performance of an individual.

The other perception of overdesign is the complexity associated with structural design in general. Much like the data center as a whole, the structural design is NOT a single-variable equation, and some of the variables are LESS understood by the greater project team. Therefore, there are both human behavior and technical issues associated with the structural design approach.

NO professional wants to design something and then be forced to redesign for free because of coming in over budget. Years ago, some clients paid for redesign, but the overall trend is for redesign to be uncompensated. This essentially places additional risk on the professional. With the trend towards commoditized fees, the natural tendency is going to be to overdesign sufficiently to mitigate the risk of any mistakes in the design assumptions, and yet have a significant concern of being charged with overdesign.

This overall combination of conditions really drives a behavior for a structural engineer to operate in a defensive, independent way based on the challenges they face in this commoditized world.

We have experienced firsthand that spending a 25% - 50% premium on structural construction costs resulted in a total project savings that easily offsets the 25% - 50% increase on structural costs, through the flexibility it provides for the other systems and infrastructure.

Looking at things holistically can really make a big difference. In the case of the structural changes, it can impact IT costs, number of racks (impact on CapEx, OpEx, and revenue), electrical system costs, cooling system costs, etc.

Often this is low-hanging fruit, especially in a mega data center. However, it is rarely exploited since it requires thinking in terms of multiple variables, human behavior, interdependencies, effective tradeoffs, etc. We are NOT structural engineers but have observed that when a structural engineer works as a subconsultant to us, the amount of fee that we pay them is insignificant compared to the benefit. Our goal has been to create very favorable contracting conditions and financial benefits for the structural engineer to remove the barriers and achieve optimum results.

Holistic Tradeoff Detailed Example 2: Constructability

For mega data centers and mega data center campuses, we have observed an overall behavior during the site selection process of thinking too small. The cost of land is very insignificant in comparison to its impact on overall project cost. It can easily impact professional services fees, construction logistics, construction, operation, and adds/moves/changes. Often the impact is a compounded impact, resulting in a disproportionately high impact on total project costs.

We have seen significant double and triple handling on a site just based on NOT having enough site space during construction. This is particularly true when a site is developed in phases. You can have the operations staff and construction staff on site during the same time period. Concurrent operation and construction can quickly become a logistical nightmare that compromises security, reliability, access, cleanliness, traffic flow, etc., all of which lead to poor cost, schedule and quality metrics.

For example, just the space for parking the construction workers' cars can become complicated. In some cases we have had to implement extreme measures such as valet parking for the construction workers because of the mismatch of site space to site construction.

Many people have used the term "Do NOT want to build a Swiss watch." Essentially what that is saying is that it is just too intricate and too sensitive to build in an extremely confined space. The ease of design, construction, commissioning, IT operation, and adds/moves/changes can be significantly and positively impacted by being passionate about sizing a site to avoid building a Swiss watch.

Often, the real estate team is seeking to provide a site that meets the minimum expectations at the lowest possible cost. The incremental cost of exceeding these minimum expectations, especially when considered holistically against the cost of a mega data center construction, would be very insignificant and yet yield incredible benefit / savings to the downstream design and construction process (i.e., fewer constraints).

Closing Comments

The design of a mega data center is a complex multi-variable equation. A tremendous edge can be attained by considering more variables in a holistic manner, through challenging the commoditized design and construction approaches that are self-serving and stifle the opportunity for innovation and optimization.

Through the introduction of details and elements that are not typically considered in data center design, such as performance-centric approaches, human behavior, holistic tradeoffs, contracting strategies, etc., it is hoped that this article has helped open your mind to a much broader approach to the design of a mega data center.

About the Author: Don Beaty started DLB Associates Consulting Engineers in 1980. The firm is active globally and has provided services for mission critical facilities in over 35 states and throughout the world. DLB's experience includes over 4 GW (GIGAWATTS) of critical (IT) power, data center campuses that total over 4,500 ACRES and thousands of mission critical projects with a combined raised floor (white space) area of over 16 MILLION square feet. DLB has provided design, commissioning and operations support services for a wide variety of large and small data center clients, including eight of the largest Google data center campuses worldwide.

While the data center industry is global, Don is also very active globally, having presented in 70 cities in 28 unique countries over the past decade. Don is an ASHRAE Fellow and the 2012 recipient of the ASHRAE George B. Hightower Technical Achievement Award in recognition of his TC 9.9 leadership and contribution. He has served in leadership positions on ASHRAE committees on energy, equipment performance, and data centers. He was the co-founder and first chair of ASHRAE Technical Committee TC 9.9 (Mission Critical Facilities, Technology Spaces and Electronic Equipment) and has been a major contributor to over 10 data center books. He continues to be a major driving force on TC 9.9.
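A back-of-envelope illustration of the structural tradeoff described in Detailed Example 1 above. All figures below are hypothetical placeholders chosen for the example, not numbers from an actual project; the point is only that a premium on a small cost category can be dwarfed by the downstream savings it unlocks.

    # Back-of-envelope sketch of the structural-premium tradeoff described above.
    # All figures are hypothetical placeholders, not data from a real project.

    def structural_tradeoff(total_project_cost, structural_share, premium, offset_savings_share):
        """Compare the added structural cost with the downstream savings it enables.

        total_project_cost   -- total construction cost (dollars)
        structural_share     -- fraction of total cost that is structural (e.g. 0.06)
        premium              -- structural premium (e.g. 0.30 for +30%)
        offset_savings_share -- fraction of total cost saved in other systems
                                (racks, electrical, cooling) thanks to the added flexibility
        """
        structural_cost = total_project_cost * structural_share
        added_cost = structural_cost * premium
        downstream_savings = total_project_cost * offset_savings_share
        return added_cost, downstream_savings, downstream_savings - added_cost

    if __name__ == "__main__":
        # Hypothetical mega data center: $400M total, structure ~6% of cost,
        # a 30% structural premium, and 3% of total cost saved elsewhere.
        added, saved, net = structural_tradeoff(400e6, 0.06, 0.30, 0.03)
        print(f"Added structural cost: ${added/1e6:.1f}M")
        print(f"Downstream savings:    ${saved/1e6:.1f}M")
        print(f"Net benefit:           ${net/1e6:.1f}M")

With these invented inputs the premium adds about $7.2M while roughly $12M is saved elsewhere; the real value, as the article argues, comes from evaluating such tradeoffs holistically rather than per discipline.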



DESIGN
corner
Steps to Moving Towards
Building Systems
Integration
By Alan Dash

Buildings are filled with multiple systems including IT, security, telephony, building management, building monitoring, automation and much more. These systems have always had their own management staff, infrastructure, software and hardware, licenses, reporting features and capabilities, maintenance agreements, head-end space, monitors, servers and other peripheral devices. However, technically speaking, there's no need to have all of these separate components to run a facility. In fact, today more can be done with less; and Internet Protocol, a common means for systems to communicate, is the enabler for this.

In the past few years, there's been a shift of non-IT and formerly non-IP systems porting over to the Internet Protocol and the traditional IT network. Additionally, there has always been a slow shift for systems to move onto unified platforms, even though the technical capabilities have existed to make this shift long ago. However, this shift is moving much faster now. Today's building and data center management systems, automation and monitoring systems, security and access control networks, overhead paging, even clocks and employee time-keeping can be found running on IP-based networks. This is great news for data center owners and managers, since taking the step of merging separate systems onto the IP network saves time and money. Yet there are still some aspects to carefully consider, as the road to integration poses challenges along the way. For building systems integration to work, certain steps must be taken.

Moving to IP Systems Is Only the First Step; Here's Step Two

Theoretically, if systems were moved over to IP, they should now be integrated. Right? Not quite. When looking at all the separate networks in today's facilities, data center or otherwise, many companies are surprised to see that there is still a negative impact even with the change to IP. And what is that negative impact? While IP itself is a graceful and relatively elegant transport means, it can result in multiple disparate IP networks running through facilities, with lots of them still managed by separate departments, using separate infrastructure, running on separate servers with different software packages and licenses, over different wired and wireless pathways. In short, a bunch of separate non-IP networks have been made better with the creation of separate IP networks under the guise of integrated systems; but what has also been created are some real IP-based monsters in our data centers and buildings. Ideally, it would be optimal to get these different topologies under real, measurable control using cost justifications such as shared resources, shared staff, shared data, shared reporting and increased buying power as primary drivers. But something is preventing this, or at least slowing it down. What force is so powerful that it can stop us from saving money, sharing data, and mitigating downtime in the data center environment?

What is the Speed Bump?

The problem of true integration of data center, IT, and building systems is not technology, nor even technical. Rather, it is cultural. The root of the problem is competition, as competitiveness has been rooted into the internal workings of corporations right down to the department-level bowling team. What's been created are separate departments with separate responsibilities, with different reporting structures, with the staff placed in separate rooms, in separate buildings, some with separate entrances, with separate operating budgets.



This is exactly what has been done with building systems and technologies. Technology topologies have mirrored staffing topologies, and then they're expected to work together without intervention.

It gets worse. Not only have companies and departments been compartmentalized, but the same thing happens with extracurricular events. Many companies have golf, softball, and other so-called teaming events, with these teams created from separate departments right down the line. Does the facility staff invite the administrative staff onto their team to play against the other departments? Of course not! They will play one person short before they play with the enemy on their side. Did IT invite the lone golfer from shipping and receiving onto their department team for the annual golf outing? No. The shipping department is the enemy, and if they don't have golfers then that's one less team in the way of the trophy and the year-long bragging rights associated with being number one!

Building technologies run the path of building departments, which run the gauntlet of a competitive atmosphere. They could be integrated but remain separated. Though this is not intentional, it has been recognized by some, with most firms attempting to resolve it and make it better. Countless articles have also been describing the problem. Many of us read them, nod, blink once or twice, and then move on with our day because we've already integrated right along departmental lines. This is more often the case within a data center that is part of a corporate office building, other contiguous facility or campus environment. Stand-alone data centers appear to get the message; and this may be because they are away from the competitive environment, so there may be less interdepartmental competition and therefore more of a feeling of common goals between departments.

Proven Concept in Another Business Type

We cannot help with the cultural problems associated with the barrier to true building integration, but the concept of unifying technologies has been proven successful in some genres of business and is a feature that can work in any campus environment, data center, and building type. This lesson comes from the healthcare industry. Many years ago, healthcare started down a path of integration that today can be seen as a model for other industries. The drivers were the same as we have for data centers today: too much information (data) in too many places (silos) that were not shared (stored). While there is not enough space in this article to explain the entire process, it will review some steps in the right direction of building and systems integration, using the healthcare industry's evolution to integration as an example.

The primary thing to know is that there are three elements of a system: 1) the head-end, whereby the processing and storage of data occur (including the server, collection points, software, and licenses); 2) the user or end-interface (normally a desktop PC, mobile device, or point-of-use data collection interface, along with the software and licenses); and 3) the infrastructure between (pathways and spaces, conduit, physical infrastructure, and media - copper, fiber and wireless - used to house equipment and cable).

The first steps initiated by healthcare to bring integration to their facilities were in the physical integration of pathways and spaces. Program space was being used up for IT closets; separate space was used for PA system head-ends and equipment; nurse call had a space; and building facilities systems and other clinical and non-clinical systems were all hidden away in whatever space or area the responsible department was able to find. These spaces were not considered a shared common space; rather, they were silos following along with the silos of the departments which set them up. Considered to be very valuable real estate, reducing program space in a hospital by merging these non-clinical spaces into a shared environment was the first step on this long and successful trip of physical convergence. By first providing a single, secure, audited space for back-of-house systems, hospitals were able to free this newly unused space for clinical use and often revenue-generating functions.

Figure 1 - Separate head-end yet shared data
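As a purely illustrative sketch of the "separate head-end yet shared data" arrangement shown in Figure 1 (the class and field names here are hypothetical, not drawn from any particular BMS or middleware product), the key property is that each system keeps its own head-end and keeps collecting even when the shared IT repository is unreachable:

    # Minimal sketch of "separate head-end yet shared data" (Figure 1).
    # Names are illustrative only; no specific BMS or middleware product is implied.

    class HeadEnd:
        """A system's own collection point (BMS, security, nurse call, etc.)."""
        def __init__(self, name):
            self.name = name
            self.local_log = []          # the system keeps its own data...

        def record(self, reading, shared_store=None):
            self.local_log.append(reading)                     # ...so it still works alone
            if shared_store is not None:
                try:
                    shared_store.publish(self.name, reading)   # ...and also shares it
                except ConnectionError:
                    pass   # if the IT network is down, the facilities network remains intact

    class SharedStore:
        """The shared repository on the IT network."""
        def __init__(self):
            self.data = []
        def publish(self, source, reading):
            self.data.append((source, reading))

    if __name__ == "__main__":
        it_store = SharedStore()
        bms = HeadEnd("BMS")
        bms.record({"sensor": "CRAC-3 supply temp", "value_c": 18.4}, shared_store=it_store)
        # simulate the IT network being unavailable for the next reading
        bms.record({"sensor": "CRAC-3 supply temp", "value_c": 18.6}, shared_store=None)
        print(len(bms.local_log), "local readings,", len(it_store.data), "shared reading")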



The next step was to share head-end data integration and actually get these systems to communicate together, now that healthcare had a handle on what they all were. The concept of middleware - thought of as a traffic cop of sorts - was born and matured in hospitals around the world, where software was placed as a focal point of all alerts and events from these dissimilar systems. This software, which has the capability of receiving alert and event data from completely different systems, understands that data and routes it based on pre-defined metrics to the appropriate staff. This enabled a one-stop shop of sorts for managing events on a common, shared platform. While this step will not reduce the number of head-end devices, its main function is reporting from separate systems to one point.

Because the prior step does not reduce the number of head-end devices, departments still have their full responsibilities and still maintain the same systems. While middleware can be used to control other systems, thereby reducing a number of costly areas of control, the intent from a healthcare perspective is to collect the data - not eliminate collection and control points. Figure 1 indicates an example where a BMS network is still a separate environment, yet tied in to share data with the IT network. If the IT network fails, the facilities network remains intact. This is true for healthcare, and can be equally applied to a data center environment.

Clinical staffs (who had too few members, large areas to cover, and several different system types to learn) were the first to benefit from the integration of these systems onto a common logical platform. The ability to collect data and display it on one interface, eliminating the need to use multiple systems for the same data, sped processes and decreased the incidence of errors. This efficient process can benefit any company or business type, especially data centers, where quick access to information from multiple sources is crucial.

The final step for integration, and the step that is continually being developed, is to use collected data to provide event correlation so that the impact of an event spin-off can be pre-determined. For example, an overheat condition within a controlled space may impact more than just the facilities HVAC department. It could impact any department with equipment located in that space. Common practice is that the facilities HVAC staff is alerted to the trouble, yet the other departments are not - or, if they are, it happens too late for them to act and to mitigate an outage. Since other systems running in that space can also be negatively impacted, companies should use integration to alert all stakeholders. Event correlation provides the opportunity to proactively warn other (integrated) stakeholders of the event so that they can respond early and mitigate outages not directly related to the event. This topic and the possibilities around it are fodder for another article. However, the only way that this type of alert management can be successful is when conditions of technical capability are congruent with cultural acceptance - when we can integrate the departments and the technologies and align them to a common goal.

Where Most Buildings are Today

As mentioned earlier, though there's a growing shift toward a unified platform for building systems, there are many buildings that haven't evolved there yet, per the reasons just explained.

In order for buildings to move to a true uniform platform, data center and facility leadership need to take action and create an environment whereby different departments can come to rely on each other in the commission of their jobs. Too many employees have become comfortable with the concept that the tools, computers, systems and networks necessary to complete their work belong to them for their sole benefit, to meet their departmental goals. Too many employees have also taken an attitude that none shall pass when it comes to the needs of other departments. In reality, these components, systems and outcomes belong to the company - not the departments. It's time to share the responsibilities, data, infrastructure, pathways, spaces and success of the enterprise - not just the success of the department meeting annual budget goals and the team bowling trophy.

About the Author: Alan Dash RCDD/OSP, CCSE, Global Technology Lead and Associate Partner, Syska Hennessy Group, has over 30 years of experience in information communications technology consulting and design for a diverse array of structures, including data centers, laboratories, healthcare facilities, sports/entertainment venues, multi-tenant offices and mixed-use real estate projects.
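To make the middleware "traffic cop" and event-correlation ideas above concrete, here is a minimal, hypothetical sketch. The routing rules, department names and event fields are invented for illustration and are not taken from any specific product:

    # Hypothetical sketch of middleware-style alert routing and event correlation.
    # Rules, departments and event fields are invented for illustration.

    ROUTING_RULES = {
        "overheat":   ["Facilities HVAC"],
        "water_leak": ["Facilities", "IT Operations"],
        "power_loss": ["Facilities Electrical", "IT Operations"],
    }

    # Correlation: other stakeholders that should be warned about spin-off impact.
    CORRELATION_RULES = {
        "overheat": ["IT Operations", "Security", "Telephony"],  # equipment in the same space
    }

    def route_event(event):
        """Return every stakeholder that should be notified for one event."""
        primary = ROUTING_RULES.get(event["type"], ["Help Desk"])
        correlated = CORRELATION_RULES.get(event["type"], [])
        # de-duplicate while keeping order: primary responders first
        return list(dict.fromkeys(primary + correlated))

    if __name__ == "__main__":
        event = {"type": "overheat", "space": "MDF Room 2", "value_c": 31.5}
        for team in route_event(event):
            print(f"Notify {team}: {event['type']} in {event['space']} ({event['value_c']} C)")

The design choice the article argues for is visible here: the rules live in one shared place, so warning additional stakeholders is a configuration change, not a new department-specific system.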

DESIGN
corner

Green Design
Features for
Data Centers
By dan prows

Many companies are working toward reducing the energy consumption and environmental impact of their facilities, and sustainability is fast becoming an important part of company culture. The increasing number of data centers nationally and the emergence of the mega data center have pushed data center energy consumption to a new level. This energy consumption, and its effect on the environment, has data center companies seeking greener and more sustainable ways to address their concerns.

Every data center project has different requirements and circumstances, and finding the appropriate balance between corporate responsibility, sustainability, marketability, first costs, and return on investment will always be a challenge. Because sustainable design is relatively new and corporate leadership is usually limited in high-level sustainable design experience, companies tend to struggle with understanding where to begin when designing a green project that provides substantial environmental benefit.

The first step in achieving a sustainable project is to obtain the Owner's commitment to reaching a certain level of sustainability and then to establish associated goals to reach that level as early as possible. Several of the greenest options involving data center design and construction are applicable only to the pre-design phase. Location, site selection, and new construction vs. renovation are all decisions that must be addressed early in the project timeline and play a vital role in developing a sustainable project. So when these decisions are being made, think energy.

Green data centers can come in many shades of green, relative to their level of sustainable design features and construction. Achieving a specific level of a green building certification makes it easier for most to relate to the greenness of a project. However, at the present time green building rating systems are not specifically designed to address the true nature of data centers, i.e., a no-frills building with few occupants and large energy consumption. The credit weighting of the rating systems is not in balance when it comes to rating the value of credits relative to their environmental benefit in data centers. To accomplish equilibrium, the majority of the credits need to address the issue with the largest environmental impact: energy.

It is important to understand that energy is more than operational consumption. While energy efficiency in operations is a vital piece of solving the sustainability puzzle, the most important type of energy to consider for the highest levels of sustainability is embodied energy. To address embodied energy as well as other challenges relative to green data center design, consider the following as a starting point in achieving your sustainability goals.

Renovation

This may come as a surprise to many, but renovating an existing building is arguably the greenest choice a company can make when bringing a new data center online. Regardless of how energy efficient a new building is designed to be, the embodied energy required to design and construct a new facility dwarfs the amount of embodied energy needed to renovate an existing building. Moreover, renovation most often utilizes existing infrastructure and local services within a developed community. In short, reuse trumps recycling, energy-efficient design, and sustainable product selection when deciding which green features provide the most substantial environmental benefit.

Location

Location, location, location! Data centers are built in different locations for different reasons. If reducing heat load with the least amount of power is the primary objective, go with the simplest solution: free cooling! It costs nothing to use the natural environment to cool, and there are no mechanical parts to fail. Select a location in a cold climate (preferably in an established area), reducing the external heat loads on the building. If deepest, darkest Canada isn't the right fit, find a location with prevailing winds to take advantage of passive cooling, or a cold-water lake that can facilitate a closed-loop chilled water system.

IT Systems



One hidden opportunity that IT systems enjoy is the speed at which new hardware and information management techniques are improved. IT hardware has a short life cycle and, as such, new energy reduction benefits can be realized every 2-5 years. Newer hardware either uses far less energy to process its workload or it can increase its workload with the same amount of energy. Coupling improved hardware performance with server virtualization can generate substantial reductions in process energy consumption. This improved performance may also allow a reduction in total hardware requirements.

Durability

Durability is the dark horse in the green world and is a key feature in energy reduction. The longer a building lasts, the less energy is used to build, repair and/or rebuild a structure. Data centers, by the nature of their design intent - which is to safely and securely store vital data - lend themselves to durable design, as they require a greater amount of protection than other commercial buildings. Data centers tend to have a simple box shape, which reduces potential water leaks when compared to buildings with more complex design features such as skylights, large numbers of windows, and decorative facades. Data center exterior components tend to be constructed with concrete and steel. The end result is buildings that have a far greater life span than traditional commercial buildings.

Co-Generation

Although this feature is not very common, it is definitely a top-five feature on the green shading scale. Cogeneration comes in two flavors: heat reclamation and peak shaving. Heat reclamation allows data centers to utilize waste heat from generators to run absorption chillers, which in turn provide cooling for the data center. Although the primary purpose of generators is redundancy in the case of primary power loss, they can also be used to reduce demand loads. Peak shaving not only saves the owners substantial amounts of money on utility costs, it also reduces the large-scale peak power demand on power providers. The sustainable benefit here is that localization of power production reduces the size of the required power delivery infrastructure and greatly reduces power losses in transmission.

Air Flow Management

One of the largest culprits of operational energy loss in data centers is the mixing of cold and warm air. This loss is caused by bypass air flow mixing around a rack. Warm air from the rear of the rack is drawn into the front of the rack, where it mixes with cooler air before it enters the rack to perform its desired cooling. To complicate the issue, some cool air passes around the rack and mixes with warm room air before returning to the AHU. Both conditions cause the total air flow to increase in order to meet the rack cooling load. The solution is a hot aisle / cold aisle configuration, following best practices in rack management, and utilizing air containment systems. Proper use of these measures will significantly reduce warm/cool air mixing, with a corresponding decrease in overall air flow requirements.

Reclaimed Water for Cooling Towers

Reclaimed water and cooling towers go together like milk and cookies. While not available in many areas and only applicable to facilities that utilize cooling towers, using reclaimed water for cooling towers is truly a top-ten green practice in data centers. While saving potable water at the vanities and toilets is always good practice, it would take hundreds of data centers with water-efficient fixtures to even begin to make a dent in the sheer volume of potable water savings that is accomplished by using reclaimed water in cooling towers on a large data center.

Cooling Systems

Deciding on which cooling system to use in a data center is not an easy decision to make. System choice is relative to facility size, environmental conditions, and first costs. Smaller facilities tend to utilize direct expansion (DX) systems, while mid-size and large facilities use water-cooled computer room air conditioner (CRAC) units with cooling tower(s). Water-cooled equipment provides the most tonnage of cooling for the least amount of energy, and life cycle costs tend to run far less than for DX systems.

Another alternative is chilled-water air conditioning (CRAH) units. Since the chilled-water units do not include refrigeration cycle components, they have fewer moving parts, less maintenance, and longer service life. Since the refrigeration function is provided by a central water chiller rather than localized compressors, the energy consumption per ton of refrigeration is reduced.

Projects can further reduce energy consumption in their mechanical systems by adding high-efficiency variable frequency drives (VFDs) to both pumps and chillers and utilizing condenser water reset. As noted above, free cooling with air-side or water-side economizers can also be used in areas with accommodating climate conditions. Humidification and mechanical controls for the cooling system should always be considered in design and will be a factor in overall system efficiencies.

Electrical Systems

A rotary uninterruptible power supply (UPS) streamlines the conversions in voltage and the number of AC-to-DC-to-AC power conversions that normally occur in traditional UPS systems. Power conversion improvement can lead to reductions in total data center IT load of up to 6-8%. Additional opportunities can be found in high-voltage DC power distribution. Compared to standard 48 V systems, higher voltages allow installation with much less copper (smaller conductors) and higher transmission efficiencies.

Although it may be inconsequential compared to the overall power usage of a data center, using energy-efficient lighting fixtures (T-8, T-5, or LED) and general lighting controls, including motion detectors, provides excellent energy reduction at a reasonable cost.

Measurement and Verification (M&V)

Actively monitoring energy and water usage is a vital component of any sustainability plan. Measurement and verification (M&V) reduces maintenance cost, increases equipment life, and maintains maximum efficiency of equipment by identifying trends in efficiencies and providing an early warning of potential failures. Operating data centers at maximum efficiency and maintaining sustainability goals can only be accomplished with active monitoring (M&V).

Sustainable design in data centers provides companies the ability to address the energy consumption and environmental impact of their facilities. Forward-thinking design teams are able to incorporate the use of existing infrastructure with high-efficiency operating equipment to meet company goals and limit the impact of data centers on the environment. Although technology provides many solutions for reducing energy consumption, most of the answers will be found in energy reduction through increased life cycle performance of the building and equipment in data center facilities.

(About the Author) As the Department Manager of Morrison Hershfield's Southeast Building Consultation Group, Dan Prows is responsible for the overall development, operation, and growth of sustainable services and building consultation in the Southeast United States. He teaches sustainable design and green construction methodology to architects, engineers, and construction personnel throughout the country. Dan specializes in sustainability consultation on very large, high process-energy-consuming buildings and operations.
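One way to picture the active monitoring (M&V) recommendation above is a simple efficiency-trend check. The sketch below is hypothetical - the meter readings and alert threshold are invented - and it computes PUE (total facility power divided by IT power, a widely used efficiency ratio) and flags drift over time:

    # Hypothetical M&V sketch: compute PUE from metered readings and flag drift.
    # The readings and threshold below are invented for illustration.

    def pue(total_facility_kw, it_load_kw):
        """PUE = total facility power / IT equipment power (lower is better; 1.0 is ideal)."""
        return total_facility_kw / it_load_kw

    def check_trend(readings, alert_threshold=0.10):
        """Warn when the latest PUE has drifted more than alert_threshold above the baseline."""
        values = [pue(total, it) for total, it in readings]
        baseline, latest = values[0], values[-1]
        drift = (latest - baseline) / baseline
        if drift > alert_threshold:
            print(f"WARNING: PUE drifted from {baseline:.2f} to {latest:.2f} (+{drift:.0%})")
        else:
            print(f"PUE stable: baseline {baseline:.2f}, latest {latest:.2f}")

    if __name__ == "__main__":
        # (total facility kW, IT load kW) samples, e.g. monthly meter readings
        samples = [(1500, 1000), (1540, 1005), (1720, 1010)]
        check_trend(samples)

The same pattern extends to water metering or per-system sub-metering; the value of M&V is in the trend, which is what surfaces a slowly failing economizer or a drifting setpoint before it becomes an outage or a budget surprise.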



it corner

Some data centers


are getting smaller?
by jeffrey R. clark, phd

At the top of the data center world are the mega data centers - facilities whose
massive scale dwarfs the computer closets (by comparison) of most companies. But
the trend toward larger data centers among some large companies contrasts with
another trend elsewhere in the industry: many data centers are getting smaller. The
means and motivations of these companies that are downsizing their IT operations
vary, but a number of driving factors play a major part in most cases.

Does Consolidation Make Data Centers Bigger or Smaller?

Many larger companies (and organizations like governments) are seeking to reduce costs and operating complexities by consolidating their data centers into fewer, larger facilities. In some sense, then, consolidation is driving increased data center size. But it is also a driver toward smaller data centers, since within a data center it involves reduction in the amount of infrastructure - an attempt to do more with less. The difference in the result of consolidation depends on the situation: in some sense, consolidation can mean two different things. A large company consolidating many data centers into a smaller number of facilities will most likely end up with larger data centers; a smaller company consolidating within a data center (i.e., eliminating excess server capacity, replacing older servers with fewer, higher-powered servers, and so on) is likely to end up with a smaller data center footprint (for example, it might convert freed space into an office area).

Thus, consolidation seems to be driving two disparate trends, but a number of factors demonstrate clearly the reasons why many companies are aiming smaller instead of larger. Individually, smaller may simply mean designing a smaller data center than might otherwise be expected, moving to a smaller facility or consuming less colocation space, or actually reducing the data center footprint within an existing facility.

Economic/Financial Factors

Money (or, more specifically, a lack of it) is a major motivation in companies moving generally toward smaller data center designs. Here are a few of the driving factors.

The economy isn't recovering as much as you might think. Although unemployment numbers may be down slightly, they are still high (and they don't really represent the employment situation accurately), and the overall economy is seeing little by way of recovery. Many companies are still struggling with tight budgets, and for those that view IT as a means and not an end, data center expenses are often viewed as necessary evils.


IT budgets are thus unlikely to see much in the way of increases, requiring the data center manager to do more with less. Furthermore, the lack of funds makes construction costs even more burdensome, and with outsourcing offering a pay-as-you-go option, many companies see a way out of the capital costs of building a data center.

Energy costs continue to rise. Name your reason - political tensions in the Middle East, inflation, regulations, lack of new energy production infrastructure, growing demand in emerging economies, or all of the above - energy costs continue to rise, making the proposition of running a data center a headache to companies concerned about expenses. For those that do operate or plan to build a data center, smaller sounds better from the perspective of cost: less infrastructure generally means lower costs. Higher operating costs also naturally put a pinch on available capital, meaning a company's maximum capacity is decreased.

Data centers eat water, too. Energy is the most widely reported resource consumed in large quantities by data centers, but water is used by many as well. Some consume large amounts of water, putting pressure on utility companies in extreme cases. Like electricity, the cost of water is unlikely to see a decrease anytime soon, further driving downsizing as a means to stay within budget.

Virtualization

If consolidation is peanut butter, then virtualization is chocolate - in many ways, they go together. Virtualization pools compute, storage and networking resources and reallocates them according to demand in a manner independent of the capacity of a given server or storage drive, for example. This approach thus enables greater utilization rates for equipment, meaning a facility can generally get by with less equipment (and that's where consolidation comes in). As long as the combined resources can handle peak workloads, a company can minimize its IT infrastructure - and, concomitantly, supporting infrastructure like power distribution and UPS systems, cooling and floor space.

Free Cooling

ASHRAE's recently expanded temperature and humidity guidelines enable wider use of free cooling - year round in many locations (for certain allowable ranges). In any case, greater reliance on air-side or water-side economization means less need for mechanical cooling infrastructure. In turn, this means a smaller data center. Likewise, even for traditional cooling infrastructure, best practices enable greater efficiency, which reduces the amount of equipment needed to maintain a given operating temperature. In both cases, best data center practices enable a reduction in data center size for a given capacity relative to a facility that doesn't implement such practices.

Modularity

A modular approach to data center design and construction enables right-sizing of infrastructure so that a company need not invest as much capital now to meet future needs - capital that is essentially wasted (and loses value) over the time during which the equipment goes unutilized (or underutilized). By adding capacity - whether it be uninterruptible power supplies, cooling infrastructure, IT resources or even floor space - only when it is needed, companies can operate a data center that is as small as possible, saving both capital and operational expenses.
(Unutilized servers, for example, still consume power as long as they are turned on.) Modular expansion may require a little more planning from the start of a data center project, but it can pay significant dividends in savings. Thus, even in the face of increasing demand for data center services, companies can still aim small with the confidence that they can expand when needed, should growth require it.

Higher Densities

IT architectures like blade servers can vastly increase power densities in a data center, reducing the required amount of floor space for a given compute capacity. Although this may require greater cooling infrastructure (beyond certain power densities, even air conditioning can be insufficient), it can be worth the tradeoff in some instances.

New Process Technologies

In accordance with Moore's Law, process technologies for processors continue to shrink, providing a given level of computing capability in a smaller area and a smaller power envelope. A server can thus pack more of a compute punch in a given volume and for a given power budget. Unfortunately, however, demand is scaling right along with compute capabilities - thus, smaller, faster processors don't enable as much of a data center shrink as one might hope. Nevertheless, Moore's Law has enabled companies to avoid sprawling data centers.

Outsourcing

The ultimate data center downsizing involves outsourcing. Outsourcing options run the gamut from one or two services - such as backup in the cloud or colocation of servers - to complete elimination of a data center in favor of cloud computing. For some companies, the demands of constructing and operating a data center are just too much, particularly if IT is highly peripheral to the company's business. To some extent, a company that outsources to shrink its data center is really just passing the buck: someone must still provide the data center capacity. Noteworthy options for IT outsourcing include the following:

Colocation: Colocation is the older brother of basic hosting services. Instead of counting on an Internet company to provide everything but the HTML, however, colocation customers count on the provider to be responsible for essentially all of the infrastructure they would need to run their servers in a data center. The company just installs the servers, purchasing floor space, bandwidth, cooling capacity, security and power distribution from the provider. The colocation provider is able to amortize the infrastructure costs across a wide base of customers. The result for the customer is a smaller - or nonexistent - data center, at least in a sense. Visitors to the company's campus wouldn't find a building or room dedicated to server hardware; they would just find individual computers with network connections. Colocation is thus outsourcing of data center infrastructure, short of the IT equipment.

Public cloud: The public cloud is probably what most industry observers would think of first when they see the words "IT outsourcing." The public cloud goes a step beyond colocation: it involves outsourcing of the data center infrastructure and the IT equipment as well, leaving customers with nothing but a bill for services rendered. Capital expenses are removed entirely from the equation. (Again, this doesn't mean data center capacity evaporates like, well, water - someone must carry the resource load, it just isn't the customer.) Furthermore, the customer isn't required to deal with software or hardware problems that might arise on the servers; it's all the responsibility of the cloud service provider. This is the ultimate data center downsizing: shove off the entire mess on someone else, and just pay for the end result: networking, compute and storage services. As with anything that sounds too good to be true, the cloud carries with it a number of concerns, including matters of security and cost. (Whether cloud computing is truly less expensive in the long term is a matter still under debate. In all likelihood, it depends on the customer.) The public cloud is not necessarily an all-or-nothing proposition, however: a company with an existing data center can outsource just its additional capacity needs to the cloud, maintaining some resources in house but completely sidestepping the need for a capital-intensive data center expansion.

Outsourcing enables companies to reduce their data center sizes (either by eliminating expansions, by building facilities that only partially meet company demand and outsourcing the rest, or by not building a data center at all). But in a sense, this is a ghost effect: the net data center capacity of the industry at large is not decreasing (although larger data centers may offer greater efficiency compared with smaller facilities).

Conclusions

Mega data centers make big headlines, but for many companies, smaller is better when it comes to IT. A number of factors are driving companies to build smaller data centers compared with what they might have done several years ago. These factors include a stagnant (at best) economy, rising energy prices, improvements in cooling technology and greater reliance on free cooling, virtualization and consolidation, modular data center design strategies, higher-density deployments, and improved semiconductor process technology. Colocation and the cloud - collectively, outsourcing - also play a major role in decreasing data center size for many companies, as they eliminate the need for these companies to maintain infrastructure on premises. But outsourcing is something of a ghost effect: the data center capacity and infrastructure doesn't evaporate; it is simply moved to another company. And this shift is part of what's driving the growth of mega data centers that provide cloud services. Nevertheless, for many companies, smaller facilities are indeed better.


IT OPS

Natural Disasters
Are Not the Only Reason to Have a
Data Recovery Plan By robert gast

It's no surprise that news stories are prioritized. The more violent they are, the more public interest and the higher they rank. So it stands to reason that disasters of magnitude, especially natural disasters, make front-page news. But another reason newspapers lead with disasters is that stories of this nature trigger empathy. As human beings we readily identify with others affected by tragedy, even envision our own survival in comparable situations.

For IT professionals, there's a similar correlation when it comes to IT disasters. As a group, we understand the magnitude of responsibility involved in restoring primary production servers or recovering lost data in the aftermath of an IT disaster. This is because we know all too well that an interrupted flow of accurate, real-time information can critically cripple a business.

Yet, while events like fires, floods, hurricanes and tornadoes are perhaps the most newsworthy for the general public, in the IT world they're just a few of the circumstances responsible for bringing a company to a standstill. Downtime from power failures, malicious software attacks, disgruntled employees and a host of other problems can be just as devastating to a corporate data environment. For the most part, when business stops, costs mount quickly.

Although unplanned downtime (such as that associated with natural disasters or hardware and software failures) can be devastating, the majority of system and data unavailability is actually the result of planned downtime due to required maintenance. In fact, statistics show that unplanned downtime accounts for only about 20 percent of all downtime, whereas planned downtime is close to 80 percent. Take the case of a 24/7 enterprise. Performing critically important hardware, software, security and data maintenance can be tricky when production servers can't be taken down during business hours. Yet, even with a round-the-clock business, downtime is essential for backups and for implementing various software upgrades to protect the business and keep it competitive.

When it comes to planned downtime, there are three primary subsets:

1. Normal IT-infrastructure operations: These are activities that are performed on a regular basis to maintain system protection and health. For instance, e-mail is a mission-critical system for most companies, and for good reason. Recently, a delay in e-mail delivery for BlackBerry users on all U.S. cell phone networks caused enormous frustration for BlackBerry users. But in addition to users being inconvenienced in situations like this one, the financial losses that result from a maintenance-related outage can be substantial.

2. Maintenance: These are tasks involved with program and software activities, such as program fixes. Software, with its many operational layers, is a very complicated piece of the computing puzzle. Rarely are any two implementations of business-critical software systems exactly the same. Applying bug fixes, point releases or full releases can be sketchy, and sometimes serious problems go unnoticed for days or weeks. If the base software systems have been customized, the implementation window and success uncertainty factor increase exponentially.

3. Unique periodic events: This refers to deployments of hardware and software, which can usually be scheduled with substantial lead time. The reality is that several undertakings have to go right for a server migration or new software implementation project to go well. It's a mistake to think any piece of hardware or software will come alive after following a few simple steps.


Often, despite good plans and good intentions, problems arise, and the result is that these types of procedures take much longer to complete than planned. In general, technicians acknowledge this reality, yet one unintended consequence can be a knee-jerk reluctance to embark on such initiatives in the future.

Regardless of the type of downtime, mitigating the risks is essential for a company to stay afloat in today's competitive market. By following a few basic steps, protecting your data can be a fairly straightforward and doable task.

Establish a Co-location Facility

Most IT professionals agree it's just good business practice to have a plan B when it comes to IT protection. For companies with an approved business continuity plan in place and a project budget to support it, establishing an off-site co-location facility is often the first step.

Because a co-location facility is essentially a completely duplicated hot site ready to go at a moment's notice, locality and accessibility are primary considerations. Equally important is having a clear vision of how the entire data center configuration will be duplicated. Ultimately, the goal in creating a co-location is to provide a place to move a complex web of data and applications without users experiencing any downtime, regardless of the reason for the switch. For instance, the same co-location facility used in disaster recovery efforts can also be used for maximizing uptime for business-critical systems and utilized to provide better management of the data center's day-to-day operations.

Once the location is chosen, hardware and infrastructure need to be set up to facilitate the next step, the transfer of systems. This can be accomplished using tape, a removable disk, or a continuous data protection product that enables you to replicate data from one or many locations and additionally allows the systems to be switched on demand. When this is completed, the next task is determining when to bring the servers online, to test, or to configure them.

Protect Critical Servers First

Although it stands to reason that IT systems vary from company to company depending upon the type of business, in every case some systems are more critical than others. For instance, most businesses would find it difficult to survive without the following systems:

- Accounting
- Basic office applications
- ERP (Enterprise Resource Planning)
- CRM (Customer Relationship Management)
- HRM (Human Resource Management)
- Email
- Web apps and content management systems

Keeping this in mind, begin by determining and ranking the company's systems from most critical to least. Then, start with a phased approach and steadily set up the co-location facility with the most critical first, rolling out the less critical ones over time.

Centralize Operations

Finally, following the prioritizing of critical systems, consolidate and centralize remote servers to reduce the need for a separate set of technical experts at each location for regular maintenance. This also allows for a more efficient centralized management team to maintain your primary communication and maximize uptime.

Whether in the aftermath of a disaster or recouping following scheduled maintenance, the most important business continuity task is resuming business-critical operations. Start by determining where vulnerabilities lie. Then, establish a co-location site ready for any interruption, where critical systems are given top priority.

With a continuous stream of factors necessitating both planned and unplanned downtimes, having a plan in place for all IT disasters is essential to ensure quality service, regardless of the circumstance, in today's IT-dependent world.

About the Author: Robert Gast is an IT industry analyst with Vision Solutions. As a technologist, author and speaker, he has been firmly anchored in the IT industry for nearly three decades. Vision Solutions is the world's leading provider of information availability software and services for Windows, Linux, IBM Power Systems and Cloud Computing markets. For more information, visit www.visionsolutions.com.
How to Make Private Cloud Initiatives Matter to Your CEO

By Jason Cowie

Chief information officers understand private clouds and the benefits associated with the technology, but they still need to sell the initial investment to other executives: CEOs and chief financial officers. To C-level leaders, the CIO's ability to explain concrete return on investment and demonstrated value makes all the difference between embracing cloud computing and abandoning cloud initiatives that could have been hugely beneficial. So how can CIOs present the business-friendly benefits of the private cloud to their bosses?

The value of private cloud education

For starters, CIOs need to look at themselves as educators. Some might feel this is less necessary than it once was. Last fall, for example, the Computing Technology Industry Association (CompTIA) surveyed a group of 500 IT and business professionals and 400 IT firms. Among its findings were positive indicators about executive understanding of the cloud. The 58 percent of executives and IT staff who said they had significantly increased their knowledge of the cloud suggests that the CIO's role as educator should actually be shrinking.

However, there is still a serious disconnect threatening the adoption of private clouds. CIOs and business-focused executives are not necessarily always on the same page about this technology or its benefits. CIOs not only understand private clouds and the benefits associated with automation and management solutions, but they are also actively trying to sell it internally. To do so, CIOs need to think like business leaders and prepare for the questions CEOs are likely to ask:

Didn't we just invest in virtualization? Virtualization was on the IT must-have list years ago, and it required CEO and CFO convincing back then, too. CIOs need to explain that virtualization was only the beginning of their companies' journeys to private clouds. Yes, virtualization delivered increased flexibility, agility and cost savings. CIOs should tout those benefits when they bring up private cloud investments, and focus on leveraging the current investment in the virtual data center as a key component in transforming the way they deliver and manage infrastructure.

But are we about to lose technology for which we've already paid? For CIOs who pledge to embrace a pragmatic path toward a true private cloud, the answer to this question should be no. No CEO wants to rip and replace recent IT investments, and the right steps toward the cloud should not require that kind of sacrifice. The smart CIO answers this question quickly and emphatically.

Is infrastructure a critical need right now? The way we think about private clouds is evolving: from an infrastructure investment that supports the business to a strategic enabler that not only drives down costs but also improves customer service and the speed of delivery. Private clouds are about far more than infrastructure; they are fundamentally about leveraging the speed and agility of virtualization with repeatable, standardized and highly optimized processes. Therefore, CIOs should move infrastructure conversations toward more compelling arguments about competitive advantage in the market.

What challenges would a private cloud introduce? Communication is the key to avoiding the pitfalls that can derail valuable initiatives before they ever begin. CIOs should work hard to avoid confusion about expectations, results and deliverables. The creation of a cloud requires people, processes and technology, and CEOs should be made aware of that fact from the start. Process re-engineering, aligning business goals with IT objectives, and ensuring the right measurements and people are in place are essential to eventual success. Is the CEO on board? The CIO needs to know early on in the initiative.



How long is this going to take? CIOs should go into meetings with their CEOs and CFOs with recommendations on solutions that quickly and effectively deliver quantifiable private cloud computing competencies. If you cannot measure or commit to actual results in a reasonable timeframe, expect to be met with reluctance and resistance. Clearly document objectives that align to organization requirements, with a common understanding of the deliverables associated with a private cloud (self-service provisioning, IT costing & showback, service catalogs, etc.). In doing so, expectations and timeframes should be aligned with a unified vision of what is actually getting delivered.

Benefits for IT, benefits for business

Once the questions and concerns are addressed, the CIO can turn the focus to the benefits of delivering IT in a private cloud. The conversation between IT and business should focus on the quantifiable and tangible benefits of moving toward a private cloud: for example, improvements in provisioning and approval times, reduction in costs (if chargeback is being used), and improved customer service through self-service management portals.

IT teams can meet CEOs' financial expectations once they hit more strategic phases of virtualization deployments by integrating features such as IT cost visibility and chargeback. In terms of transparency, administrators and executives want to know what resources are going into the private cloud and how to curb consumption wherever possible.

This is an opportunity to promote the value of speed, agility and the simplification of data center management, since cloud management solutions automatically calculate costs, automate routine administrative tasks and help optimize the performance and configuration of the virtual data center. With constant access to the benefits of the cloud and IT costing at their fingertips, CEOs and organizations are better equipped to quantify, justify and, when appropriate, charge back infrastructure investments.

The efficiency story should also appeal to CEOs. When IT is more productive because business users can appropriately self-provision resources, the business overall is better off. C-level leaders should also be moved by the ability to monitor, measure and manage IT consumption, and by the showback that comes with cloud deployments.

Bridging the understanding gap

Private cloud adoption can transform the way IT delivers services to end users, but it typically starts with buy-in from executives beyond the data center. If CIOs and CEOs are committed to private cloud adoption, the expectations, perceived value and deliverables of the cloud must be clear from the beginning. By anticipating the questions of business leaders, by communicating honestly and completely, and by embracing the role of educator, CIOs can bridge gaps that might otherwise derail business-critical cloud pursuits.

About the author: Jason Cowie is the vice president of product management at Embotics and oversees product direction and strategy. Previously, Jason was the general manager at EMC responsible for the server management business, and he played a key role in the acquisition of Configuresoft.



IT BUSINESS

Make the Most of Application Delivery Controller Deployments

By Jesse Rothstein

Over the past decade, application delivery controllers (ADCs) have become an integral part of the IT infrastructure. I played a role in developing ADC technology at F5 Networks, where I led the design of the BIG-IP v9 product and TMOS platform. Based on my previous experience with ADCs and my current focus on application performance management (APM), I have distilled nine best practices for maximizing ADC performance. These best practices provide insight into tuning parameters, offer troubleshooting tips for ADCs, and are supported by an APM solution that delivers comprehensive real-time transaction analysis from L2 to L7 across the network, web, database, and storage tiers of applications.

ADCs evolved from load balancers that typically employed a simple catch-and-release system to buffer packets, make a load-balancing decision, and then release the packets with some form of network address translation. ADCs brought application awareness as well as a new architecture based on a high-speed, transparent full-proxy, enabling IT teams to precisely manage the delivery of application traffic within the data center. ADCs can inspect, direct, and transform network traffic; cache content; scan for security threats; perform SSL encryption or decryption; and execute an abundance of other tasks to dramatically improve the availability, speed, and efficiency of the IT infrastructure. In short, ADCs demonstrate what is possible when IT harnesses exponential increases in processing capacity.

However, as Uncle Ben once told Peter Parker in Spider-Man, "With great power comes great responsibility." ADCs are powerful but complex products that have the potential to impact network and application performance for better or worse. They perform significant content transformations, rewrite IP addresses, change HTTP headers, split and aggregate transactions across TCP connections, and directly serve cached content. If not carefully configured to match the unique network topology and mix of applications running in a data center, ADCs can introduce performance issues that are incredibly difficult to isolate and fix.

Like many other in-line network-infrastructure devices, ADCs excel at intelligently moving and manipulating large volumes of traffic at high speeds but offer limited metrics to help in tuning performance or troubleshooting problems. Management and quantification of the performance of a deployment is often a distant afterthought. These shortcomings can be addressed with an APM solution that measures in real time how well applications are delivered over complex network infrastructure. Equipped with these real-time performance metrics, IT teams can easily apply the following nine best practices for ADC deployments.

1. Establish performance baselines.

Because ADCs affect both the network and applications, they are often blamed for performance problems. Performance baselines for the virtual server, or VIP, and for servers in the back-end pool can help to verify whether the ADC is in fact responsible for performance degradations. If the VIP and back-end pool members are processing transactions over a similar time distribution, then these performance baselines absolve the ADC and point to the network or application as the source of the issue.
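One lightweight way to capture such baselines, sketched below as an illustration rather than a prescribed method, is to time the same request against the VIP and against an individual pool member and compare the results. The hostnames and URL path are placeholders, and a production baseline would normally come from an APM tool rather than an ad hoc script.

```python
# Hypothetical sketch: compare response times through the VIP and direct to a pool member.
# "vip.example.com" and "web01.example.com" are placeholder hostnames.
import time
import urllib.request

def sample_latency(url, samples=20):
    """Time repeated GET requests and return the observed latencies in seconds."""
    latencies = []
    for _ in range(samples):
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as response:
            response.read()                     # drain the body so timing covers the full transfer
        latencies.append(time.monotonic() - start)
    return latencies

def summarize(label, latencies):
    latencies = sorted(latencies)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    print(f"{label}: median={latencies[len(latencies) // 2]:.3f}s  p95={p95:.3f}s")

if __name__ == "__main__":
    summarize("VIP      ", sample_latency("http://vip.example.com/healthcheck"))
    summarize("pool node", sample_latency("http://web01.example.com/healthcheck"))
```

If the two sets of timings track each other closely, the ADC is adding little overhead and the investigation should shift to the network or the application tier.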
2. Analyze transactions in real time.

Although ADCs use periodic service checks to monitor server health, this method seldom detects intermittent failures or degraded performance. As a result, the ADC will continue to direct requests to these poorly performing servers. IT teams can avoid this inherent undersampling problem by continuously monitoring transaction performance in real time with an APM solution. With trend-based alerts that detect anomalous behavior, IT administrators can proactively fix problems before they affect end users.



3. Monitor IP fragmentation.

While the issue is simplified in IPv6, IPv4 devices and routers use IP fragmentation to deal with lowered maximum transmission units (MTUs) between endpoints. Significant IP fragmentation can degrade performance by requiring reassembly processing and more packets sent over the network, and it also increases the performance-degrading impact of even relatively minor packet loss. The problem is exacerbated when firewalls and ADCs do not forward crucial Internet Control Message Protocol (ICMP) messages and hamper the path MTU discovery process used to avoid IP fragmentation. To detect excessive IP fragmentation that can degrade performance, IT teams must monitor L3 metrics in real time, including IP fragments and ICMP destination unreachable (fragmentation required) messages.
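For teams that want to spot-check these L3 metrics from an ordinary packet capture, the sketch below counts IPv4 fragments and ICMP fragmentation-needed messages. It assumes the Scapy library and a placeholder capture file name; a continuous monitoring product would gather the same counters in real time.

```python
# Hypothetical sketch: count IPv4 fragments and ICMP "fragmentation needed" messages
# in a capture file. "adc_segment.pcap" is a placeholder filename; requires Scapy.
from scapy.all import rdpcap, IP, ICMP

packets = rdpcap("adc_segment.pcap")

total_ipv4 = 0
fragments = 0
frag_needed = 0

for pkt in packets:
    if IP not in pkt:
        continue
    total_ipv4 += 1
    ip = pkt[IP]
    # A datagram is a fragment if the More Fragments bit is set or the offset is nonzero.
    if (int(ip.flags) & 0x1) or ip.frag > 0:
        fragments += 1
    # ICMP type 3 / code 4 = destination unreachable, fragmentation needed (path MTU discovery).
    if ICMP in pkt and pkt[ICMP].type == 3 and pkt[ICMP].code == 4:
        frag_needed += 1

print(f"IPv4 packets: {total_ipv4}")
print(f"Fragments: {fragments} ({100.0 * fragments / max(total_ipv4, 1):.1f}%)")
print(f"ICMP fragmentation-needed messages: {frag_needed}")
```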
4. Provide sufficient SNAT pools.

ADCs may provide some metrics to help allocate the appropriate number of unique IP addresses for Secure Network Address Translation (SNAT), but these metrics do not take internal connections and service checks into account. Additionally, ADCs do not offer automatic alerts when IP addresses are exhausted. This limitation can be problematic because if the SNAT pool of IP addresses is too small to handle spikes in connections, then new connections may fail intermittently. For short-lived connections, IT teams should size their SNAT pool by measuring the total new connections per second to each back-end server and then dividing by the number of unique IP addresses. A value less than approximately 1,000 indicates a properly sized SNAT pool. Real-time TCP analysis can detect performance problems caused by rapid connection recycling, which can occur when the SNAT pool is too small.
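The sizing rule above is simple arithmetic, and the short sketch below walks through it with made-up traffic figures; the server names, connection rates and pool size are illustrative only.

```python
# Hypothetical sketch: apply the SNAT sizing rule of thumb using made-up traffic figures.
# For short-lived connections, new connections per second to a back-end server divided by
# the number of unique SNAT addresses should stay below roughly 1,000.
import math

new_connections_per_second = {   # observed new connections/sec to each back-end server
    "web01": 2400,
    "web02": 2200,
    "web03": 2600,
}
snat_pool_addresses = 4          # unique IP addresses currently in the SNAT pool

busiest = max(new_connections_per_second.values())
per_address = busiest / snat_pool_addresses
print(f"Busiest server: {busiest} new connections/sec")
print(f"Per SNAT address: {per_address:.0f} new connections/sec")

if per_address < 1000:
    print("SNAT pool looks properly sized for short-lived connections.")
else:
    needed = math.ceil(busiest / 1000)
    print(f"Pool looks undersized; roughly {needed} addresses would keep the ratio under 1,000.")
```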
5. Plan capacity for load-balancing pools.

While ADCs help to ensure application availability by balancing workloads across a pool of servers, even pools of servers can be overwhelmed. APM solutions that perform real-time analysis of server processing performance can provide both the historical trends needed to plan capacity as well as early warning alerts for excessive loads.

6. Adjust load-balancing methods and ratios.

Just like other high-performance systems, ADCs come with many virtual knobs and dials that allow IT teams to tune performance. One of the best ways to tune performance is to modify the load-balancing method used on the ADC, typically set by default to round-robin or ratio-LB methods. When dealing with a pool of heterogeneous servers, IT teams should adjust pool-member ratios accordingly. For example, using the Fastest setting in the F5 BIG-IP will direct traffic to the server with the fewest outstanding requests. This configuration will result in a smoother allocation of workloads across servers of different capacities. To make the correct adjustments, IT teams need to compare processing time for servers in the load-balancing pool.
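Once per-server processing times are known, candidate ratios can be derived mechanically. The sketch below, using invented server names and timings, weights each pool member inversely to its measured processing time; the resulting ratios would still be applied through the ADC's own configuration interface.

```python
# Hypothetical sketch: derive pool-member ratio weights that are inversely proportional
# to each server's measured processing time. Names and timings are illustrative only.
processing_time_ms = {    # mean server processing time per request, from APM measurements
    "web01": 40.0,
    "web02": 55.0,
    "web03": 120.0,       # older hardware, noticeably slower
}

# Faster servers should receive proportionally more traffic.
raw_weights = {name: 1.0 / t for name, t in processing_time_ms.items()}

# Normalize so the slowest server gets a weight of 1, then round to whole ratios.
baseline = min(raw_weights.values())
ratios = {name: max(1, round(w / baseline)) for name, w in raw_weights.items()}

for name, ratio in sorted(ratios.items()):
    print(f"{name}: ratio {ratio}")
```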
7. Verify the profile and mode.

ADCs offer different modes of performance depending on the profile assigned to the virtual server, such as the FastL4, performance-HTTP, and full-proxy modes in BIG-IP. The FastL4 and performance-HTTP modes offload the operational complexity of TCP to the endpoints and require fewer resources, such as processor and memory, on the BIG-IP itself. Depending on the network topology and application mix, an incorrectly selected mode can negatively impact user experience. To determine whether the correct mode is employed, IT teams require detailed TCP analysis to detect problems with congestion control, slow starts, and PAWS drops.

8. Optimize TCP settings.

When operating in full-proxy mode, ADCs offer a range of TCP-related configuration settings. While the default options represent best guesses for different environments, suboptimal configurations can degrade performance. To determine the correct settings in the TCP profile, especially Nagle's algorithm, proxy buffer thresholds, and ACK-on-push, IT teams need advanced TCP analysis to detect conditions such as receive-window throttles and zero-window advertisements.
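A rough way to look for the receive-window symptoms mentioned above is to scan a packet capture for TCP segments that advertise a zero window. The sketch below assumes the Scapy library and a placeholder capture file; a sustained run of zero-window advertisements from one host usually means that side is throttling the sender.

```python
# Hypothetical sketch: flag TCP zero-window advertisements in a capture file.
# "adc_segment.pcap" is a placeholder filename; requires Scapy.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

zero_window_sources = Counter()

for pkt in rdpcap("adc_segment.pcap"):
    if IP in pkt and TCP in pkt and pkt[TCP].window == 0:
        # RST segments legitimately carry a zero window, so ignore them.
        if not (int(pkt[TCP].flags) & 0x04):
            zero_window_sources[pkt[IP].src] += 1

for src, count in zero_window_sources.most_common(5):
    print(f"{src}: {count} zero-window advertisements")
```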
9. Evaluate HTTP-caching and HTTP-content-compression policies.

Caching, web-acceleration, and content-compression features in ADCs can dramatically improve user experience but also can cause unintended problems with applications. IT teams should monitor HTTP requests to determine appropriate content to cache or compress, as the compression of less-compressible content can degrade page render time. The HTTP analysis should show the distribution of content type delivered, along with individual URIs and the number of times they are requested.
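That distribution can come from any HTTP-visibility source. As a minimal sketch, the script below tallies content types and the most frequently requested URIs from a CSV export of HTTP transactions; the file name and column names are assumptions about the export format rather than the output of any particular tool.

```python
# Hypothetical sketch: tally content types and the most frequently requested URIs from a
# CSV export of HTTP transactions. The "uri" and "content_type" columns are assumed.
import csv
from collections import Counter

content_types = Counter()
uris = Counter()

with open("http_transactions.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        content_types[row["content_type"]] += 1
        uris[row["uri"]] += 1

print("Content-type distribution:")
for ctype, count in content_types.most_common():
    print(f"  {ctype}: {count}")

print("Top requested URIs (candidates for caching):")
for uri, count in uris.most_common(10):
    print(f"  {uri}: {count}")
```

Already-compressed types such as images and archives are usually poor candidates for further compression, while heavily requested static URIs are natural caching targets.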
ADCs, which have proven to be valuable building blocks for use in modern data centers, introduce unique IT management complexities. Knowing the best practices for ADC deployments is the first step in maximizing the performance and efficiency of this important infrastructure component, which in turn helps to maximize the value of your investment. An application performance management solution that provides real-time analysis of transactions from L2 to L7 makes it much simpler to implement these best practices and make the most of ADC deployments.

To read more about these best practices, download the white paper "Ten Best Practices for Optimizing Application Delivery Controller (ADC) Deployments."

About the author: Jesse Rothstein is the CEO and co-founder of ExtraHop Networks, which provides the most powerful network-based application performance management solution on the market. The ExtraHop system analyzes hundreds of thousands of simultaneous transactions from L2 to L7, at speeds up to a sustained 10 Gbps. Previously, Jesse architected and led the development of the BIG-IP v9 product at F5 Networks.



IT BUSINESS

Report Analytics:
Complementing Business Intelligence
For Data-Driven Business Decisions
By Michael Morrison

With the explosion of available data at an organization's disposal, and despite the
overwhelming offerings available to harness these assets, companies of all sizes
continue to grapple with capturing and presenting the information they need in an
engaging and understandable format.

In an effort to make more informed business decisions, many departments and individual line-of-business users have turned to Business Intelligence (BI) solutions as a way to mine data for both reporting and forecasting purposes and to address increasing business issues. Unfortunately, with failure rates hovering between 50% and 70% according to some studies, and business users continually looking for new tools to help them easily get at the data they need, BI alone is not always the answer.

It's well known that BI software can provide advanced reporting and predictive analysis, but for many day-to-day reporting tasks, its sophisticated functionality is overly complex. For tasks like these, business users and self-service data consumers may find the technical challenges of BI a deterrent and, as a result, often rely on the IT department to generate custom reports. This not only creates more work for an already over-taxed IT group, but also introduces delays and frustration for the users who require accurate and timely information in order to perform their jobs effectively.

The underlying issue is that business users often have all the information they need, but that information resides in existing reports and business documents scattered throughout the organization. Since they have no easy way to dynamically organize, integrate and analyze the intelligence trapped in these static documents, they are often left with less than ideal options. For instance, business users can rely on predefined BI reports from a data warehouse or enlist the expertise of the IT department to program custom reports.

According to a recent Computing poll of more than 250 people who use or rely on data derived from BI reporting software, BI complexity is an ongoing problem. Cost was cited as the most important factor in the BI software selection process (29%), followed closely by ease of use (25%). The fact that cost and ease of use are the most important drivers of software selection underscores the challenges that complex BI solutions present to the people who consume the intelligence for making data-based decisions. These challenges are driving the emergence of a new category of solutions within the BI market that industry analysts refer to as Report Analytics.

From ERP-generated reports to even mainframe reports, organizations of nearly all sizes generate thousands of reports monthly. But those reports are largely unusable without investing time and money in burdensome processes. Report Analytics software leverages an organization's existing reports and reporting processes and provides consumers with a self-service environment. This allows them to extract the relevant intelligence from any combination of these existing documents themselves and transform that information into dynamic, interactive reports for easy analysis and visualization. Whether the reports or business documents originate inside an enterprise or come from external sources like customers or suppliers, Report Analytics allows business users to create, distribute and publish these reports without time delays or involving IT. And let's face it, with increasing cost pressures and decreasing revenues, anything that bolsters productivity is critical in today's real-time business environment.



Harnessing Business Assets

In most organizations, the volume of data is enormous, creating a challenge that lies not in amassing more data, but rather in integrating and using the meaningful data that already exists. Let's face it, data exists in various static reports, transaction systems, data stores and formats, which forces professionals to spend valuable time aggregating information from various sources. And that's not all. Once they have access, they still need to manipulate the data in familiar programs like Excel in order to present the information in the right format, to the right people, instead of focusing on the more value-added activity of analyzing and acting on the data itself. This creates an issue for many businesses according to a recent Ventana Research study, which found that leveraging reports from BI systems is important to 57 percent of organizations, and getting to the data from source ERP, CRM and other applications is important to 71 percent of respondents.

To complicate matters, existing reports and business documents come from a variety of sources: internal transaction systems that generate canned ERP, HR or CRM reports, external customer or supplier systems, and personal productivity tools such as Excel or BI systems. And these reports come in multiple formats: mainframe green bar, text, ASCII, PDF, HTML, spreadsheet, log files and semi-structured documents stored in content management systems. All of these sources and formats of existing reports create a major challenge for business users who need to make timely, informed business decisions based on available data.

Making Data Meaningful

Today, the enormous amount of data has made it difficult to parse and analyze, which means that the resulting reports are suboptimal at best. The format of the report also comes with its own set of unique problems. ERP, HR and CRM systems deliver static reports, oftentimes in text or PDF format, which are inflexible and cannot be integrated with data from other reports. This does little to foster understanding, analysis or decision making, and it often does not include the unstructured, semi-structured or externally sourced data that is required to make information meaningful. As a result, most organizations spend a great deal of money and time consolidating and mapping this data into a data warehouse, data mart or other operational data store. Or worse, an organization simply abandons the idea of leveraging that data and operates by intuition rather than hard data.

Report Analytics began to take root as a way to address this longstanding business challenge. Report Analytics tools model, aggregate and transform information from any number of existing reports and business documents throughout the organization, making it easy for users to access, extract and analyze data without having to invest in new reporting solutions. Functioning as the missing link in the broader BI reporting arena, Report Analytics captures structured and semi-structured data from virtually any existing document. It also enables faster and deeper visibility into the business and better, more informed business decisions.
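To make the idea concrete, the sketch below shows the kind of extraction a Report Analytics tool automates: pulling structured rows out of a fixed-width text report so they can be filtered and summarized. The report name and column layout are invented for illustration, and commercial tools handle PDF, spreadsheet and semi-structured formats without this sort of hand-written parsing.

```python
# Hypothetical sketch: extract structured records from a fixed-width text report and
# summarize them. The report name and column positions are invented for illustration.
from collections import defaultdict

COLUMNS = {                 # field name -> (start, end) character positions in a detail line
    "region":   (0, 12),
    "customer": (12, 36),
    "invoice":  (36, 48),
    "amount":   (48, 60),
}

def parse_report(path):
    """Return the detail rows of the report as a list of dictionaries."""
    records = []
    with open(path) as report:
        for line in report:
            fields = {name: line[start:end].strip() for name, (start, end) in COLUMNS.items()}
            try:
                fields["amount"] = float(fields["amount"].replace(",", ""))
            except ValueError:
                continue        # header, separator, page footer or blank line
            records.append(fields)
    return records

# Roll the extracted rows up by region, the sort of summary a business user would build interactively.
totals = defaultdict(float)
for row in parse_report("monthly_sales_report.txt"):
    totals[row["region"]] += row["amount"]

for region, amount in sorted(totals.items()):
    print(f"{region}: {amount:,.2f}")
```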

Report Analytics & BI: Not an Either/Or Proposition

For many, Report Analytics sounds like it overlaps with Business Analytics or Business Intelligence. In actuality they are complementary, yet key differences between the two exist. For instance, in BI deployments a few staff members, largely from IT, are charged with managing the data and creating both content and reports for business users. So while BI systems are an invaluable asset for many, allowing them to discern patterns in customer behavior and align the business behind a common goal, they may not always be the best, or only, tool for generating reports. Of the IT staff questioned in the Computing study, 31 percent said their department was required to produce up to 100 reports a month, while 16 percent generated up to 500. Even worse, four percent estimated that they were required to produce more than 1,000 reports every month. The time that IT staff spent on producing these reports varied, with half taking a day or more and 10 percent estimating an average time of weeks or months.

With such a limited group responsible for developing reports for the entire organization, backlogs undoubtedly will occur. In other words, it requires lots of IT support to make information usable. And, often, while IT is off programming the custom reports, the business requirements change, meaning the data consumers receive in their custom report is once again incapable of helping them solve their particular business problem. This leads data consumers to the ever-popular BI workaround: Excel. All too often, business users dump the data they need into Excel; import additional information from transactional systems or other sources that are not in the data warehouse; and then perform analytics from there. This cumbersome manual process, while painful, is necessary, as there is often little analytic value otherwise because of the missing and incomplete data. And while it solves the business user's immediate need, getting the data they need to answer specific questions, it is fraught with peril. On a tactical level, moving data from a trusted system to the unsecure environment of Excel introduces the possibility of compromising data accuracy through simple human error. On a much more strategic level, this process completely undermines the integrity of the data. Once data is put into Excel, the business user is violating most data governance policies. The data is no longer trusted.

The Computing study demonstrated just how pervasive this problem can be. According to the financial decision makers questioned, 52 percent felt that less than half of the information needed to run the business effectively could be pulled from existing documents in the correct format without having to rely on IT. The fact that users struggle to get a grip on BI software is perhaps reinforced by the fact that for two out of three of those polled, it is not the more tech-savvy members of the IT department who are using it. More than half (54%) said responsibility for producing BI reports sat with individual business departments such as finance and sales. Worse, only 11 percent of these individual executives or business managers said they run reports themselves, meaning they had to submit a request to the IT department to perform these tasks on their behalf.

Report Analytics is a smart approach in today's economy. It is an easy, self-service solution that enables business users and data consumers to get the data they need out of existing reports. Not only does this approach leverage an organization's significant investments in enterprise applications, but it also avoids the costly route of creating a data warehouse or, worse, programming one's way to an acceptable solution.

About the Author: Michael Morrison is President and Chief Executive Officer at Datawatch Corporation, a provider of report analytics products and services. For more information contact Michael at Michael_Morrison@Datawatch.com or visit www.datawatch.com.


Yourturn

Strategies for Data Center Consolidation
Part 2 of 3: White Space

By Ed Spears

With companies relying on IT more heavily than ever, data center capacity requirements are steadily rising. Unfortunately, so are the costs associated with data center construction and operation. As a result, organizations are increasingly searching for ways to reduce the size of their data centers without compromising their ability to meet business requirements.

The good news is that companies interested in following the trend toward smaller data centers can employ a wide range of techniques to shrink their computing footprint. IT consolidation is best approached through a combination of three primary strategies: design-level, white space and grey space. By employing a combination of strategies in each area, data center managers can not only make the most of the available footprint, but also assure mission-critical systems always have the clean, reliable electricity they need to drive long-term success.

Part one of this series discussed design-level strategies for IT consolidation; the following recommends six white space tactics for shrinking the size of your data center without shrinking its operating capacity, availability or efficiency.

1. Utilize virtualization:

Virtualizing and consolidating servers enables data centers to conserve floor space by replacing large numbers of lightly utilized devices with a smaller number of heavily utilized devices. Consolidating storage resources through virtualization can similarly reduce a data center's footprint by enabling it to house more data in less hardware. Leading-edge power companies can now integrate seamlessly with virtualization management systems such as VMware vCenter or Microsoft SCVMM to enable live migration of virtual machines in case of power events. This is a significant improvement over the legacy server-shutdown capability, which no longer addresses the business continuity need inherent in any fully virtualized environment. UPSs and advanced power management systems integrate seamlessly with software from VMware, Microsoft and other manufacturers to coordinate automatic migration or shutdown and infrastructure management in a virtual environment.

2. Deploy blade servers:

Often used in conjunction with virtualization, blade servers are plug-and-play processing units with shared power feeds, power supplies, fans, cabling and storage. By compressing large amounts of computing capacity into small amounts of space, blade servers can dramatically reduce data center floor space requirements. They also enhance IT agility, since companies can simply plug in additional blades any time their processing needs grow.

3. Leverage cloud computing:

In an effort to lower overhead and increase IT efficiency, businesses around the world are rapidly adopting cloud computing solutions. Indeed, some 15 percent of all IT spending in 2011 will be tied to cloud services or infrastructure, according to analyst firm IDC.

Cloud computing can help organizations shrink their data centers in several ways. By offloading applications and infrastructure to externally hosted public cloud data centers that use the public Internet to exchange data, companies can reduce the number of servers they must own and maintain in their own facilities. Alternatively, businesses can deploy private cloud solutions that utilize the same basic technologies as the public cloud but reside on privately owned or leased servers. Through the sophisticated use of virtualization and automation, private clouds dramatically raise server utilization rates, thereby lowering floor space requirements.

4. Leverage capacity planning and asset management tools:

Deploying more server and power resources than is necessary wastes floor space, but determining exactly how much capacity is required can be difficult, especially if virtual servers or a private cloud infrastructure are used. Capacity planning and asset management tools can help size a data center optimally for current and near-term needs, saving money while trimming the physical footprint.

5. Implement a passive cooling scheme:

Today, most organizations dissipate data center heat by placing computer room air conditioning (CRAC) units around the periphery of their server floor. Unfortunately, CRAC-based cooling systems are often incapable of handling the greater power densities and temperatures associated with technologies such as virtualization and blade servers.


As a result, some companies are now investigating or implementing in-row liquid cooling systems, which take up floor space that could otherwise be dedicated to IT equipment. By utilizing a hot- or cold-aisle containment strategy instead of in-row cooling, data center managers can fit more server racks into their existing data halls while keeping operating temperatures at safe levels. One such strategy is a passive cooling system, which can come in many different types; the most efficient enclosures are equipped with a sealed rear door and chimney that captures hot exhaust air from servers and vents it directly back into the return air ducts on CRAC units. The CRAC units then chill the exhaust air and re-circulate it. A properly designed passive cooling system can cost-effectively prevent even the densest, hottest server racks from overheating. Furthermore, some passive cooling schemes don't require raised floors, enabling companies to conserve the space formerly dedicated to maintaining air flow beneath their server enclosures.

6. Switch from disk-based storage to static storage technologies:

Though they currently account for a small fraction of storage hardware sales, static storage systems are slowly gaining popularity among enterprise IT managers. Such devices save data on solid-state memory chips, much like a USB memory stick. Though generally more expensive than conventional memory technologies, static storage systems are also dramatically faster, more energy-efficient and more reliable, since they contain no moving parts. In addition, static storage devices tend to be more compact than disk-based storage systems, so deploying them can help businesses save storage-related floor space in their data center. Tape-based archiving systems are also being affected by cloud providers offering low-cost archiving storage at their facilities, something that can reduce equipment on the data center floor while still providing some disaster-proofing with data stored in another location.

Closing Thoughts

Today's IT and facilities managers face a difficult bind: though computing needs are constantly escalating, so is the cost per square foot of data center space and the price of critical supporting resources such as electricity and water.

Fortunately, there are many ways to compact a data center while still meeting business requirements. Most of them, moreover, are proven and cost-effective options for companies of almost any size. By following the white space recommendations outlined above, and the design-level strategies discussed in part one of this series, you can get more done in less space and position your company to meet its IT requirements with maximum cost effectiveness. Stay tuned for part three of the series, grey space consolidation strategies, for a complete guide to getting the most out of your physical footprint.

To read Part I of this series, go to www.datacenterjournal.com/category/dcj-expert-blogs/



Calendar

MAY

Mon May 14, 2012 | Santa Clara, CA
Uptime Institute Symposium 2012

Wed May 23, 2012 | New York, NY
2nd Annual Spring Forum on Financing, Investing, & Real Estate Development for Data Centers

Wed May 23, 2012 | Nice, France
Datacentres 2012

JUNE

Wed June 6, 2012 | San Francisco, CA
2nd Legal Forum on Cloud Computing Agreements

Thur June 7, 2012 | Chicago, IL
Greater Chicago Data Center Summit

Mon June 11, 2012 | Brussels, Belgium
BICSI European Conference and Exhibition

Tue June 12, 2012 | Boston, MA
USENIX Federated Conferences Week

Tue June 12, 2012 | New York, NY
info360: Optimizing Your Information Lifecycle

Tue June 12, 2012 | London, England
4th Cloud Computing World Forum

VENDOR INDEX

Universal Electric (www.uecorp.com) .......... Inside Front
Cable System (www.cablesys.com) .......... pg 5
Corning (www.offers.corning.com/1-EDGESolutions) .......... pg 7
Burndy (www.burndy.com) .......... pg 9
DataAire (www.dataaire.com) .......... pg 11
Server Tech (www.servertech.com) .......... pg 19
Eaton (www.switchon.eaton.com/datacenterjournal5) .......... pg 21
PDU Cables (www.pducables.com) .......... pg 23
Belden (www.belden.com) .......... pg 23
Binswanger (www.binswanger.com/sherman) .......... pg 28
AVTECH (www.avtech.com) .......... pg 31
AFCOM (www.datacenterworld.com) .......... pg 33
Americool (www.americoolinc.com) .......... pg 34
7x24 Exchange (www.7x24exchange.org) .......... pg 40
DCJ Expert Blogs (www.dcjexpertblogs.com) .......... Back