25TH PRINT EDITION
2012 and Beyond
Next Generation Optical Fiber Connectivity Solutions for the Mission Critical Data Center
FEATURE
4 DATA CENTER INDUSTRY: 2012 AND BEYOND
The approaching end of another calendar year offers a good opportunity to look back at the data center industry in 2012 and look forward to what might be ahead in 2013. Overall, many major themes of this past year will continue into next year and, possibly, beyond. Rising demand for IT services, growing concern over energy prices and consumption, and a general "do more with less" approach are likely to continue unabated.
by Jeffrey R. Clark, Ph.D.
The data center market is at the center of a number of different trends and forces that raise a number of opportunities, as well as concerns. A good way to get a perspective on current and possibly future directions is to go to the vendors at the center of the market. To this end, The Data Center Journal asked two vendors, Schneider Electric and Deerns America, to provide their takes on what's driving the industry and what the future looks like for data center facilities.
23 NEXT GENERATION OPTICAL FIBER CONNECTIVITY SOLUTIONS FOR THE MISSION CRITICAL DATA CENTER
by Bill Charuk
DESIGN CORNER
18 RESUSCITATING THE LEGACY DATA CENTER
by Eric Holzworth
As today's evolving data centers confront bandwidth capacity challenges, the migration from 10 gigabits per second (Gb/s) to 40 and 100 Gb/s, and higher-density and cloud computing requirements, choosing the correct connectivity method has never been more important to meeting the demands for optimum reliability, performance, scalability, cost effectiveness, and time to restoration.
IT OPS
28 DCIM AND CO-LOCATION: A ROADMAP FOR SUCCESS
by Constantin Delivanis
FACILITY CORNER
8 USING SOLAR ENERGY SYSTEMS TO OFFSET DATA CENTER ELECTRICITY CONSUMPTION
by Mark Crowdis
As data centers continue to face growing volumes of information to process and store, a series of surveys is highlighting what has become a serious problem: data center capacity is strained and struggling to meet ongoing demands.
If you have a data center, one way to reduce grid electricity consumption is to install a solar photovoltaic (PV) system on the facility. These systems utilize sunlight to generate electricity, which can then be used to power your data center. Solar PV costs have fallen over the past decade, making them cost-competitive solutions in more and more areas.
IT CORNER
20 KEEPING DATA IN-HOUSE: LEVERAGING THE POWER OF STORAGE AUTOMATION
by Brent Rhymes
Co-location is nothing new. A highly useful alternative to do-it-yourself, it's a viable option for any company looking to build out a data center. In this challenging economic environment, there is more pressure than ever to cut costs. As companies explore new ways to keep expenses in check, many feel it doesn't make sense to incur additional capital investment to expand in-house data centers. Enter the co-location facility, or "Co-Lo," or, a bit more broadly, the multi-tenant data center. Currently, it's estimated there are more than 1,000 co-location data centers in the United States alone.
IT BUSINESS
31 NETWORK INFRASTRUCTURE CABLING PURCHASING TRENDS SHAPING TOMORROW'S DATA CENTERS
by Bob Eskew
The data center market has been considered, in many areas, to be a bright spot in the commercial real estate industry as more and more properties are being converted to accommodate data centers. While this trend is filling the need for more mission-critical facilities, it also poses a challenge from a sustainability perspective because these facilities consume a large amount of energy.
Enterprise storage needs are continuing to grow with no slowdown in sight. With data representing the lifeline of a company, IT administrators are looking to the cloud as a means for storing, accessing and sharing data. IDC research predicts that by 2015, public storage cloud services will grow by more than 25 percent. While there is no doubt that these clouds can be useful for certain applications, not all provide the right availability, performance and protection for an enterprise's critical business data.
Structured cabling network infrastructures are playing a more prominent role than ever before in the reliability and operation of data centers. As a result, the network infrastructure cabling purchases made today will have an impact on data centers for years to come. Keep reading to learn about the trends that are driving today's network infrastructure purchases.
All rights reserved. No portion of DATA CENTER Journal may be reproduced without written permission from the Executive Editor. The management of DATA CENTER Journal is not responsible for opinions expressed by its writers or editors. We assume that all rights in communications sent to our editorial staff are unconditionally assigned for publication. All submissions are subject to unrestricted right to edit and/or to comment editorially.
AN EDM2R ENTERPRISES, INC. PUBLICATION ALPHARETTA, GA 30022 PHONE: 678-762-9366 FAX: 866-708-3068 | WWW.DATACENTERJOURNAL.COM DESIGN: NEATWORKS, INC., JOHNS CREEK, GA 30022 TEL: 678-392-2992 | WWW.NEATWORKSINC.COM
www.datacenterjournal.com
VENDOR INDEX
Universal Electric .......................... Inside Front (www.starlinepower.com)
Server Tech ................................. pg 1 (www.servertech.com)
Cable System ................................ pg 5 (www.cablesys.com/pnp)
PDU Cables .................................. pg 7 (www.pducables.com)
DataAire .................................... pg 9 (www.dataaire.com)
Eaton ....................................... pg 11 (www.switchon.eaton.com/datacenterjournal11)
Corning ..................................... pg 13 (www.offers.corning.com/1-EDGESolutions)
Belden ...................................... pg 15 (www.belden.com)
System Sensor ............................... pg 17 (www.systemsensor.com/faast)
AVTECH ...................................... pg 21 (www.avtech.com)
Emerson ..................................... pg 22 (www.emerson.com)
Sumitomo Electric ........................... pg 27 (www.sumitomoelectric.com)
Data Specialties, Inc. ...................... pg 29 (www.webuilddatacenters.com)
PDU Cables .................................. pg 30 (www.pducables.com)
AFCOM ....................................... Inside Back (www.datacenterworld.com)
DCJ Expert Blogs ............................ Back (www.dcjexpertblogs.com)
Thank You
In September of 2003, The Data Center Journal was born online. We began as a small, independent publication solely for the data center industry. At that time there were no other publications that looked at the industry as a holistic entity, and none that sought to bring the areas of IT, facilities and design together. Our goal was to provide those who worked in the industry a place where data center information could be disseminated in an unbiased and educational forum. In the fall of 2006 we brought that insight into the print realm, and The Data Center Journal Magazine was born.

It is with great pride that we present to you the 25th issue of DCJ Magazine. Over the years we have provided you information on women in the DC industry, green initiatives, efficiency, storage, the cloud and virtualization, and each fall we take a look back at the year and give a Year in Review. In 2007, we started converting the magazine to a digital format as well as keeping the traditional print. We host each magazine on the website for you to access 24/7. So, if you are new to DCJ Magazine, don't worry: you can access all issues since the spring of 2007 online. What a great resource for the industry!

Since then, many publications have popped up both online and in print. But over the years we have remained true to our original vision by keeping advertising to a minimum and not using our site as a forum for vendors. Our publications remain content heavy, with the emphasis on education and bettering the industry as a whole. We remain a small publication that does not have to rely on turning a big profit; therefore we can continue to provide great quality content without a conflict of interest.

So, in closing, we at The Data Center Journal would like to thank you for being loyal readers over the years. If you are new to DCJ: welcome! We look forward to your being a loyal reader for years to come. Sincerely,
2012 AND BEYOND

The approaching end of another calendar year offers a good opportunity to look back at the data center industry in 2012 and look forward to what might be ahead in 2013. Overall, many major themes of this past year will continue into next year and, possibly, beyond. Rising demand for IT services, growing concern over energy prices and consumption, and a general "do more with less" approach are likely to continue unabated. The next year, however, may well see an abandonment of the idea of an economic recovery as the broader economy slides back into recession, with less-than-appealing (but not necessarily unbearable) consequences for the data center industry.
IT GROWTH
A steady trend in 2012 has been continued growth in demand for IT services in both the consumer and enterprise segments. This growth will continue to drive greater demand for energy in the data centers that deliver these services, an important subject of its own. A growing area of concern, at the intersection of the consumer and enterprise segments, is "bring your own device," where employees use personally owned devices in their work. BYOD, as it's labeled, raises a variety of issues related to security, integration with company infrastructure, ownership of data and so on. Companies will continue
to seek both good policies and, if they adopt BYOD, good methods of integrating personal devices into the work environment, both physical and virtual.
ENERGY CONSUMPTION
Yes, it might sound like a broken record, but energy is still one of the top concerns of data center operators, particularly with ever-increasing demand for services. Energy prices are rising for a variety of reasons, ranging from turmoil in the Middle East to increasingly stringent environmental standards to inflationary pressure as a result of quantitative easing (read: money printing) from the Federal Reserve. This
wouldn't be so bad, however, if demand for IT services were flat and data centers increased the efficiency of their operations. Demand is increasing, however, meaning that energy costs are rising even more. And the recent New York Times article entitled "Power, Pollution and the Internet," whatever inaccuracies it may have, highlights the problem. The next year promises no relief, barring some radical change in the industry or the broader economy. More consumers will want more applications and more services on their mobile (and other) devices, as will employees. Increasing reliance on the cloud over local computing will increase pressure on data centers to meet these needs. One question mark, at least in the U.S., however, is the matter of energy legislation, particularly regarding some form of carbon taxation or efficiency regulations. For several years, the industry has been kept in limbo regarding a national carbon tax or cap-and-trade scheme (such as in the European Union), whose costs would be passed down from utilities to consumers like companies that operate data centers. Furthermore, energy efficiency regulations may impose standards that many companies now pursue voluntarily. The year 2013 may or may not provide clarity: much depends on economic conditions. A souring economy may put a continued hold on carbon taxation and efficiency regulations.
POLITICS
Because 2012 is an election year, 2013 may see a different distribution between Republicans and Democrats in Washington (and locally). Ultimately, however, partisan bickering aside, it's difficult to find a dime's worth of difference between candidate A and candidate B (particularly in the presidential race), so the outcome of the election probably won't make much difference to the data center industry.
IT SPENDING
IDC predicts worldwide IT spending will increase by 6% in 2012 and another 6% in 2013, compared with 7% in 2011. Different regions, needless to say, will fare differently: the U.S. will see about 4% growth in 2012, whereas China will see about 14% (which is nonetheless down from 25% in the previous year). Europe, in the midst of its fiscal crisis, will see only about 1% growth. Although a return to recession in the U.S. and a deepening of the troubles in the E.U. would likely put a further damper on growth in those regions, IT spending will probably remain fairly stable barring any major economic event. Companies are increasingly looking to IT as a means of improving their businesses, through greater efficiency, for instance. But if the European Union breaks apart (does anyone remember Soviet Russia?) or another economic crash occurs in the U.S. or elsewhere in the West, all bets are off. The high growth rate of Chinese IT spending is nevertheless trending downward, which may signal a change of fortunes in that region. Profligate government spending and fiat currencies are coming back to bite the West; they may do so in the East as well.
ENERGY EFFICIENCY
Virtualization has taken on the status of a best practice in the data center. Companies looking to improve energy efficiency and to reduce capital and operational costs often implement virtualization as part of their efficiency strategy. Adoption will increase and techniques/software will improve, but the novelty has largely worn off for the industry as a whole. Free cooling received a boost in 2011 with ASHRAE's release of its new operating temperature and humidity guidelines for data centers; these guidelines enable use of outside air for cooling year-round in most locations. Cooling remains a major area of potential cost and energy savings for data centers in 2012, and this will continue into 2013. Many large companies are implementing free-cooling strategies, and smaller companies will increasingly follow suit, particularly if economic strains increase. Whatever the case, efficiency will remain an important area of discussion and innovation for the data center industry into 2013 and beyond.
BIG DATA
What it is may not be entirely clear, but it sure sounds intimidating: big data. It's a buzz phrase in the data center and IT sectors, but like "the cloud" some years ago, its precise meaning is a little fuzzy. (And it's also nearing the peak of Gartner's 2012 Emerging Technologies Hype Cycle.) What is clear is that cheap storage capacity means that maintaining huge volumes of data isn't a problem. Doing something with all that data, however, can be. And with more use of IT services, the flood of information will only increase. Look for big data to be an even bigger story in 2013 as companies look for better and more efficient ways to handle the quintillions (or more) of bits flooding their networks. And maybe with the increasing focus will come increasing clarity: what exactly is meant by the term "big data," and what differentiates it from not-so-big data?
OUTSOURCING
Yes, it's an evil word, at least in the minds of some, but outsourcing is common and will remain so. Outsourcing can be either within national bounds or beyond them (offshoring). In the data center space, it can range from an all-out approach via the cloud to halfway measures like colocation and hybrid cloud. One of the chief benefits of outsourcing in the data center market, reduced capital costs, is very appealing to companies that have IT needs but not much money to spend on new projects. A pay-as-you-go approach is preferable in many such cases. The cloud has been picking up steam as more companies employ it to supplement their IT capabilities. It will remain a strong trend in 2013 as word of potential savings filters into the C-suite of more companies.
struggling. These power sources simply cannot match coal's and nuclear's ability to provide abundant power on demand in the present grid, so the only forces driving their use are government subsidies, regulations that push up the price of coal, and companies that wish to invest in them for their own reasons. Nuclear power, the only real current competitor to coal, is still in the PR doghouse following the Fukushima disaster in Japan. In 2013, barring a real economic recovery (which is unlikely), alternative energy probably won't see many gains. Nevertheless, this is an area of great interest, albeit indirect, to the data center industry. The problem of providing abundant, affordable energy to a world constantly demanding more is one of the major challenges now, and it will remain so in the future, pending some unforeseen breakthrough.
completion in 2013 at a total cost of around $2 billion (what's one or two compared with a $16,000 billion debt?). Billed as a means of safeguarding national security, the facility has more the air of an expansion of the surveillance state. But don't worry: whatever parallel with Soviet Russia or Nazi Germany comes to mind, just remember it can't possibly happen here.
CONCLUSION
Plenty is happening in the data center industry at the technical level, as companies seek to gain greater performance from their facilities. Silicon process technologies are still producing faster and smaller chips for servers, densities are increasing to pack more computing power in less space and software is under development to handle ever more massive amounts of data. Data centers are facilities that take many months, or years, to build, so trends in the industry tend to wax and wane slowly. Many of the same themes of 2012 will carry over to 2013. Energy is the largest and most universal concern for companies with data centers. The economy is also a concern, although it is moderately stable, if unimpressive, for now.
FACILITY CORNER
BY MARK CROWDIS
There are many potential benefits to solar PV installations. Such systems can:
- Reduce energy costs
- Act as a price hedge against rising energy costs
- Reduce the amount of pollution-rich energy consumed from the grid
- Reduce carbon emissions
- Strengthen public relations
- Capitalize on under-utilized roof or ground space
It is widely acknowledged that data centers are large consumers of energy; in 2010, data centers accounted for about 1.3% of all electricity used worldwide, and about 2% of the electricity used in the U.S. Furthermore, their energy usage increases every year. Between 2005 and 2010, data center electricity usage increased by about 56% worldwide; in the U.S., it increased by about 36% during that timeframe. This energy usage pattern has raised questions about the future of data centers and how they relate to sustainability and pollution. Additionally, it has been estimated that energy expenditures account for 12% of data center expenses. From both a public relations and an economic perspective, therefore, data center owners may wish to explore ways of reducing their electricity consumption from the electric grid.
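To put those percentages in concrete terms, the sketch below estimates a facility's annual grid-electricity bill and the share a rooftop PV array might offset. All inputs (IT load, PUE, tariff, array size, and the 16% capacity factor) are assumed example values for illustration, not figures from this article.

```python
# Back-of-the-envelope estimate of annual grid cost and solar PV offset.
# All input values are hypothetical examples.

def annual_pv_offset(it_load_kw, pue, rate_per_kwh, pv_kw, capacity_factor=0.16):
    """Return (annual grid cost in $, PV output in kWh, offset fraction)."""
    hours = 8760                                   # hours per year
    facility_kwh = it_load_kw * pue * hours        # total facility draw
    grid_cost = facility_kwh * rate_per_kwh
    pv_kwh = pv_kw * capacity_factor * hours       # typical fixed-tilt output
    return grid_cost, pv_kwh, pv_kwh / facility_kwh

cost, pv_kwh, frac = annual_pv_offset(
    it_load_kw=500, pue=1.8, rate_per_kwh=0.12, pv_kw=250)
print(f"Annual grid cost: ${cost:,.0f}")
print(f"PV generation: {pv_kwh:,.0f} kWh ({frac:.1%} of facility load)")
```

Even a sizable rooftop array typically offsets only a small fraction of a data center's total load, which is why PV is usually framed as a hedge and a PR asset rather than a primary supply.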
preferably in a centralized location, or five acres of open ground space. This space cannot be significantly shaded by trees, nearby buildings, or even rooftop structures or HVAC units. For a roof-mounted solar PV system, the roof should be in good condition with a remaining lifetime of at least ten years; twenty is preferred. The roof should also have the structural strength to support the solar system and be composed of eligible materials; this excludes materials like clay tile, metal, or slate. Ideally, the surface on which the solar system is installed would be flat: it is best if the pitch of the roof does not exceed 5° for roof-mounted systems, and if the pitch of the ground does not exceed 10° for ground-mounted systems.

The cost structure must also be considered. Solar PV systems tend to make economic sense in areas with high electricity rates and strong incentives. One of the most important incentives is the 30% federal Investment Tax Credit (ITC), which is available nationwide. Other incentives are state-based, such as sales tax exemptions, property tax exemptions, grants, and rebates. Another factor that can affect project economics is the salability of solar renewable energy credits (SRECs). SRECs represent the value of the environmental attributes associated with solar electricity. They are often sold separately from the electricity generated by a solar PV system, and SREC values vary as a function of generation timeframe and location. As state budgets have tightened, some states have moved away from grants or rebates and implemented SREC-driven markets.
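The incentive arithmetic above can be sketched simply. The 30% ITC rate comes from the text; the installed cost, annual generation, and SREC price below are hypothetical examples, and the one-SREC-per-MWh convention is common but varies by jurisdiction.

```python
# Illustrative project economics: net cost after the 30% federal ITC,
# plus first-year SREC revenue. Dollar figures are assumed examples.

ITC_RATE = 0.30  # federal Investment Tax Credit, per the article

def net_cost_after_itc(installed_cost):
    """Up-front cost net of the federal tax credit."""
    return installed_cost * (1 - ITC_RATE)

def annual_srec_revenue(generation_kwh, srec_price):
    """One SREC per MWh generated (a common, jurisdiction-dependent rule)."""
    return (generation_kwh / 1000.0) * srec_price

net_cost = net_cost_after_itc(1_000_000)        # hypothetical $1M system
srec_rev = annual_srec_revenue(350_400, 150)    # hypothetical $150/SREC
```

The credit reduces a hypothetical $1M system to roughly $700,000 up front, with SREC sales providing an ongoing revenue stream on top of avoided electricity costs.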
firm can review the equipment selection and design, a consultant can oversee the project construction, and an operation and maintenance (O&M) firm can be hired to ensure ongoing system operations. Locating, vetting, engaging, and supervising these firms does require an investment of time, of course, and should the system have unexpected operational issues, you would bear the cost of repair.

A popular financing option that minimizes risk is a power purchase agreement (PPA) structure. Under a PPA, you do not own the solar PV system. Rather, a third-party financier owns the system and sells you, the project host, the electricity generated by the solar system at a predetermined rate. This rate is often lower than the rate charged by the local utility. The financier takes on responsibility for the system design, installation, and ongoing operations; you are responsible only for purchasing the electricity produced. The financier also takes responsibility for obtaining any incentives available, and typically passes the benefits of the 30% tax credit and any other incentives through to the site host via a lower PPA electricity rate. The financier is also responsible for the sale of any SRECs; the project host can choose to purchase these, but typically they are sold to a third party or utility.
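Under a PPA, the host's savings are simply the spread between the utility tariff and the contracted PPA rate, applied to the solar energy purchased. A minimal sketch, with all rates and volumes as assumed examples:

```python
# First-year PPA savings: spread between the utility tariff and the
# contracted PPA rate, times the solar energy bought. Example values only.

def ppa_savings(solar_kwh, utility_rate, ppa_rate):
    """Annual savings from buying solar power below the utility tariff."""
    return solar_kwh * (utility_rate - ppa_rate)

savings = ppa_savings(solar_kwh=350_400, utility_rate=0.12, ppa_rate=0.095)
```

A 2.5-cent spread on roughly 350 MWh yields savings on the order of $9,000 per year in this example, with no capital outlay by the host, which is why the PPA structure is attractive to budget-constrained operators.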
the actual savings that you would enjoy, calculated by considering the electricity rate that you currently pay to a utility and/or a direct-access supplier. Before selecting a developer to work with, it may also be advisable to review draft or sample contracts from the developers; some will have unreasonable or unduly unfavorable terms. Once a vendor has been selected, it will be vitally important to negotiate the business terms of the contract appropriately. The wrong terms can leave you unprotected and open to liabilities; the right terms can provide significant protection and ensure that the solar installation is an asset to the data center. If you do have a direct-access electricity supplier, it is extremely important to review that contract to ensure that it does not preclude you from engaging in on-site generation projects.

The amount of attention required during the construction phase depends on the type of installation. For systems that are purchased, more careful attention should be paid to the installation and system commissioning; the owner may want to consider hiring a third-party firm to review the commissioning and verify proper system installation and operation. Indeed, for many firms that have little to no experience with solar PV systems, it is helpful to engage an energy consulting firm to provide guidance throughout the assessment and procurement process. Such a firm can verify that your site is suitable for solar, advise on financing methods, identify reputable development partners, thoroughly evaluate proposals, negotiate advantageous contracts, and oversee construction. As data center electricity usage continues to rise and electricity prices increase, we expect more and more data centers to procure solar PV systems to both reduce their financial exposure and improve their image.
About the Author: Mark Crowdis is the president of Reznick Think Energy, a Bethesda, Maryland-based renewable energy consulting firm that is a subsidiary of tax and accounting firm CohnReznick LLP. Elyse Rhodin is a senior analyst at Reznick Think Energy. E-mails: mcrowdis@reznickthinkenergy.com and erhodin@reznickthinkenergy.com
FACILITY CORNER
The data center market has been considered, in many areas, to be a bright spot in the commercial real estate industry as more and more properties are being converted to accommodate data centers. While this trend is filling the need for more mission-critical facilities, it also poses a challenge from a sustainability perspective because these facilities consume a large amount of energy. According to the Department of Energy (DOE), in 2006 U.S. data centers used approximately 61 billion kilowatt-hours, accounting for about 1.5 percent of all U.S. electricity consumption. More recently, a 2009 paper from the DOE stated that the annual source energy use of a two-megawatt data center is equal to the amount of energy consumed by 4,600 typical U.S. cars in one year. As the number of data centers increases, so will the demand for energy. The DOE has been working in cooperation with other agencies and the private sector to set energy efficiency measurements, metrics and reporting conventions for data centers to find potential solutions that will still meet this growing demand. According to the DOE, improving energy efficiency in data centers can have many benefits, including reducing business risk, lowering utility bills, reducing carbon emissions, and helping steer a company on a road of social responsibility. So what are some ways that developers and property owners can "green" their data centers?
Adoption of LED lighting in the U.S. over the next 20 years would reduce electricity consumption by 25 percent, save an accumulated $120 billion in energy costs, and reduce greenhouse gas emissions by 246 million metric tons of carbon. And while tenants are already seeing results in cost savings with LED lighting, Convergence has also been able to reduce the maintenance costs associated with replacing old halogen light bulbs throughout the campus.
FACILITY CORNER
The data center market is at the center of a number of different trends and forces that raise a number of opportunities, as well as concerns. A good way to get a perspective on current and possibly future directions is to go to the vendors at the center of the market. To this end, The Data Center Journal asked two vendors, Schneider Electric and Deerns America, to provide their takes on what's driving the industry and what the future looks like for data center facilities.
Despite economic, regulatory and other challenges, data center customers and operators are striving for maximum availability. Mere seconds of downtime, in some industries, can translate into huge losses in both money and reputation. Ensuring uptime is thus an ever more important priority that vendors are aiming to address. But the difficulties facing companies are not all bad; a challenging marketplace also creates opportunities for agile organizations that can foresee and meet changing demands, making flexibility an important asset.
ECONOMY: UNCERTAIN
Although the Great Recession was officially declared over, the mantra of recovery hasn't quite stuck. Unemployment has remained high (and recent gains only reflect that many unemployed people are simply giving up the search for jobs), government spending is frenzied and many companies remain wary about spending. Kevin Brown, Vice President of Data Center Global Solutions Offer for Schneider Electric's IT Business, and Dave Johnson, Senior Vice President of Home and Business Networks (also for Schneider Electric's IT Business), reflect on this uncertainty: "Overall, we continue to see an uncertain world economy and mixed business trends in our key markets. Nevertheless, the burgeoning data center market has to some extent bucked the troubled times. Our global sales grew over 11% in the first half of 2012. We have continued to focus our investments and activities around the segments, customers, applications and countries where we see the most opportunities. Luckily, the data center space continues to be an area of strong growth and opportunity."

But not all is bad in a rough economy. Tight budgets and uncertain futures can force companies to cast off complacency and begin looking for ways to run their businesses more efficiently and in a manner that better serves customers. The data center market is no exception. "In these uncertain times, companies are becoming smarter about seeking out and leveraging opportunities to save," said Brown and Johnson. "They are leveraging technologies that allow them to scale up their IT resources as they grow versus investing in everything they need up front. Companies are using scalable IT architectures and leveraging new IT technologies that are faster, cheaper or drive greater productivity (i.e., smartphones and tablets). They are leveraging cost and flexibility advantages offered by colocation companies and cloud service providers."

Gary Cudmore, data center principal at Deerns America, also sees similar movement among customers in this sector, albeit along the lines of design challenges or competitions. "We are seeing a very interesting development: companies are issuing an open design challenge to qualified companies. The client will issue a basis of design (BOD) document and have the responding companies put together a conceptual design including construction costs and schedule. They will then select a short list of finalists to present their design. The finalists get a fee for making it to the short list and the client then selects the winner."

For Deerns America, however, success must be judged over a fairly long span of
time, owing to the scope of the projects it tackles. "Our sales funnel looks quite promising," Cudmore said, "but projects of this magnitude [$20+ million] take a long time to come to fruition: land purchase or lease, and then design and construction, can take as long as 24 to 36 months."
Previously, the challenge would be how to get higher availability levels. Now companies want high availability, but will also look for technologies and approaches that achieve the required level of availability as efficiently as possible. Thus, the challenge is not just to get the job done using fewer resources, but to do it more reliably as well. Meeting both aspects of this challenge necessitates a commitment to both efficiency and quality, both of which require some level of monetary investment. The focus on efficiency, however, aims to cut costs in the long term, both by reducing infrastructure purchases and by pushing existing infrastructure to maximize its output. "Since the financial crisis, it's clear that companies are watching their pennies very closely," note Brown and Johnson. "Add to that the continuing uncertainty and increasing speculation that the economy may ultimately be heading back into recession, given the unresolved fiscal turmoil in the European Union and slowing growth in Asia. The economic situation has translated into three changes in decision making. First, companies are looking to see if they can extend the life of the existing infrastructure. Instead of building a new data center, we see companies looking to consolidation and higher densities to squeeze what they can out of the infrastructure they have. Second, there is a greater focus on total installed first cost than there has been in the past. Companies are beginning to look at not only the equipment cost, but installation as well. We think this is one of the drivers for the interest in prefabricated modular construction. Third, they look to colocation and/or cloud providers as an outlet for capacity." IT spending is thus not all or nothing when it comes to outsourcing. Companies with in-house data centers may look to a mixed approach, combining some in-house IT with the cloud or colocation to take up slack. Cudmore notes, "Certainly the pay-as-you-grow approach is dominating the design approach. Reducing the upfront capital is a big concern. We are seeing companies looking to lease the physical infrastructure and turn it from capex to opex on the balance sheet."
In addition to taking innovative approaches to facilities, the ability to flow with market changes is also critical. "Companies are really focusing on a data center that is not only energy efficient and green," said Cudmore. "What they really want is extreme flexibility. The fear of the unknown and what the future will hold for their IT strategy is driving this requirement." The key for data center designers and engineers is to pursue implementation strategies that are agile, efficient and affordable, and that meet demand all the time. The economy makes the data center facilities task a difficult one, but it's not all bad news. "The silver lining of a recession is that customers reevaluate their priorities and challenge their traditional approaches," said Brown and Johnson. "We have sharpened our focus on new products and continued our strong investments in marketing communication and our sales staff. We have also continued to invest in R&D while capitalizing on the growth in developing economies as well as other key growth segments such as data centers and the smart grid. We continue to focus on our strong product offering as well as delivering complete solutions for our customers through architectures that span hardware, software and services."
CONCLUSIONS
The data center market is seeing rising demand, but stagnant economic conditions are putting pressure on companies to meet that demand affordably. The focus in the industry is centering on efficiency and outsourcing, with cloud and colocation providers catching the overflow from companies that do not wish to implement new or expand existing in-house infrastructure. Schneider Electric and Deerns America overlap on a number of points regarding market conditions, although each offers different points of focus for meeting the challenges of that market. Data centers have managed to dodge some of the consequences of the recession that have hit other industries much harder. No end to increasing demand for IT services seems to be in sight, so the market shows a number of positive signs. But the next year or two may signal whether companies will become more optimistic about the economy, commensurately increasing spending and investment, or whether a continued lack of growth or even a downturn means much leaner times.
Aspiration anywhere.
The FAAST Fire Alarm Aspiration Sensing Technology provides the earliest and most accurate smoke detection available. FAAST not only reduces costly downtime in your most critical applications; its unique combination of nuisance immunity, incipient fire detection, integral Internet communications, and e-mail notification capabilities opens up a world of application possibilities. It can even protect many spaces deemed impossible for traditional fire detection technologies. With FAAST, you can truly take aspiration anywhere.
DESIGN CORNER
No doubt, data is the global currency driving the world's economic engine, and there is no singular Fort Knox to store all the data. Exacerbating the situation are cloud computing services, ubiquitous connectivity for mobile devices, government compliance and regulations, as well as backup, storage and disaster recovery needs. This new data-driven economy has contributed to the massive data center construction era we are now living in: think Google, Amazon, Apple and Facebook. As data centers continue to face growing volumes of information to process and store, a series of surveys is highlighting what has become a serious problem: data center capacity is strained and struggling to meet ongoing demands. According to an Uptime Institute survey of 525 data center operators and owners (71 percent of which were located in North America), more than a third of data center facilities will run out of space, power, or cooling, or all of the above, by the end of 2012. Part of the problem facing enterprises is that most data centers were built 10 to 15 years ago to support mainframe technology and lack the capabilities needed to support current technologies, David Cappuccio, Gartner vice president and chief of research for data centers, said in a recent article in InformationWeek. At the height of the Dot-Bomb era (1998 to 2002), a conservative estimate of over five million square feet of raised-floor data center space was constructed to meet the needs of the Killer App that was anticipated to be right around the corner. As the market imploded and the original owners of the facilities failed, opportunistic companies (both enterprise and colocation firms) snatched up these data centers for cents on the dollar, and through the ensuing years have filled them with servers running critical enterprise applications or have leased rack or cage space to thousands of colocation customers. Unfortunately for today's owners and operators of those facilities, they were designed to an average power and cooling density of 50-75 watts per square foot (W/sf). Not to mention that the facilities had redundancies and reliability far below what is required by today's high-density blade
servers and the critical applications being run on them. Fast forward to 2012, and the industry is awash in legacy data centers with too little power, too little cooling capacity, and aged, inefficient equipment nearing the end of its useful life. The design of these centers is also lacking in redundancy, concurrent maintainability, monitoring, and the advantages of more than a decade of technological advances to support the ever-increasing dependency on data. Not a pretty site (pun intended).
The need for more power and cooling, inefficient energy use and the risk of downtime due to the failure of aged infrastructure are just some of many drivers for change. The option to replicate and migrate existing IT systems to a new site can cost three times that of improving the existing facility, and can in itself be fraught with coordination challenges and risk.
When there is only one chance for success, you had better practice like it's the Super Bowl. All critical methods of procedure (MOPs) for system cutovers or tie-ins need to be rehearsed in numerous dry runs, under conditions similar to the real event (nights, weekends, etc.). This requires a commitment from all parties to devote the resources and time needed to ensure these critical events are perfectly executed.
Coordination
The mission requires careful coordination by a firm with extensive experience in renovating and improving existing critical facilities. In most cases, an integrated design, procurement and construction approach will yield the tightest coordination and eliminate miscommunications that can prove disastrous at the zero hour. This lead critical facilities partner should have both the construction skills and in-house electrical and mechanical engineering expertise to coordinate intelligently among all stakeholders in the project, collaborating with the owner's operations team, the data center customers, and the commissioning agent to ensure that the goals of all parties (performance, reliability, efficiency and financial) are achieved.
Technology
Advances such as in-row cooling, hot- and cold-aisle containment, computational fluid dynamics (CFD) modeling, CRAC units with electronically commutated (EC) fans, and UPS systems with integrated redundancies allow more power and cooling in less footprint, more efficiently and reliably than was possible 10 to 12 years ago.
Planning
Precision is key to a successful implementation plan when it comes to a complete overhaul of the electrical, mechanical and/or fire systems of an active data center. In most cases a phased installation will be required to accommodate the migration of critical loads to new infrastructure equipment, followed by decommissioning of the old infrastructure so the remaining new equipment can be implemented. Constructability issues need to be identified and solved up front to avoid surprises halfway through a critical implementation phase.
Practice
Most design or construction professionals don't consider practice to be part of their typical process. But in a live data center cutover, there is only one chance to get it right.
IT CORNER
Storage Automation
BY BRENT RHYMES
Enterprise storage needs are continuing to grow with no slowdown in sight. With data representing the lifeline of a company, IT administrators are looking to the cloud as a means for storing, accessing and sharing data. IDC research predicts that by 2015, public storage cloud services will grow by more than 25 percent. While there is no doubt that these clouds can be useful for certain applications, not all provide the right availability, performance and protection for an enterprise's critical business data. With IP policies requiring companies to keep their most sensitive data in-house, the unintended side effect is a daily flood of storage requests, draining IT resources and productivity. To address this deluge, storage administrators are examining the benefits of adding automation software to their storage environments, transforming them into private storage cloud infrastructures. Storage automation can deliver the availability, performance and protection users require, letting storage administrators keep pace with business needs and maintain control over their data. Storage
automation impacts companies by reducing resource requirements, increasing service levels and lowering provisioning costs by up to 80 percent. Private storage clouds ensure that information remains safely within the IT department's security parameters while providing the necessary storage capacity across the enterprise.
Storage requests can be fulfilled quickly and accurately by automated, self-service storage portals.
IT departments get thousands of storage-related requests a year. The introduction of a self-service portal and automated fulfillment significantly decreases the amount of time it takes for a business unit to request and receive new storage resources. This allows storage administrators to focus their time on implementing storage strategy initiatives and future automated storage administration tasks. Now, requests that once took weeks can be completed in hours or days, while minimizing human error.
Storage automation allows companies to deliver storage quickly despite IT staffing limitations.
Storage automation enables delegation and self-service. With delegation, senior personnel can focus on strategy while less senior staff deliver on storage requests. Through a self-service portal, customers can enter requests from a defined set of services applicable to their role and responsibility. Coupling delegation and self-service lightens the burden storage administrators face while meeting end-user demands in a more expedient fashion.
Consider the work that goes into expanding existing storage, re-tiering, archiving, backing up, setting up and managing replication, and possibly even cleaning up short-lived storage. It's no surprise that things that are broken will need to be fixed. These concerns are leading to automation initiatives that will allow IT staff to spend more time evaluating new storage technologies and designing storage service offerings.
AVTECH is the worldwide leader in making monitors and sensors that allow easy remote monitoring of environmental conditions in computer rooms, data centers and other types of facilities. These conditions include:
- Temperature & Heat Index
- Humidity
- Flooding / Water
- Smoke / Fire
- Main & UPS Power
- Room Entry, Motion
- Relays, Cameras & More
When threatening conditions are detected, AVTECH monitors will immediately notify managers via today's most advanced technologies, enabling a timely support response or automatic corrective action. This allows organizations to avoid or minimize downtime, protect against costly hardware damage and eliminate other related expenses. With prices from $195 to $1195, there is a solution for every organization and budget.
Eco-mode Economics
Advertorial
In today's always-on, always-efficient data center environments, it's easy to get caught up in the numbers, especially when it comes to UPS performance.
High efficiency is now a standard in UPS design, but selecting the system with the biggest savings potential requires you to take a close look at the fine print. With newer eco-mode models advertising efficiency as high as 99 percent, it pays to know what is real and what is fantasy. Following are three things you need to know when selecting your UPS system and managing it to achieve real energy savings.
The Liebert NXL from Emerson Network Power is one of the most efficient and reliable high-power UPS systems available, due to features such as Intelligent Eco-mode and active inverter technology.
Think eco-intelligent
Eco-mode has done wonders for improving UPS efficiency, but it comes with a cost: Some methods of switching to eco-mode and back again may leave availability vulnerable. To address this issue, Emerson Network Power offers an improved version called Intelligent Eco-mode. This advanced technology employs an active inverter to create an almost seamless transfer of power with a smooth waveform. While the power requirements are fractionally higher than other methods, the result is an estimated 4% to 5% energy savings over standard operation, all without compromising availability. Bottom line: To maximize energy savings and maintain reliability, select a UPS with Intelligent Eco-mode.
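A back-of-envelope sketch can show why a few points of efficiency matter at UPS scale. The load, tariff, and efficiency figures below are illustrative assumptions, not vendor data; the efficiency gap roughly mirrors the 4% to 5% savings quoted above:

```python
# Back-of-envelope UPS energy-cost comparison. All figures are assumptions
# chosen for illustration, not measurements or vendor specifications.

LOAD_KW = 500          # assumed critical IT load
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10   # assumed utility tariff, $/kWh

def annual_input_cost(efficiency):
    """Cost of the energy drawn at the UPS input to serve the load."""
    return LOAD_KW / efficiency * HOURS_PER_YEAR * PRICE_PER_KWH

double_conversion = annual_input_cost(0.94)  # typical double-conversion (assumed)
eco_mode = annual_input_cost(0.99)           # advertised eco-mode ceiling

print(f"double conversion: ${double_conversion:,.0f}/yr")
print(f"eco-mode:          ${eco_mode:,.0f}/yr")
print(f"savings:           ${double_conversion - eco_mode:,.0f}/yr "
      f"({1 - eco_mode / double_conversion:.1%} of input energy)")
```

At these assumed figures the gap works out to roughly five percent of the input energy bill, on the order of tens of thousands of dollars per year for a 500 kW load.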
Got a power management question for Peter or want a complete UPS selection checklist? Visit knowUPS.com.
By the Numbers
The accompanying infographic contrasts support systems with IT equipment across four metrics: energy use of infrastructure equipment (cooling, UPS, PDU and lighting); the global growth rate of electricity consumption between 2005 and 2010; the estimated annual cost to operate a server in a typical data center; and the estimated savings achieved by deploying a UPS with higher voltage and Intelligent Eco-mode.
IT CORNER
Next Generation Optical Fiber Connectivity Solutions for the Mission Critical Data Center
BY BILL CHARUK
As today's evolving data centers confront bandwidth capacity challenges, the migration from 10 gigabits per second (Gb/s) to 40 and 100 Gb/s, and higher density and cloud computing requirements, choosing the correct connectivity method has never been more important to meet the demands for optimum reliability, performance, scalability, cost effectiveness, and time to restoration. Various research studies assert that the physical layer accounts for 60 to 80% of network downtime, elevating connectivity from an important focus to a mission-critical decision. As the data center evolves with growing challenges, so must connectivity solutions evolve simultaneously and harmoniously to resolve those challenges. Therefore, optical fiber and connectivity manufacturers have innovated the next-generation connectivity solution: the field-installable splice-on connector, or SOC. The splice-on connector (MPO, SC, LC, FC, ST) has now been added to the mix of familiar legacy field-installable connectivity methods still widely used:
- Field-installable (puck and polish) connectors
- Pre-polished/mechanical connectors
- Factory pre-terminated cable assemblies
To make an educated connectivity decision, it is important to become familiar with the splice-on connector and to evaluate the advantages and disadvantages of each popular connectivity method deployed today, as they pertain to the high-density data center environment, according to the following major criteria:
- Performance and reliability for multimode and single-mode fiber types
- Flexibility, speed, and ease of use for fiber installations, cable builds, MACs, and restoration
- Scalability and future proofing
Within the examination of the connector methods, references to cost effectiveness are also made, providing the essential information with which to make the correct connectivity decision for the needs of the data center today and tomorrow.
Table 1: Channel Insertion Loss and Connection Loss Budgets for 40G and 100G Systems
Data Rate | Designation   | Mb/s    | Fiber Type | Fibers | Max Link Length (m) | Max Channel Insertion Loss (dB) | Allowable Connection Loss Budget (dB)
40-GbE    | 40GBase-SR4   | 40,000  | OM3        | 8      | 100                 | 1.9                             | 1.5
40-GbE    | 40GBase-SR4   | 40,000  | OM4        | 8      | 150                 | 1.5                             | 1.0
100-GbE   | 100GBase-SR10 | 100,000 | OM3        | 20     | 100                 | 1.9                             | 1.5
100-GbE   | 100GBase-SR10 | 100,000 | OM4        | 20     | 150                 | 1.5                             | 1.0
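The arithmetic behind these budgets is simple to check: the allowable connection loss is what remains of the channel insertion-loss budget after fiber attenuation. The sketch below is an illustration rather than a design tool; the 3.5 dB/km figure is the standard maximum multimode attenuation at 850 nm, and the per-connection losses are assumed values:

```python
# Sketch: check a 40G/100G multimode channel against its insertion-loss budget.
# Per-connection losses below are illustrative assumptions, not measured values.

MMF_ATTEN_DB_PER_KM = 3.5  # standard max attenuation for OM3/OM4 at 850 nm

def channel_insertion_loss_db(link_length_m, connection_losses_db):
    """Fiber attenuation over the link plus every mated-connection loss."""
    fiber_loss = link_length_m / 1000.0 * MMF_ATTEN_DB_PER_KM
    return fiber_loss + sum(connection_losses_db)

# Example: OM3 at 100 m (max channel insertion loss 1.9 dB per Table 1)
# with two MPO connections of an assumed 0.5 dB each.
total = channel_insertion_loss_db(100, [0.5, 0.5])
print(f"channel loss {total:.2f} dB vs 1.9 dB budget: "
      f"{'within budget' if total <= 1.9 else 'over budget'}")
```

Note how quickly the budget tightens: on OM4 at 150 m only 1.0 dB remains for every mated connection in the channel combined.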
The puck and polish method is demanding of the installer and requires a fairly high skill set to achieve the required insertion and return loss performance, since the end face of the ferrule must be hand polished to precisely the correct angle. Limitations include the use of epoxies, an average 10% failure rate in large projects due to human error and fatigue, limited viability in single-mode systems, and often unacceptable return loss for APC (angled physical contact) connectors. Another disadvantage is that puck and polish connectors are blind terminations: no actual or estimated loss figure for the connector is available until the full link is completed and personnel are in place at both ends of the link to complete power through testing. The mechanical connector method aligns a cleaved fiber with a pre-polished stub and uses a cam or crimp mechanism to mechanically splice the fibers. The advantages of this termination method include installation speed, a low required skill set, and a factory-polished, factory-inspected end face. Unlike the puck and polish method, these relatively expensive connectors offer various end face polishes from the factory, which are tested prior to use. The major disadvantage of this method is the reflectance and attenuation associated with the mechanical splice. To combat this, index matching gel is used to ensure reflections do not occur. However, this also presents a risk to data center systems: the refractive index of index matching gel is temperature dependent. Figure 2 shows the relationship between temperature and return loss as a function of the refractive index of the gel. When the data center is operating normally, the index matching gel poses little risk. But if the data center experiences a temperature excursion, the gel may not be able to maintain system optical integrity, or could even begin to flow out of the connector.
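The sensitivity described here follows from the Fresnel reflection at an index step. A rough sketch, with assumed index values chosen only for illustration (gels typically have a much larger dn/dT than silica glass, which is why temperature drift degrades the match):

```python
import math

def return_loss_db(n_fiber, n_gel):
    """Fresnel return loss (dB) at an index step between fiber core and gel.
    Higher is better; a perfect index match reflects essentially nothing."""
    reflectance = ((n_fiber - n_gel) / (n_fiber + n_gel)) ** 2
    return -10.0 * math.log10(reflectance)

# Assumed indices: silica core ~1.468; gel closely matched at room temperature,
# then drifted lower after a temperature excursion.
matched = return_loss_db(1.468, 1.463)
drifted = return_loss_db(1.468, 1.430)
print(f"matched: {matched:.1f} dB, drifted: {drifted:.1f} dB")
```

Even a modest drift in the gel index cuts the return loss by well over 10 dB in this sketch, which is the mechanism behind the risk the article describes.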
While a data center may never see these temperatures, it is prudent to design the optical network so that this cannot be an outcome of a temperature excursion due to loss of cooling capability, even for brief periods. Another consideration is that this method is also a blind termination process, requiring additional test equipment and time to test. Pre-terminated cable assemblies offer strong advantages, primarily the installed
connectors that have been tested to meet attenuation performance standards. They also require no skill, other than connector cleaning, to use. Unfortunately, this solution has nontechnical drawbacks that make it an expensive and inconvenient solution for the data center. The pre-terminated cable method is costly when ordered from the factory. With space at a premium, storage of cable slack can be problematic; even when lengths are engineered, the addition of a small amount of extra cable can lead to storage issues over time. The next-generation solution, the splice-on connector (MPO, SC, LC, ST), is a hybrid technology consisting of a pre-polished connector (APC, UPC, PC) and fiber stub that can be fusion spliced directly onto the end of an optical fiber. Essentially, the SOC provides the advantages of a pigtail without the negative inventory or storage issues of a splice tray or cable slack. The SOC also takes advantage of a factory-polished end face that has been tested for attenuation and inspected for other defects. The use of fusion splicing is an advantage in and of itself: there is no serious question in the industry about the fusion splice's quality in reducing attenuation and eliminating reflectance. When the fusion splice is hermetically sealed within the heat shrink or protective sleeve, an ultra-reliable optical fiber termination is achieved that is more robust and optically superior to the mechanical splice. The SOC, in effect, is a connector/pigtail that delivers a factory-quality connection in a field-installable method. The connector can be attached directly to a 250 micron or 900 micron buffered fiber, affording the flexibility for customized on-site cable builds at exact lengths while eliminating the limitations of factory pre-terminated cables. To install this connector, today's easy-to-use, low-skill fusion splicer with the appropriate fiber/connector holders is
required. Because backward compatibility with existing fusion splicers allows for more widespread adoption of this technology, connector jigs have been developed to retrofit many popular fusion splicers already owned by installers. In addition to the factory-polished end face, there are other major advantages. First, by using a fusion splicer with splice-loss estimation capability, the technician can overcome the blind splice and get a reading of the estimated splice loss before actually fusing the fiber onto the pre-cleaved fiber stub of the connector. This step increases both productivity and yield while immediately ensuring acceptable insertion loss within specified tolerances. Figures 3 and 4 show the comparative performance results of the various connector methodologies.
FLEXIBILITY, SPEED, AND EASE OF USE FOR FIBER INSTALLATIONS AND CABLE BUILDS, MACS, AND RESTORATION
The data center is a unique environment within the enterprise network. The data center is the only network where speed, reliability, security and optical performance must remain uncompromised at all times. This is true during initial installations of optical links, repairs and restorations, moves, and any changes or rerouting that may occur over the life of the data center. Next generation technologies have the overall advantage of having solved the limitations of preceding technologies. Likewise, the splice-on connector has essentially
eliminated the weaknesses of the connector methods that preceded it. By using the fusion splicer with immediate performance feedback, the SOC solves the blind terminations and blind splices, along with the problematic epoxies, associated with the puck and polish and mechanical connector methods, improving speed, productivity and performance. The SOC also comes in all varieties of needed connector types and polishes, not all of which those methods make available. The SOC likewise solves the major limitations of pre-terminated cables by eliminating short lengths, excess slack, storage problems, the risk of damaging connectors when pulling terminated cables through conduit, and the logistical delays associated with that method. Pre-terminated cables are often used in emergency repair situations and are ordered for same-day or next-day delivery; significant downtime of the optical path is possible while waiting for delivery of the assembly, and if inventory is kept on site for these occurrences, stocking the correct lengths of jumper results in high inventory costs. Similarly, the pre-terminated cable method slows down immediate data center moves, adds, and changes (MACs) while continually adding to project costs. With the SOC, exact cable builds can now be accomplished on-site quickly and easily, as can repairs, restorations, and any MACs crucial to the data center. If a new optical link needs to be established or an old one repaired, one end of an optical cable can be connectorized; the cable can then be easily routed to the termination point within the data center without concern about damaging connectors during installation. The second end can then be terminated at the exact required length, so no additional slack needs to be stored anywhere along the pathway.
Whether the case is installing new equipment requiring new types of connectors or addressing any MACs, the fusion splice-on connector allows for real-time scalability and upgrades. The technician can exchange connector types and build customized cable assemblies on-site in minutes, eliminating dependency on the factory and its related logistical delays and costs. The same applies to quick restorations. Since the fusion splicer completes the connection operation, all the technician needs to do is move the SOC from the splice area, move the heat shrink protection sleeve over the splice, and place it in the curing oven on the splicer. Assembly of the final connector is simple: slide the connector housing over the ferrule assembly and capture the strength yarns on the backside of the connector. The total process of a single fiber termination takes less than three minutes to complete.
SCALABILITY AND FUTURE PROOFING TO MEET THE NEEDS OF THE DATA CENTER TODAY AND TOMORROW
To ensure the high-density, 40 and 100 gigabit speed migration of the data center, which requires at least six to 12 times more fiber, the next-generation MPO SOC connector has been developed. Until recently, the MPO has typically been installed in a factory and shipped to the field, since no puck and polish versions exist. With the escalated use of high-precision mass fusion splicers, MPO SOCs can now be field installed with very high quality and reliability.
Chart 1: Termination Time Study, MPO Versus Single Fiber Solutions for Various Fiber Counts
The immediate on-site scalability of the SOC to customize the optical fiber
infrastructure of the data center ensures a future-proofed data center network. The MPO version enjoys the same advantages as the single fiber connector. Depending on the type of cable to be terminated, productivity and speed for fiber installations, MACs, and restoration are greatly improved. For example, terminating 12 fibers simultaneously on a ribbon cable versus twelve individual LC connectors on 3 mm single fiber cable can save approximately 86% on installation
time. Chart 1 compares installation times using an MPO SOC connector versus single-fiber connections on 3 mm cable and 900 micron fiber: as fiber counts increase, the time savings from the MPO SOC increase dramatically. Of course, to be a viable solution, the methodology must meet industry standards for optical pathways. Table 2 revisits Table 1, which outlined network standards, and adds the maximum expected connector loss for the MPO SOC.
Table 2: Network Specifications as They Relate to Limiting Case MPO SOC Connector (the Table 1 channel parameters for the 8- and 20-fiber links, plus the LYNX2 MPO SOC maximum loss in dB)
It is clear from the data presented that there is ample headroom for additional splices in the system while still meeting the end-to-end loss budgets required for 40G/100G transmission. The MPO SOC complies with TIA-604-5 (FOCIS 5) and IEC 61754-7 standards.
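The quoted ~86% figure can be reproduced with simple arithmetic. The sketch below takes the article's "less than three minutes" per single-fiber termination at face value and assumes five minutes for one mass-fusion MPO termination (a plausible figure, not taken from Chart 1 itself):

```python
# Reproduce the ~86% installation-time savings for a 12-fiber ribbon.
SINGLE_FIBER_MIN = 3.0   # per LC splice-on termination (article: under 3 minutes)
MPO_MIN = 5.0            # one mass-fusion MPO termination (assumed)
FIBERS = 12

lc_total = FIBERS * SINGLE_FIBER_MIN     # 36 minutes for 12 individual LCs
savings = 1.0 - MPO_MIN / lc_total
print(f"{savings:.0%} time savings")     # → 86% time savings
```

The exact ratio depends on the assumed MPO time, but the point stands regardless: one mass-fusion splice replaces a dozen individual terminations, so the savings scale with fiber count.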
CONCLUSION
Next generation data centers require next generation connectivity solutions. By combining the advantages of legacy connection technologies into one that uses the latest advancements in fusion splicer technology, the splice-on connector enables rapid, high-quality connectivity with the speed, performance, and immediate scalability required by today's evolving data center. As fiber counts continue to increase with the advent of 40G and 100G transmission schemes, field installation of single and MPO connectors with excellent optical integrity will be a requirement for optimum customer service and reduced operating costs, while lowering the risks associated with legacy methodology and connector design. Finally, the MPO SOC is ready to support the coming higher-speed networks, which will one day become ubiquitous in the data center, and allows for graceful growth as further advancements in network transmission continue to emerge.
About the Author: With over 20 years of experience in the communications and information industry, Bill Charuk serves as senior technical manager for Sumitomo Electric Lightwave (www.sumitomoelectric.com), overseeing its Data Center Solutions division. Bill can be reached at bcharuk@sumitomoelectric.com.
IT OPS
BY SEV ONYSHKEVYCH
Co-location is nothing new. A highly useful alternative to do-it-yourself, it's a viable option for any company looking to build out a data center. In this challenging economic environment, there is more pressure than ever to cut costs. As companies explore new ways to keep expenses in check, many feel it doesn't make sense to incur additional capital investment to expand in-house data centers. Enter co-location, the "Co-Lo," or, a bit more broadly, the multi-tenant data center. Currently, it's estimated there are more than 1,000 co-location data centers in the United States alone.

But the road to co-location is getting bumpy. With a large number of competitors, Co-Los are looking for ways to differentiate their offerings, and they can't always win by cutting prices; at the extreme, the cheapest Co-Los are pure real-estate plays. Many must get creative through new value-added services, and the answer can be found in Data Center Infrastructure Management (DCIM) tailored to handling a multi-tenant data center (or multiple centers). Simply put: customers need better visibility into their piece of the real estate and equipment within Co-Los. A strong provider will offer customers a clear roadmap, delivering all the information necessary to be successful, in real time. For years, operators achieved this in a highly manual fashion, wasting both time and money. So where do we go from here? For many, DCIM has become the GPS that leads providers in the right direction and keeps showing the desired path.
lion by 2013. This is great for the market but tough on data centers, because the cloud merely means servers and storage managed by someone else, somewhere else; there is still hardware, and it puts significant strain on current data center resources. As a recent article points out: "While the cloud term seems somewhat celestial and ethereal, the impact has been immediate and forceful on the wholesale and co-location data center industry as it strives to keep pace with the rapid spike in corporate demand, with the growth in total data usage and storage rising to nearly incomprehensible numbers." In May 2011, the Uptime Institute confirmed this claim. In a survey of data center owners and operators, 36 percent reported their data centers would run out of power, cooling, or space by 2012. According to the report, operators must now explore options to handle the load, including co-location. Sixty-two percent said they would handle demand by consolidating servers, 40 percent planned to build a new data center, and 29 percent planned to lease co-location space.
is little room for collaboration across IT and facilities. Traditional systems are severely limited in their ability to monitor systems and respond to alerts. Drawbacks include:

• Single-Owner Design: Virtually all systems were designed for corporate data centers with one owner who manages and oversees all of the equipment. In a co-location environment, different rooms, cages, racks, or servers are owned, and perhaps managed and operated, by different tenants or clients. Each tenant needs to be able to see its piece of the overall facility and the associated power equipment, such as UPSes, PDUs, and branch circuits, but with no visibility into any other tenant's location.

• Thick-Client Requirements: Most systems use thick-client protocols that don't support the modern expectation of 24/7, anywhere connectivity in a mobile business environment.

• Real-Time Monitoring: Older systems lack real-time monitoring and visibility. Gathering, blending, and analyzing performance metrics from disparate monitoring systems across geographically dispersed data centers is time consuming and prone to human error.

• Licensing Requirements: To proactively monitor a facility, all stakeholders must have access, yet most systems operate with per-user licensing agreements, making broad access cost-prohibitive and blocking critical information from those who need it.

The result is inefficient power use, higher costs, and more downtime. So where do we go from here? For many co-location providers, the answer can be found in Data Center Infrastructure Management (DCIM).
DESTINATION: DCIM
DCIM converges IT and data center facilities functionality into one unified system. The goal is to help Co-Los not only optimize energy usage, ensure correct equipment layout, and support new efforts toward virtualization, but also to allow tenants to see and monitor their piece of the data center. A good system is powered by a smart infrastructure that monitors, collects, and analyzes data in real time. True DCIM manages infrastructures holistically, including all IT assets, and efficiently links IT and facilities.
According to 451 Research, 2010 market revenues for DCIM were $245 million, and the market is expected to grow at a compound annual growth rate of 39% through 2015, surpassing $1 billion. In his report "DCIM: Going Beyond IT," Gartner analyst David J. Cappuccio reports that data center managers and co-location providers cannot control costs unless they have access to real-time data center software offering an accurate view of current consumption: "Data center managers must have the information they need to make informed decisions for effective planning and management of data center assets and physical infrastructure to ensure the service levels the company expects. They also must have the insight needed to properly plan and forecast future data center capacities, including space, power, cooling and network connection."

DCIM provides insights and drives performance throughout the data center, including data center assets and physical infrastructure. True DCIM can adapt to changing IT environments, monitoring everything from infrastructure changes and failures to power and cooling. Features of a true DCIM solution include:

• Real-Time Monitoring: Enables accurate, real-time viewing of information, allowing for quick resolution.

• Tenant-Level Views: Lets tenants monitor, manage, and be alerted on just their relevant section of the data center.

• Unified Information: A DCIM system is only as good as the information it provides. The right system creates a holistic view of all relevant information about the IT infrastructure and facility.

• Web-Based Interface: DCIM should be browser-based, with the ability to deliver alarms via multiple devices. Users should receive alarms via e-mail and Web browser as well as through traditional methods such as phone and pager.

• Easy Integration and Use: Advanced DCIM tools are easy and intuitive to use. Simple to configure in-house, they eliminate vendor and consultant time and cost.
• Vendor Neutrality: The best tools are vendor-neutral and interact with data center systems that use disparate protocols.

• Flexible Licensing: Cost should not scale with the number of decision-makers viewing information. The best systems base cost on the number of points monitored rather than on the number of users.

• Holistic Visibility: Next-generation monitoring systems provide more visibility, scalability, and control over the entire global information environment.

Gartner estimates DCIM tools can reduce operating expenses by as much as 20% and extend the life of a data center by as much as five years. As a co-location provider, that is good news for your customers and even better news for your business.
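The 451 Research projection cited earlier can be sanity-checked with simple compound-growth arithmetic. A rough check, not the firm's actual model:

```python
# Compound-growth check of the cited DCIM market projection:
# $245M in 2010 growing at a 39% CAGR over the five years through 2015.
revenue_2010_m = 245.0
cagr = 0.39
revenue_2015_m = revenue_2010_m * (1 + cagr) ** 5
print(f"Projected 2015 market: ${revenue_2015_m:,.0f}M")  # about $1,271M
```

The result lands just above $1.27 billion, consistent with the "more than $1 billion" figure in the report.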
At the highest level, DCIM saves money while empowering workers. Manual processes clearly drain resources and come at significant cost. A good DCIM offering significantly improves a team's response to requests for adding, moving, or changing computer equipment, reducing time spent analyzing, testing, and verifying capacity at each location. It also dramatically cuts downtime, which is especially important when monitoring energy-usage spikes that can shut down a system.
ACCELERATING CO-LOCATION
DCIM is quickly becoming the solution of choice for Co-Los. In this challenging environment, it is the one key differentiator that can determine success, and its benefits are played out many times over. Surveys of co-location providers have shown that DCIM helps in several areas:

• Faster response times
• Reduced risk of downtime
• Increased capacity utilization
• More accurate billing
• Ability to generate more revenue from given physical, power, and cooling capacity
• Energy savings
• Greater management involvement

This is proven time and again in several core areas. The first proof point is power management. DCIM can visualize true power usage, identifying stranded power and eliminating overbuilding. Through current and historical usage reports, Co-Los can accurately gauge power usage effectiveness and efficiency. Another way DCIM sets providers apart is capacity management, offering true insight into both IT and facilities. This allows for more effective collaboration, with the ability to maximize space and power and uncover underutilized capacity. Cooling is yet another area where DCIM sets Co-Los apart. A comprehensive DCIM solution can manage the entire cooling system, offering insight into actual capacity and utilization; it manages the system holistically and provides real-time status for each device.
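Power usage effectiveness, which providers can gauge from DCIM usage reports as described above, is a simple ratio of total facility power to IT load. A minimal sketch, with invented readings rather than measured data:

```python
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# The kW readings below are invented for illustration only.

def pue(total_facility_kw, it_equipment_kw):
    """Return power usage effectiveness for one set of readings."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: 1,500 kW total facility draw against a 1,000 kW IT load.
print(pue(1500.0, 1000.0))  # 1.5
```

Tracking this ratio over current and historical readings is what lets a Co-Lo spot stranded power: facility capacity being consumed by overhead rather than by billable IT load.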
IT BUSINESS
Network Infrastructure
BY BOB ESKEW
net, these cables limit the future uses of an infrastructure. Currently, Category 6 and Category 6a cables are the recommended standard. However, the United States is lagging when it comes to adopting Category 6a systems. According to a recent study by BSRIA, the United States' adoption rate of Category 6a systems was only 5 percent, compared with a worldwide adoption rate of 12 percent. Europe boasts a 29 percent adoption rate, the highest of any region in the world. The Asia-Pacific region's adoption rate of Category 6a was 4 percent in 2011, and South Africa's was just 3 percent. This means that much of the world is still relying on Category 5e cables. In May 2012, Cabling Installation and Maintenance published an article predicting that 10-Gbit/sec controller and adapter port shipments will surpass those of 1-Gbit/sec by 2014. Significant price drops, increases in server input/output (I/O) processing capabilities, virtualization, and big data are the driving forces behind the adoption of 10GBase-T. The market research firm Dell'Oro Group predicts that 10-Gbit/sec devices will grow at a compound annual growth rate (CAGR) of 33 percent over the next five years. In addition, new cabling industry standards and the increased
demand on data center throughput mean that 40/100G Ethernet is expected to play a significant role in the next generation of data centers. The data center market is also becoming a hotspot for higher-value products, including Multi-Fiber Push On (MPO) connectors and OM4 fiber. The increase in sales of MPOs is mainly driven by top-of-rack and end-of-row architectures, according to BSRIA. One of the biggest shifts took place this past year when fiber-optic cable product sales outpaced sales of copper cable in 2011, as reported by Cabling Installation and Maintenance. Fiber now accounts for 50 percent of data center cabling value, a 46 percent increase from 2010. In 2011, fiber-optic connectivity accounted for 41 percent of the overall data center cabling market, with copper connectivity trailing at 33 percent.
MANUFACTURING TRENDS
In recent years, buyers have been taking a closer look at where and how cables are manufactured. One reason buyers are more concerned about the origin of cables is the rise in counterfeit cables. Counterfeit cables have become a hot button issue in
the cabling industry, especially in the past year. Counterfeit cables are often perceived as coming only from overseas manufacturers, but the reality is that counterfeit cables can originate both domestically and overseas. Counterfeit cables are often made with copper-clad aluminum conductors instead of solid-copper conductors, or the conductors may be undersized. In addition, counterfeit cables often lack the proper heat-protection materials, creating a potential fire hazard. Cable that weighs significantly less than standard cable is often a telltale sign of counterfeit cable made of inferior materials. As a buyer, it is important to verify that the cables have been evaluated by Underwriters Laboratories (UL) or the Electrical Testing Laboratory (ETL). Beware that counterfeit cable may include a UL or ETL logo on the box, but that does not necessarily mean the cable has been evaluated. Verify the UL and ETL numbers on all boxes of cable with each respective organization before purchasing.
DISTRIBUTION TRENDS
The traditional structured cabling industry supply chain has followed a multi-tier distribution channel for more than 50 years. In recent years, buyers have been re-evaluating the distribution channel and weighing the benefits against the costs. The multi-tier channel can result in significant markups on structured cabling products. Products such as Category 5e cables and connectors, which are considered a commodity and are near the end of their product lifecycle, usually carry a small markup of 10 to 15 percent when sold through a multi-tier distribution channel, but new, in-demand products may carry markups as high as 70 percent. Each stop on the distribution channel adds another layer of expense to cabling products. The traditional channel begins with a manufacturer, who hires a manufacturer's rep firm to sell the manufacturer's products in a designated territory for a commission. The cabling products are then shipped to a distributor, who sells them to a contractor. The contractor is the final stop before the product is sold to end users. As a result of the added costs, many buyers are forgoing the distribution channel and purchasing cable directly from reputable manufacturers.
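The markup figures above translate into price differences like the following. The manufacturer cost is hypothetical; the percentages are the ones cited:

```python
# Price after a given percentage markup through the distribution channel.
def channel_price(manufacturer_cost, markup_pct):
    return manufacturer_cost * (1 + markup_pct / 100)

cost = 100.00  # hypothetical manufacturer cost per box of cable
print(round(channel_price(cost, 15), 2))  # commodity Cat 5e range: 115.0
print(round(channel_price(cost, 70), 2))  # new, in-demand product: 170.0
```

In a multi-tier channel each stop compounds its own margin on top of the last, which is why the spread between commodity and in-demand products grows so large by the time the product reaches the end user.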
BANDWIDTH DEMAND

Bandwidth demand is also shaping purchasing patterns in the data center industry. One of the biggest challenges facing organizations today is managing bandwidth. Data center network infrastructures are being pushed to the limit as the amount of data being transferred continues to increase exponentially. The increase in mobile devices accessing data centers is having a significant impact on bandwidth demand as well. Today's data center network infrastructures must be built with an eye toward the future to prepare for the inevitable growth of mobile-data traffic. The cable in data center network infrastructures is designed to last at least 10 years and accommodate three to four generations of electronics. As a result, future-proofing is a major concern among buyers. High-performance cables are helping data center network infrastructures handle the increasing need for speed and high-volume data transfers. As an added bonus, high-performance cables also promote energy efficiency and have a long lifespan, saving money on both energy costs and replacement cables. In addition to managing mobility, data centers also have to be equipped to handle the explosive growth of video and video conferencing in the corporate environment. Video is fast becoming a common communication tool in the workplace. In fact, one-third of all corporations report using video at least once a week, according to the 2011 IMS Enterprise Web Communications Survey. Furthermore, organizations are not just consuming video; they are also producing it and storing it for training, executive communication, collaboration, and corporate messaging purposes. However, video usage is often sporadic and unpredictable. Today's data center network infrastructures have to be built to handle the growing amount of data transfers.

About the Author: Bob Eskew, RCDD, is the founder and CEO of Automated Systems Design Inc. (ASD), which provides manufacturing, management, and maintenance of nationwide custom turnkey Information Transport Systems. ASD, a five-time Inc. 500/5000 company, manufactures high-performance iCAT-ITS copper offerings, including CAT 5E, CAT 6E, and 10-gigabit CAT 6a cables, as well as iGLO-ITS fiber optic products. For details, visit www.asdusa.com.

Sources:
• BSRIA, World Structured Cabling Market Report, April 2012. http://www.bsria.co.uk/services/market-intelligence/multi-client/cabling/
• Cabling Installation and Maintenance, "10G network-gear port shipments to surpass 1G by 2014," May 1, 2012. http://www.cablinginstall.com/index/display/article-display/3867996147/articles/cabling-installation-maintenance/volume-20/issue-5/departments/editors-picks.html
• Cabling Installation and Maintenance, "U.S. Adopting Cat 6A at Less Than Half the Worldwide Rate," July 17, 2012. http://www.cablinginstall.com/index/display/article-display/5887298825/articles/cabling-installation-maintenance/news/network-cable/cat6a/2012/july/US-adopting-Cat-6A-at-half-worldwide-rate.html
• Cabling Installation & Maintenance, "TIA-942-A data center cabling standard to recommend OM4 fiber," June 28, 2011. http://www.cablinginstall.com/index/display/article-display/6133575191/articles/cabling-installation-maintenance/news/cabling-standards/tia/2011/6/TIA-942-A-data-center-cabling-standard-to-recommend-OM4-fiber.html
AFCOM is the leading association for data center professionals, offering services that help fuel the growth and improvement of data center professionals around the world. AFCOM was established in 1981 to offer data center managers the latest information, education, networking, and technology they need to operate every aspect of the data center.
Data Center World
AFCOM's conferences, held twice each year, provide critical information that can be applied directly to everyday life in the data center. AFCOM members receive valuable discounts on conference registrations.
Propel
Get this monthly HTML newsletter, filled with the latest data center industry news, delivered to your inbox.
www.afcom.com