
INDUSTRY DEVELOPMENTS AND MODELS

The Future of Virtualization: Leveraging Mobility to Move Beyond Consolidation

John Humphreys

IDC OPINION
The server virtualization marketplace has been evolving rapidly over the past few years, and IDC has seen customer attitudes toward virtualization mature rapidly as well. As customers gain familiarity with the technology and as the technology matures, organizations are leveraging virtualization to solve far more than their server consolidation challenges. Increasingly, end users are applying virtualization to disaster recovery, high availability, and remote client computing and, ultimately, to managing the delivery of business applications to end users. These emerging use cases are the focus of this study and are predicated on three key attributes of virtualization software:

- Application isolation. Applications can be encapsulated in individual virtual machines and isolated from other applications residing on the same host. This maintains the one server, one application paradigm while still utilizing the hardware, and it avoids the application regression testing that must occur in a shared OS environment. The application isolation attribute is leveraged both in consolidating servers and in consolidating desktops onto servers running in the datacenter (so-called VDI).

- Virtual machines are files. As such, virtual machines can be copied, backed up, replicated, and moved like files. This in turn enables unique, easier, lower-cost business continuity practices and, as a result, allows customers to protect a greater percentage of assets and thus limit the cost of downtime and lost revenue associated with IT outages.

- Live migration. The ability to move a live running application from one host to another enables virtual machines to move without any application downtime. Today, the capability is largely used as a tool to address planned downtime and, increasingly, for capacity planning and load balancing across a pool of server resources. Longer term, by pairing live migration with application monitoring technology, customers will be able to manage the quality of service for entire business services, whether those services are delivered via an SOA or a traditional three-tiered architecture.


Filing Information: April 2008, IDC #211938, Volume: 1 Enterprise Virtualization Software: Industry Developments and Models

IN THIS STUDY
This IDC study provides data on how virtualization is being leveraged within the market today, as well as data and IDC's opinions on how the technology is currently evolving, the new use cases being deployed, and a longer-term view on how SOA and virtualization could come together to help move the concept of cloud computing closer to becoming a reality.

SITUATION OVERVIEW
A lot has been written in recent years about virtualization and how it is changing IT. At the highest level, virtualization technologies deliver on the key attributes of encapsulation and mobility. For the most part, the industry and customers have been focused on the encapsulation benefits and the cost reductions those benefits provide. Going forward, IDC believes that, while there are still some use cases rooted in the encapsulation attributes, the majority of new reasons to employ virtualization in the infrastructure will be associated with the mobility of virtual machines.

While this study is primarily focused on the role of virtualization technologies in implementing these new use cases, it does require firm grounding in the state of the market today and the drivers behind the first wave of virtualization adoption. As IDC has found over the past few years, virtualization technologies are largely considered mainstream by customers. Recent surveys found that over 50% of all customers are employing virtualization in support of production applications, including components of some of the most mission-critical applications such as supply chain management and enterprise resource planning. In fact, those employing virtualization in their organizations report that, on average, roughly one quarter of their production applications are running on virtual machines. Within the next 12 months, these same users expect nearly 50% of their applications to be hosted on a virtualized server.

This phenomenon is driving companies to reconsider best practices when deploying new applications, and a growing number of end users report that, unless there is an economic or technical reason not to, their stance is to deploy all new applications as virtual machines. This reversal in best practice is one of the indicators that, IDC believes, foreshadows the expanded role virtualization may have in addressing a host of IT management and process challenges.

Drivers of Virtualization 1.0


As has been well documented, the market for virtualization emerged out of the downturn in IT spending that followed the bursting of the Internet bubble. At that time, because organizations had been expanding their IT infrastructures so rapidly and with little consideration of the long-term consequences, the environment was ripe for a period of rationalization and consolidation. Companies that were consolidating whole datacenters found that the number of servers under management in their IT shops had exploded; it was not uncommon for organizations that had once had hundreds of servers under management to find that they were now managing many thousands of devices, and the IT build-out and a trend toward decentralized IT amplified each other. Rolling this up to a market level, IDC has found that by 2010, if these trends continue unabated, there will be approximately 41 million servers installed in customer sites worldwide. This marks a 700% increase over the 15-year period from 1996 to 2010.

At the same time, the drivers for this explosion made tremendous sense when looking at each investment from a tactical standpoint. The first driver for this explosion in systems is the rapid expansion in the applications that IT needs to host. Today, there is nary a business process or project that is not somehow supported by IT and one, two, or even half a dozen servers. At the same time, the mandate has been to take out as much cost as possible, so each new project is scrutinized to see whether it can support an IT investment. This, in combination with new technologies emerging into the market, led buyers to gravitate toward lower-cost systems based on the x86 processor. These systems were, and continue to be, priced orders of magnitude lower than the more centralized high-end systems favored 20 or more years ago. The gravitation toward low-end x86 servers has gotten to the point where approximately 90% of all systems sold today are based on chips from Intel and AMD. This sea change was also made possible by the wholesale support of the Windows operating system by the majority of applications that businesses need to run. The downside of Windows applications is that, historically, running more than one application on the OS has led to resource or DLL conflicts, which in turn has led to system instability. Rather than spend significant time, effort, and energy testing and regressing applications so they would work well in a shared OS environment, the best practice became to deploy only one application per server.

The result of this one-application-per-system paradigm has been a tremendous underutilization of server resources. IDC estimates that, on average, less than 10% of total server capacity is utilized over a period of weeks or months. Again, taking this to a market level, this means that today there is roughly $140 billion of server capacity sitting idle in the marketplace, equivalent to roughly a three-year supply.

The encapsulation benefits of virtualization software enabled customers to harness the growing power of x86 servers and put a greater percentage of the capacity they purchased to productive use. At the same time, because each application runs on an isolated OS, customers were able to do so without the expensive regression and testing they would have incurred in a shared OS deployment. Customers also found that by reducing the number of servers, they saved on power and cooling costs as well as the real estate expenses associated with maintaining enough datacenter space to house these systems. To put some metrics to this, customers report cutting their facilities costs by about 20% on average post virtualization. This holds the potential to return huge amounts of capital to customers, as IDC has found that, in aggregate, customers spend $29 billion annually powering and cooling their servers. Additionally, IDC has worked with customers that, through virtualization, have been able to extend the life of a facility, taking advantage of the time value of money and pushing out the multimillion-dollar capital outlay associated with building a new datacenter.

Finally, in this first phase of virtualization adoption, we are seeing that the technology has the potential to significantly alter the operating cost structure for managing IT. IDC has found that in the "physical world," most organizations employ, on average, one IT professional for every 20-30 servers installed in the datacenter. In the virtual world, discussions with early adopters have found that the same IT professional can manage 60-80 virtual machines, with some customers reporting ratios of up to 200 to 1. Being able to address operational costs, which drive 70-80% of total company spending in the realm of IT, is one of the tremendous opportunities for virtualization technologies to change the economics of IT. The other major opportunity is centered on mitigating the lost revenue attributable to system downtime. This component of the future of virtualization will be the main thrust of this report.

FUTURE OUTLOOK
The Future of Virtualization: Some Forks in the Road
As stated previously, the two key attributes of virtualization are, at the highest level, encapsulation and mobility. To date, the thrust of adoption has been on leveraging the encapsulation benefits, and we have seen virtualization deployed first in test and development scenarios (including developer workstations), then in the migration of unsupported NT4 applications, and finally for new applications being deployed in production. In this manner there is a well-defined path going forward: the industry has found a very compelling and powerful "hammer" and can now go out and look for "nails."

One area where the need for consolidation has garnered much attention is the desktop computer. Utilization of desktops is even lower than that of servers, and the environment is even more distributed, driving even bigger support costs and management headaches. Finally, the power cost savings or "green" benefits of consolidating desktops could dwarf those of servers, given that IDC estimates there are roughly 500 million corporate PCs deployed around the globe today. A few years ago, a few enterprising customers began leveraging virtualization for just such an exercise. They paired virtualization with Microsoft's Remote Desktop Protocol (RDP) to create a hosted desktop solution (see Figure 1). Today, hosted desktops, or "VDI" as the model is becoming known, are clearly the next step in the consolidation of IT infrastructure. That said, there are still some hurdles that must be overcome; first and foremost is the economics of VDI. These solutions still run at a price premium relative to traditional distributed desktops, and the two main "culprits" are storage costs and operating system expenses.

From a storage perspective, the price difference between a locally deployed 40-80GB hard drive and the same volume of space on a SAN is significant; hence, customers are also looking at lower-cost network storage options such as iSCSI or NAS when considering a move to hosted desktops. Additionally, by consolidating the local hard drive data onto networked storage, customers have the ability to consolidate the multiple copies of applications and operating systems that are typically replicated locally. IDC estimates that 15-20% of most local hard drives is dedicated to OS and application copies. Given these centralized streaming and dynamic network boot capabilities, along with the strides being made in lowering cost per gigabyte, IDC believes that storage costs will eventually cease to be an economic challenge for client consolidation.

The other major cost hurdle is operating systems. Today, to legally deploy a virtual Windows desktop, a customer must either be a Software Assurance customer and purchase a special license that Microsoft terms VECD, or it must purchase a retail copy of Windows. Because a lot of early adopters either are not Software Assurance customers or have had difficulty navigating the VECD licensing terms, they have opted for the retail approach. This has the effect of adding approximately $300 to each VDI seat a customer deploys, which in turn has priced VDI solutions above traditional desktops (somewhere around 20-30% higher; the sketch following this discussion illustrates the per-seat arithmetic). As a result, the solution has remained a niche play, and customers that are deploying solutions are doing so primarily for compliance or data security reasons. Secondary drivers of purchasing include end-user productivity gained through increased uptime as well as the better desktop management processes that hosted desktop solutions enable.

Another hurdle that VDI will need to overcome if it is to be as broadly adopted as server virtualization is performance. Unlike server virtualization, where performance impacts are negligible for the majority of applications, desktop virtualization relies on a remote graphics protocol to deliver the user interface to the screen in front of the user. Today the most commonly deployed remote graphics protocols are Microsoft's RDP and Citrix's ICA, and while much better than just a few years ago, their performance does lag that of a local system, especially when using graphics-intensive applications. Again, as with storage, much is being done in the industry to mitigate the performance issue. Thin client vendors have begun to embed graphics chips for local rendering in their point-of-presence devices. Microsoft, in addition to the improvements coming in RDP v6, recently purchased Calista to handle graphics virtualization on the server. Beyond these incremental approaches to remote graphics improvement, there are a host of net-new efforts to improve performance. These include work on kernel-based virtualization by Qumranet to drive, among other things, a new protocol it terms SPICE (Simple Protocol for Independent Computing Environments). There is also work being done by an industry consortium called Net2Display being run out of Columbia University, as well as other protocol developments from the likes of nComputing and others. And while this may seem like plumbing, and not as sexy as policy-based management or service-oriented architectures, it is critical to the successful enablement of VDI; a fast, efficient, simple, and hardware-independent remote display approach is a major gating factor in adoption and hence a valuable piece of the solution.
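
To make the licensing impact concrete, the short sketch below works through the per-seat comparison implied above. Every figure other than the roughly $300 retail license cited in the text is an assumed, illustrative number, not IDC pricing data.

```python
# Back-of-the-envelope per-seat comparison of VDI versus a traditional desktop.
# The $300 retail Windows license comes from the discussion above; the other
# per-seat figures are assumptions chosen only to illustrate the arithmetic.

traditional_desktop_cost = 900      # assumed: hardware + OS for a conventional PC
vdi_base_cost = 800                 # assumed: thin client + per-seat share of server/storage
retail_windows_license = 300        # retail OS copy cited in the report

vdi_total_cost = vdi_base_cost + retail_windows_license
premium = (vdi_total_cost - traditional_desktop_cost) / traditional_desktop_cost
print(f"VDI premium over a traditional desktop: {premium:.0%}")  # ~22% with these assumptions
```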


Finally, among the challenges to VDI adoption are desktop trends and cultural resistance to change. Currently, VDI solutions are applicable only to users with desktop systems, as a user must be connected to get access. Too many of the average laptop user's needs require offline access, which today is not handled elegantly at all. From a total available market perspective, this reduces the TAM from roughly 500 million systems to 350 million, and the trend toward deploying more laptops will likely reduce this further with every refresh cycle. IDC also believes there will be a challenge getting workers to overcome the "my PC" mentality, as well as concerns about monitoring the activity of workers running on VDI. These cultural challenges are likely to linger much longer than the hurdles associated with the economics and performance of the solution, but if the costs do not at least reach parity with traditional desktops, or if the performance lags that of a local solution, the cultural concerns will be moot.

Beyond servers and clients, there are a host of other areas companies are looking to enable with the isolation benefits of virtualization. These run the gamut from network applications that can now be scaled virtually instead of through hardware-based appliances, through to consumer plays in smartphones, set-top boxes, gaming, and even televisions. Most of these efforts look to virtualization as a means to reduce bill-of-materials costs and increase product margins.

FIGURE 1

Beyond Server Consolidation

[Figure: virtualization for server consolidation in the datacenter (server consolidation, extending datacenter life, "go green") alongside virtualization for client consolidation via server-hosted virtual desktops (VDI) with unique client OSs, where users interact remotely over an IP network with execution and storage resources in the datacenter. An accompanying chart shows desktop, application, and presentation virtualization revenue ($M) for 2006-2011.]

Source: IDC, 2008
The Other Fork: Virtual Machine Mobility


In addition to the consolidation benefits of virtualization, there appears to be a whole host of new use cases based on the mobility of virtual machines. In aggregate, these use cases are generally focused on reducing downtime and increasing the agility of IT. They include how virtualization can help customers avoid planned downtime, protect a greater percentage of their IT assets in case of disaster, reduce unplanned downtime due to hardware and, eventually, application and OS failures, and ultimately help to deliver on the concept of service-oriented computing.

In terms of virtual machine mobility, there are essentially two means by which a VM becomes "mobile." First, virtualization software has the effect of decoupling the application stack from the underlying hardware. It does so by essentially turning a server into a file. As such, a VM can now be copied, backed up, replicated, and moved like a file. This in turn opens the door to bringing low-cost business continuity to the rest of the IT environment. In addition to turning servers into files, a growing number of virtualization software providers have the ability to do live migrations, moving a live running VM, with OS and application, from one host to another without any downtime. Today, this capability is being used to address planned downtime needs and IT professional "quality of life" issues, such as doing hardware swaps or upgrades during normal business hours without having to take down the application. Going forward, the technology can be leveraged to create pools of computing capacity against which applications can be run and, further out, as a technology that enables SOA and true "cloud computing."
Business Continuity: Driving Interest

Increasingly, customers are beginning to recognize that IT is no longer a series of unrelated systems, each providing a discrete business function or application. Rather, IT has increasingly become one interconnected "system," and like other interconnected infrastructure systems such as communication or energy delivery, the failure of one piece can, and typically does, have a cascading impact on the whole. As one customer recently said:

[In a] traditional [DR model] you buy a cold or warm spare and you stick it at the other datacenter where it gathers dust because the odds of it being used are very low and you forget to update it. It has old firmware, old drivers, or may have been 'acquired' for use elsewhere. And then when you have to use it, all hell breaks loose.

As a result, IDC finds that partners, the government, and business units internally (through SLAs) are requiring more disaster-tolerant and more generally available IT solutions. For IT to protect the "great unprotected masses" without dramatically increasing the budget will require innovators to apply new technology in different ways to satisfy both uptime and budgetary pressures. The trajectory of adoption for business continuity is illustrated in Figure 2.

All of this indicates that business continuity is generally what Clayton Christensen terms an "underserved job": a job or process that, as Dr. Christensen and others at Innosight describe it, is important to customers and has yet to be addressed appropriately.

IDC believes virtualization is a technology that is extremely well positioned to benefit from this awakening. In a recent worldwide survey of approximately 400 customers, business continuity (BC) ranked second behind consolidation as a reason for implementing virtualization (multiple responses allowed). Interestingly, in emerging markets customers were more likely to be using virtualization software to drive BC than for consolidation. Additionally, the midmarket (100-1,000 employees) was as likely to use virtualization software for BC as for consolidation (see Figure 3).

As an example, Gannett Publishing has made very effective use of this capability. The company implemented virtualization across its primary as well as its subsidiary sites and has seen two key benefits. First, the company was able to consolidate DR services into its two primary datacenters; instead of having to maintain a third site to act as a cold backup, the two primary sites now back each other up. Second, the company has been able to offer DR as a service to its subsidiaries. This was put to dramatic use by some local and regional papers in Louisiana when Hurricane Katrina hit in 2005. As the storm approached, because the VMs and data had been replicated offsite, Gannett was able to migrate services for these properties to the Washington, DC, area, and once the storm had passed and the local infrastructure was back up and running, the company migrated the applications back to the primary sites, all without any loss of service.


FIGURE 2

Business Continuity: Leveraging Virtual Machine Files

[Figure: virtualization license revenue ($M) for 2006-2012 split by the attribute driving adoption (consolidation, file mobility, live migration), alongside a readiness map (value versus priority capability) covering disaster recovery, high availability, service-level automation, maintenance, and capacity planning. Callouts describe virtualization for DR (disaster recovery), virtualization for infrastructure availability (automatic restart of VMs on hypervisor or hardware failure), and virtualization for application availability (alternative to clustering, unplanned downtime, VM/OS/application awareness).]

Source: IDC, 2008


FIGURE 3

A Variety of Drivers: Business Continuity by Markets and Company Size

[Figure: percentage of respondents ranking each capability (test and development, server consolidation, business continuity, resource pooling, VDI, mainframe migration) as very important, split by emerging versus developed markets and by midsize versus enterprise companies. Organizations in emerging markets are more likely to employ virtualization for business continuity than for consolidation, and companies under 1,000 employees are as likely to employ virtualization for business continuity as for consolidation.]

Source: IDC's Virtualization Tracker Survey, 2007-2008

Beyond DR: High Availability for the Masses

The other area where virtualization may help change the calculus of IT is in minimizing the revenue lost because of system downtime. Said another way, virtualization may be able to drive new revenue by making applications more highly available. An IDC study from 2003 found that 44% of the nearly 1,400 customers surveyed protected less than 10% of their server assets through clustering; on a weighted average basis, this U.S. study found that companies were protecting roughly 19% of their servers with clustering technology. Most of those participants found tremendous value in clustering for availability, but they also cited ease of use as the driving reason they were not protecting more of their servers. Top reasons include difficulty in setting up, configuring, and maintaining clustered servers; application compatibility; and finding people with the right skill sets (see Figure 4). As a result, most customers protect only the most mission-critical applications against both disasters and less widespread outages.


To understand the cost-of-downtime implications of this approach at a market level, IDC has created a model that distributes the installed base of servers across a range of availability levels. These levels include AL0, which is essentially an unprotected system; AL1, which employs technologies like RAID or application logging to ensure data availability; AL3, which can be thought of as a traditional cluster providing both data and application availability; and AL4, which is a fault-tolerant system. Based on data from the clustering study and from IDC's storage group, we believe about 35% of all installed systems can be classified as AL0, 55% as AL1, 8% as AL3, and the remaining systems as AL4. Applying uptime estimates that range from 99% for AL0 systems to 99.999% for systems classified as AL4, and combining those with estimates of the planned versus unplanned downtime split and hourly downtime costs (gathered from a 2001 study by Contingency Planning Research, a division of Eagle Rock Alliance), IDC estimates that server downtime cost organizations roughly $140 billion in lost worker productivity and revenue in 2007 (a simplified version of this arithmetic appears in the sketch following this discussion). The vast majority of these losses (over 75%) are estimated to come from minimally protected or unprotected systems, which are largely thought of as having "low strategic value."

The cost of downtime to organizations has been trending up in recent years as the footprint of IT grows and as individual systems become interconnected and the IT infrastructure of a company increasingly becomes one big system. As a result, the impact of downtime is expected only to increase in subsequent years. IDC estimates that, at the current pace of growth, downtime will cost organizations worldwide over $220 billion by 2011 (see Figure 5). Intuitively, these estimates make sense, as the figure is roughly the same size as the cost of managing and administering IT, and organizations are constantly having to balance minimizing downtime costs against labor and technology expenses. While it is certainly possible to drive downtime to zero, the cost of doing so would be tremendous and would grossly outweigh the value to the organization.

In terms of near- to midterm opportunities, this use case is about leveraging virtualization to bring high availability to the masses. As stated previously, the vast majority of applications are not deployed in an HA environment because, to date, solutions such as clustering have been perceived as complex, hard to set up, and difficult to manage. To address unplanned downtime today, virtualization companies are providing an automatic restart capability for cases in which the hypervisor or host goes down for whatever reason. In these scenarios, a host failure triggers the rebooting of the VMs on other virtualized hosts in the datacenter. While this is a good start in combating the lost revenue associated with unplanned outages, knowing only what is happening at the hypervisor and hardware layers ultimately fails to deliver what customers most want: application-level awareness and action. In this way, current HA solutions in the virtualization market are "blind from the waist up." That is, they do not know what is happening inside the virtual machine. They do not know whether the operating system or application has stopped working, and that is ultimately what IT professionals charged with delivering application services most care to know.
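
The short sketch below illustrates the shape of such an availability-level model. Only the AL0 and AL4 uptime figures and the installed-base shares come from the text above; the remaining uptimes, the installed-base size, and the hourly downtime cost are assumptions for illustration, not the inputs of IDC's actual model.

```python
# Illustrative sketch of an availability-level downtime model of the kind
# described above. AL0/AL4 uptimes and the share split follow the text;
# the other inputs are assumptions and do not reproduce IDC's figures.

HOURS_PER_YEAR = 24 * 365

availability_levels = {
    "AL0": {"share": 0.35, "uptime": 0.99},     # unprotected (uptime from the text)
    "AL1": {"share": 0.55, "uptime": 0.999},    # RAID/logging (uptime assumed)
    "AL3": {"share": 0.08, "uptime": 0.9999},   # traditional cluster (uptime assumed)
    "AL4": {"share": 0.02, "uptime": 0.99999},  # fault tolerant (uptime from the text)
}

def downtime_cost(installed_base, cost_per_hour, unplanned_share=0.5):
    """Annual cost estimate: servers x unplanned downtime hours x hourly cost."""
    total = 0.0
    for level in availability_levels.values():
        servers = installed_base * level["share"]
        downtime_hours = HOURS_PER_YEAR * (1.0 - level["uptime"])
        total += servers * downtime_hours * unplanned_share * cost_per_hour
    return total

# Purely hypothetical inputs: 35 million installed servers, $200/hour blended cost.
print(f"${downtime_cost(35_000_000, 200) / 1e9:.0f}B per year")
```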


One issue hampering the delivery of application-level HA is that it requires application knowledge, which in turn means being able to collect data from every application in a company's portfolio. Building the connectors to provide this level of awareness is a daunting task. That said, there is a lot of work going on today to close this awareness gap while getting around the need to have application knowledge for the entire portfolio. Notably, Marathon Technologies has announced a solution that brings system fault tolerance to virtualization. Its solution, which can also deliver basic host-level restarts and component-level fault tolerance, relies on a virtual appliance that sits on the hypervisor and essentially recreates a clustered physical environment in the virtual world. This appliance inspects the I/O traffic coming through the hypervisor and, if it notices an absence of activity, can take action (e.g., restart all the VMs on a host, switch over to a hot standby, or switch over to a fully lockstepped virtual machine).

IDC, however, believes there is an additional opportunity between maintaining a hot standby and automatic restarts at the host level. By inspecting traffic through the hypervisor, virtual machine managers have the ability to tell whether a specific VM is active, and by assuming that a lack of activity for a certain period of time equates to a fault in the VM, that specific VM can be restarted either on the same host or on a new one (a minimal sketch of this heuristic follows below). What this approach lacks in elegance it makes up for in applicability: because it does not rely on detailed application or even OS information, it can be widely implemented, something that is critical if HA is to be brought to the vast majority of systems that currently go unprotected.
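
The following sketch shows what such an activity-based restart heuristic could look like. The hypervisor interface and thresholds are hypothetical placeholders, not Marathon's product or any specific vendor's API.

```python
# Minimal sketch of the "restart a VM when its I/O goes quiet" heuristic
# described above. HypervisorClient is a hypothetical stand-in for a real
# hypervisor management API.

import time

INACTIVITY_THRESHOLD_SECS = 120   # assumed: two minutes of silence implies a fault

class HypervisorClient:
    """Placeholder hypervisor API with dummy data so the sketch runs standalone."""
    def list_vms(self):
        return ["vm-01", "vm-02"]
    def last_io_timestamp(self, vm_id):
        return time.time() - (300 if vm_id == "vm-02" else 5)   # vm-02 has gone quiet
    def restart_vm(self, vm_id, target_host=None):
        print(f"restarting {vm_id} on {target_host or 'the same host'}")

def sweep(hv: HypervisorClient) -> None:
    """One monitoring pass: restart any VM whose I/O has stopped flowing."""
    now = time.time()
    for vm_id in hv.list_vms():
        idle_for = now - hv.last_io_timestamp(vm_id)
        if idle_for > INACTIVITY_THRESHOLD_SECS:
            # "Blind from the waist up": we cannot tell whether the OS or the
            # application failed, only that the VM has stopped generating I/O.
            hv.restart_vm(vm_id)

sweep(HypervisorClient())   # prints a restart for vm-02 only
```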

FIGURE 4

Challenges to Clustering: Ease of Use

[Figure: percentage of respondents citing each challenge to clustering, grouped into setup, compatibility, skills, maintenance, and cost/no-value problems. Specific responses include getting it operational, configurations, application compatibility, data migration, training/learning period, lack of qualified IT staff, too difficult to manage, budget constraints, support not available, and no problems.]

Source: IDC, 2008


FIGURE 5

Cost of Downtime, 2002-2011

[Figure: worldwide server installed base (millions of units) shown against server spending (capex), management spending (opex), combined power and cooling spending, and the cost of downtime in lost revenue (US$B), 2002-2011.]

Source: IDC, 2008

VM Mobility: Live Migration


Recapping where the industry is today with respect to leveraging virtual machine mobility, IDC sees two primary use cases gaining acceptance from customers. The first, leveraging virtualization as a tool to reduce or eliminate planned downtime, takes advantage of live migration capabilities. The other use case is disaster recovery; here customers take advantage of the fact that, with virtualization, servers are expressed as files, and these files can be copied, replicated, moved, and snapshotted like any other file.

The attitude of customers is shifting toward mobility as well. In 2005, when IDC surveyed virtualization customers about their motivations for deploying the technology in their organizations, only about 5% of the responses involved use cases that required mobility. When IDC repeated the survey 12 months later, the percentage wanting to leverage the mobility aspects of the technology had ballooned to approximately 21% of those surveyed. The use of live migration among customers has risen along with these new use cases: in 2005, just over 25% of those surveyed reported using the technology; this rose to nearly 45% in 2006 and over 60% in 2007.

Going forward, mobility is seen as the defining feature that will enable the technology to move beyond being just a tool for consolidation. From that perspective, IDC believes there are two emerging use cases on the adoption horizon: virtualization as a tool for capacity planning and as a solution for delivering service-oriented computing (see Figure 6). Both of these use cases leverage live migration capabilities in combination with insights into the health or utilization of the hardware and application layers, respectively.


Essentially, these use cases are the same at the virtualization technology level; it is how far up the stack instrumentation occurs that provides the difference. In the capacity planning use case, rather than trying to estimate server capacity at the host level, users can treat multiple hosts as a single pool of resources, and VM loads can be balanced across the pool based on processor, memory, and I/O utilization levels as well as policies set up by the user. These rules can pertain to individual VM requirements and to host-level utilization, from both a maximum and a minimum perspective. Users are also able to apply affinity and anti-affinity rules to guarantee that, in the case of affinity rules, two or more VMs are always hosted on the same machine and, with anti-affinity rules, that two or more VMs never reside on the same machine (two DNS servers, for instance); a minimal illustration of such rules appears in the sketch following this discussion. Companies such as Natixis Capital Markets have already begun deploying such architectures and have reported great success, with over 4,000 individual moves without incident over roughly a six-month period.

This approach to capacity planning means that IT architects and capacity planners are freed from having to "forecast," or guess at, the capacity needs of any given application they deploy, which in turn will likely result in less of the dramatic overprovisioning of resources that has gone on historically. Rather, they can plan at the resource pool level, which, because of the diversification of load, makes forecasting capacity needs much easier and more straightforward. Also, instead of purchasing excess capacity "insurance" in the form of additional servers for each application, they can overprovision at the pool level, knowing that at any given moment a few of the applications may be experiencing demand spikes but that the spikes are unlikely to extend to all applications in the company's portfolio.

Conceptually, the idea is very appealing for anyone who has ever struggled with planning the rollout of a new application, but in practice there are still some hurdles that must be overcome if the concept is to achieve broad market adoption. First and foremost is that, today, live migrations cannot cross subnet boundaries. This means that to benefit from the economies of scale this technology would provide, the hosts need access to all subnets in the pool. Effectively, this creates a situation in which the customer is giving up network security through isolation in exchange for flexibility. To address these concerns, companies like Cisco (with its Topspin acquisition), 3Leaf Networks, Xsigo, and others are working to bring I/O virtualization to the fore to deliver both secure, isolated networks and host pooling that allows migrations to occur across larger and more diverse resource pools and applications.

While the capacity planning use case is intriguing from a technology perspective, and the capabilities to apply the concept generally exist today, the value to the customer is in question. Unlike DR, load balancing across a pool of servers in an automated way will likely feel risky to the vast majority of end users. The analogy would be inventing a car that drives itself: the first users are likely to be risk takers, and even these folks will want to have their hands on or near the wheel. For some, the reward (reclaiming hours of their day) may be worth the risk, especially with successive iterations of the technology. The same will most likely be true with resource pools, and making capacity planning easier may simply not be enough of an incentive to take that risk.
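
As a concrete illustration of the affinity and anti-affinity rules mentioned above, the sketch below validates a proposed VM-to-host placement against such rules. The data model is invented for illustration and does not correspond to any particular vendor's scheduler.

```python
# Illustrative check of affinity / anti-affinity placement rules.
# VM and host names are hypothetical.

placement = {            # proposed VM -> host assignment
    "dns-1": "host-a",
    "dns-2": "host-b",
    "app-1": "host-a",
    "app-2": "host-a",
}

affinity_rules = [("app-1", "app-2")]        # these VMs must share a host
anti_affinity_rules = [("dns-1", "dns-2")]   # these VMs must never share a host

def placement_ok(placement, affinity_rules, anti_affinity_rules):
    """Return True only if every affinity and anti-affinity rule is satisfied."""
    for a, b in affinity_rules:
        if placement[a] != placement[b]:
            return False
    for a, b in anti_affinity_rules:
        if placement[a] == placement[b]:
            return False
    return True

print(placement_ok(placement, affinity_rules, anti_affinity_rules))  # True
```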


FIGURE 6

Future Use Cases of Live Migration

[Figure: use of live migration among surveyed customers (27% in 2005, 44% in 2006, 63% in 2007) and projected 2012 virtualization revenue share by attribute (isolation 16%, file mobility 42%, live migration 42%), mapped against emerging use cases: virtualization for hardware maintenance (planned downtime), virtualization for capacity planning (pools of resources, load balancing on hardware utilization and application requirements), and virtualization for policy-based automation (service-oriented computing, IT in the cloud).]

Source: IDC, 2008

Server Virtualization and the Impact on SOA

This last phase in the application of live migration marks a long-term and somewhat speculative outlook on how the technology may develop to deliver what could be termed service-centric computing. The belief is that today both SOA and virtualization are powerful forces shaping the future of IT. SOA is fundamentally rearchitecting how business applications are delivered, moving from a model of discrete applications to one of componentized services. In the same manner, virtualization is changing the way in which infrastructure is delivered, moving from static and hardwired to dynamic and extremely flexible. IDC believes that, by bringing together SOA and virtualization, the industry has the opportunity to execute on the long-held vision in which IT professionals can shift the delivery of infrastructure and applications over to management systems linked to policies and service levels set by the business.

The beginnings of this evolution can already be seen in how application services are delivered. Historically, application servers incorporated a hierarchical application stack that went from the operating system, to the Java Virtual Machine (JVM), to the application server software, and finally to a series of applications hosted in containers.
The upside was a single software stack; the downside was that the applications were not isolated from one another, and one failure in the stack could bring down multiple applications. With virtualization, this architecture could be altered so that each application is maintained in a virtual machine and each VM has its own OS, JVM, and copy of the application server software. While this works wonders for delivering better isolation of applications, it has also driven the creation of more images and software stacks to patch, update, and manage.

More recently, BEA has introduced the idea of an application server appliance with LiquidVM, Virtual Edition, and Liquid Operations Control. Here, instead of a separate OS, JVM, and application server, the stack is combined into a tuned and optimized appliance that supplies all of these services. Customers can deploy their applications right on top of this appliance, and it greatly reduces the number of software components IT has to manage. This architecture is very similar to that of SOA, in that services are run in an environment just as applications are run on an appliance.

Clearly, the big leap will be in deconstructing applications into their base services, and this will likely be more than a five-year process. But whether truly componentized applications become a reality or not, the combination of application environment sensing and the infrastructure control instantiated through virtualization is a powerful concept that could lead to IT managing the delivery of services rather than focusing on managing infrastructure (see Figure 7). If the industry can devise a way to link applications so they represent individual business processes, the concept of service-oriented computing remains feasible, as the management of the infrastructure can be linked to monitoring of the application or service environment and measured against policies and service levels set by the users. This really does represent a marriage of best-in-breed tools: application monitoring tools provide detailed data on the health, performance, and demand of a specific business application or service, while infrastructure management tools such as live migration implement the moves, adds, and changes of the individual VMs hosting the service (a minimal sketch of such a control loop follows below).

This vision for policy-based automation of the scaling, moving, resizing, provisioning, and decommissioning of virtual machines in support of application service levels is highly speculative and only roughly defined. Even more of a challenge is that bringing such a vision to fruition would require unprecedented collaboration across the industry, which is why this service-oriented computing scenario is likely still years from broad market acceptance and will likely look very different once all the vendor wrangling and positioning is complete.
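
To make the monitoring-plus-migration marriage more concrete, the sketch below shows a toy policy loop that turns application-level service metrics into infrastructure actions. The classes, thresholds, and actions are hypothetical and do not represent any vendor's product.

```python
# Toy sketch of policy-based automation: application monitoring data is
# evaluated against business-set service levels, and breaches are translated
# into infrastructure actions such as live migration. All names and numbers
# are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ServiceMetrics:
    response_time_ms: float       # from an application monitoring tool (assumed)
    host_cpu_utilization: float   # 0.0 - 1.0, from infrastructure monitoring

@dataclass
class ServicePolicy:
    max_response_time_ms: float = 500.0   # service level set by the business
    max_host_cpu_utilization: float = 0.80

def plan_actions(metrics: ServiceMetrics, policy: ServicePolicy) -> list:
    """Translate service-level breaches into infrastructure actions."""
    actions = []
    if metrics.response_time_ms > policy.max_response_time_ms:
        actions.append("provision an additional VM for the service")
    if metrics.host_cpu_utilization > policy.max_host_cpu_utilization:
        actions.append("live-migrate a VM to a less utilized host in the pool")
    return actions

print(plan_actions(ServiceMetrics(720.0, 0.92), ServicePolicy()))
```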


That said, if a version of this concept actually makes it successfully to market, it is only a small step from there to moving IT into the cloud. In that world, individual services hold no special significance or proprietary value add for the customer; it is only through the combination of these services that one is able to deliver a business application, and the unique advantage comes not from the individual services but from how they are strung together. As such, moving services, either wholesale or in portions, into the "cloud" would likely be perceived as less of a risk than moving an entire application, and the core business processes that constitute unique value add, or "special sauce," can always be retained by the organization. Again, while a fun and interesting exercise, the visions around cloud computing seem years, if not decades, from being a reality. Much work on the fundamental architecture of applications and on the management architecture still needs to be done. At the same time, customers will need to grow more comfortable with moving core applications to service providers, and comfort around data security will need to grow.

FIGURE 7

Policy-Based Management of Services Via Virtualization

[Figure: a monitoring and policy layer oversees workloads running across multiple hypervisors, with infrastructure management tying the hypervisors to the network and to shared data and images.]

Source: IDC, 2008

ESSENTIAL GUIDANCE
Use cases continue to grow, and the application of virtualization is moving beyond leveraging the isolation attributes. New use cases are based on virtual machine mobility: the first take advantage of the fact that virtual machines are essentially files, and the next wave of applications leverages live migration capabilities.


This is enabling new models for disaster recovery and high availability as well as, combined with SOA, a means to deliver true service-oriented computing. All that said, it is important to keep in mind what virtualization technologies can do well and where more work needs to be done.

The benefits of virtualization include:

- Great tool for server consolidation
- Helps reduce power and cooling costs
- Can change the server-to-admin ratio
- Consolidation works for more than just servers
- Imparts flexibility to the infrastructure and makes provisioning much faster
- Works well for hardware upgrades
- Homogenizes all the servers
- Enables BC for the "rest of the datacenter"
- Is creating whole new business models and value chains
- Lays the foundation for policy-based automation

Challenges that will need to be overcome include:

- Not for all applications: I/O-intensive apps still pay an overhead
- Doesn't consolidate images: change management is still an issue
- Speed is not a substitute for intelligence: customers can move too fast
- No application awareness: policy-based automation is not yet an option
- DR is reliant on application log shipping or replication software: better coordination is needed
- More cost-effective storage is needed
- Creates more mission-critical servers
- The impact of mobility on security is still poorly understood
- Network mobility is lagging behind
- IT, business unit, and executive team buy-in is required


There certainly are many ways to employ this technology today, and with continued refinement from vendors as well as continued experimentation and creative thinking from customers, IDC believes the application of virtualization to the problems faced by IT is only now beginning to be developed.

LEARN MORE
Related Research
- IDC's Software Taxonomy, 2008 (IDC #210828, February 2008)
- Worldwide System Infrastructure Software 2008 Top 10 Predictions (IDC #209633, December 2007)
- Worldwide and U.S. High-Availability Server 2007-2011 Forecast and Analysis (IDC #210096, December 2007)
- Customer Requirements in the Server Virtualization Marketplace (IDC #209247, November 2007)
- Worldwide Operating Systems and Subsystems 2007-2011 Forecast Update (IDC #209282, November 2007)
- Worldwide Virtual Machine Software 2006 Vendor Shares (IDC #208041, August 2007)
- Worldwide Virtual Machine Software 2007-2011 Forecast (IDC #208015, August 2007)
- SAVVIS Leverages Virtualization to Reshape Managed Services (IDC #206270, April 2007)
- Nationwide: Leveraging Policy-Based Automation in a Virtual Environment (IDC #204953, January 2007)
- Virtualization: 10 Questions with QUALCOMM (IDC #203501, October 2006)
- VMware: Virtualization 3.0 (IDC #202061, June 2006)
- Virtualization and Change and Configuration Management Software (IDC #202128, June 2006)
- Microsoft Acquires Softricity for Application Virtualization and Management (IDC #201844, May 2006)
- Lowering the Barrier to Entry: VMware Offers Server Virtual Machine Product for Free (IDC #34885, February 2006)
- IDC's Top 10 System Infrastructure Software Predictions for 2006 (IDC #34837, February 2006)
- Success and Disruption: The Broader Impact of Open Source Software (IDC #DR2006_3CAG, February 2006)
- Worldwide Virtual Environment Software 2002-2004 Competitive Market Final View (IDC #34419, November 2005)

Copyright Notice
This IDC research document was published as part of an IDC continuous intelligence service, providing written research, analyst interactions, telebriefings, and conferences. Visit www.idc.com to learn more about IDC subscription and consulting services. To view a list of IDC offices worldwide, visit www.idc.com/offices. Please contact the IDC Hotline at 800.343.4952, ext. 7988 (or +1.508.988.7988) or sales@idc.com for information on applying the price of this document toward the purchase of an IDC service or for information on additional copies or Web rights. Copyright 2008 IDC. Reproduction is forbidden unless authorized. All rights reserved.

