
Virtualization


1. Introduction

Virtualization is a proven software technology that is rapidly transforming the IT
landscape and fundamentally changing the way that people compute. Today’s powerful x86
computer hardware was designed to run a single operating system and a single application.
This leaves most machines vastly underutilized. Virtualization lets you run multiple virtual
machines on a single physical machine, sharing the resources of that single computer across
multiple environments. Different virtual machines can run different operating systems and
multiple applications on the same physical computer.

Virtualization is a framework or methodology of dividing the resources of a computer
into multiple execution environments, by applying one or more concepts or technologies such
as hardware and software partitioning, time-sharing, partial or complete machine simulation,
emulation, quality of service, and many others.

Virtualization is a technology for supporting the execution of computer program code, from
applications to entire operating systems, in a software-controlled environment. Such a Virtual
Machine (VM) environment abstracts available system resources (memory, storage, CPU
core(s), I/O, etc.) and presents them in a regular fashion, such that “guest” software cannot
distinguish VM-based execution from running on bare physical hardware.

Fig (1): Virtual Machine

Virtualization commonly refers to native virtualization, where the VM platform and
the guest software target the same microprocessor instruction set and comparable system
architectures. Virtualization can also involve execution of guest software cross-compiled for a
different instruction set or CPU architecture; such emulation or simulation environments help
developers bring up new processors and cross-debug embedded hardware.

A virtual machine provides a software environment in which software runs as if it were on
bare hardware. This environment is created by a virtual-machine monitor, also known as a
hypervisor. A virtual machine is an efficient, isolated duplicate of the real machine. The
hypervisor presents an interface that looks like hardware to the “guest” operating system. It
allows multiple operating system instances to run concurrently on a single computer; it is a
means of separating the hardware from a single operating system. The hypervisor can control
the guests’ use of CPU, memory, and storage, even allowing a guest OS to migrate from one
machine to another.

It is also a method of partitioning one physical server computer into multiple “virtual”
servers, giving each the appearance and capabilities of running on its own dedicated machine.
Each virtual server functions as a full-fledged server and can be independently rebooted.

How Does Virtualization Work?

Virtualization platforms transform, or “virtualize,” the hardware resources of an x86-based
computer (including the CPU, RAM, hard disk, and network controller) to create a
fully functional virtual machine that can run its own operating system and applications just
like a “real” computer. Each virtual machine contains a complete system, eliminating
potential conflicts. Virtualization works by inserting a thin layer of software directly on the
computer hardware or on a host operating system. This layer contains a virtual machine monitor or
“hypervisor” that allocates hardware resources dynamically and transparently. Multiple
operating systems run concurrently on a single physical computer and share hardware
resources with each other. By encapsulating an entire machine, including CPU, memory,
operating system, and network devices, a virtual machine is completely compatible with all
standard x86 operating systems, applications, and device drivers. You can safely run several
operating systems and applications at the same time on a single computer, with each having
access to the resources it needs when it needs them.
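
As a small, hedged illustration of how such guests appear to management software sitting above the hypervisor, the sketch below lists the virtual machines known to a local hypervisor through libvirt. It is an added example, not part of the original text, and assumes the libvirt Python bindings are installed and that a QEMU/KVM hypervisor is reachable at the qemu:///system URI:

    import libvirt  # libvirt Python bindings (assumed to be installed)

    # Connect to the local QEMU/KVM hypervisor; other hypervisors use different URIs.
    conn = libvirt.open("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name():20s} {state}")
    finally:
        conn.close()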


Virtual Machine

Fig(2): VMware Virtual Machine

A virtual machine is a tightly isolated software container that can run its own operating
system and applications as if it were a physical computer. A virtual machine behaves exactly
like a physical computer and contains its own virtual (i.e., software-based) CPU, RAM, hard
disk, and network interface card (NIC).

An operating system can’t tell the difference between a virtual machine and a physical
machine, nor can applications or other computers on a network. Even the virtual machine
thinks it is a “real” computer. Nevertheless, a virtual machine is composed entirely of
software and contains no hardware components whatsoever. As a result, virtual machines
offer a number of distinct advantages over physical hardware.

Virtual Machine Benefits

Virtual machines possess four key characteristics that benefit the user:

• Compatibility: Virtual machines are compatible with all standard x86 computers
• Isolation: Virtual machines are isolated from each other as if physically separated
• Encapsulation: Virtual machines encapsulate a complete computing environment
• Hardware independence: Virtual machines run independently of underlying hardware


Virtual Infrastructure

A virtual infrastructure lets you share the physical resources of multiple machines across
your entire infrastructure, just as a virtual machine lets you share the resources of a single
physical computer across multiple virtual machines for maximum efficiency. Resources are
shared across many virtual machines and applications. This resource optimization drives
greater flexibility in the organization and results in lower capital and operational costs.

Fig(3): Virtual Infrastructure

A virtual infrastructure consists of the following components:

• Bare-metal hypervisors to enable full virtualization of each x86 computer.
• Virtual infrastructure services, such as resource management and consolidated backup, to
optimize available resources among virtual machines.

Virtual Infrastructure Benefits

A virtual infrastructure delivers built-in availability, security, and scalability to applications.
It supports a wide range of operating system and application environments, as well as
networking and storage infrastructure.


2. History of Virtualization

Virtualization is a proven concept that was first developed in the 1960s by IBM as a way to
logically partition large mainframe hardware into separate virtual machines. These partitions
allowed mainframes to "multitask": run multiple applications and processes at the same time.

Virtualization was effectively abandoned during the 1980s and 1990s when client-server
applications and inexpensive x86 servers and desktops established the model of distributed
computing. The growth in x86 server and desktop deployments has introduced new IT
infrastructure and operational challenges. These challenges include:

• Low Infrastructure Utilization - Typical x86 server deployments achieve an average
utilization of only 10% to 15% of total capacity. Organizations typically run one application
per server to avoid the risk of vulnerabilities in one application affecting the availability of
another application on the same server.

• Increasing Physical Infrastructure Costs - The operational costs to support growing
physical infrastructure have steadily increased. Most computing infrastructure must remain
operational at all times, resulting in power consumption, cooling and facilities costs that do
not vary with utilization levels.

• Increasing IT Management Costs - As computing environments become more complex, the
level of specialized education and experience required for infrastructure management
personnel and the associated costs of such personnel have increased. Organizations spend
disproportionate time and resources on manual tasks associated with server maintenance, and
thus require more personnel to complete these tasks.

• Insufficient Failover and Disaster Protection - Organizations are increasingly affected by
the downtime of critical server applications and inaccessibility of critical end user desktops.
The threat of security attacks, natural disasters, health pandemics and terrorism has elevated
the importance of business continuity planning for both desktops and servers.


• High-Maintenance End-User Desktops - Managing and securing enterprise desktops present
numerous challenges. Controlling a distributed desktop environment and enforcing
management, access and security policies without impairing users' ability to work effectively
is complex and expensive.

Present Day

Today, computers based on x86 architecture are faced with the same problems of
rigidity and underutilization that mainframes faced in the 1960s.

Today's powerful x86 computer hardware was originally designed to run only a single
operating system and a single application, but virtualization breaks that bond, making it
possible to run multiple operating systems and multiple applications on the same computer at
the same time, increasing the utilization and flexibility of hardware.

Why Virtualization: A List of Reasons

Following are some reasons for and benefits of virtualization:

 Virtual machines can be used to consolidate the workloads of several under-utilized
servers to fewer machines, perhaps a single machine (server consolidation). Related
benefits are savings on hardware, environmental costs, management, and
administration of the server infrastructure.
 The need to run legacy applications is served well by virtual machines. A legacy
application might simply not run on newer hardware and/or operating systems. Even
if it does, it may under-utilize the server.
 Virtual machines can be used to provide secure, isolated sandboxes for running
untrusted applications. You could even create such an execution environment
dynamically - on the fly - as you download something from the Internet and run it.
 Virtual machines can be used to create operating systems, or execution environments
with resource limits, and given the right schedulers, resource guarantees.
 Virtual machines can provide the illusion of hardware, or of a hardware configuration, that
you do not have (such as SCSI devices or multiple processors). Virtualization can also be
used to simulate networks of independent computers.


 Virtual machines can be used to run multiple operating systems simultaneously:
different versions, or even entirely different systems, which can be on hot standby.
Some such systems may be hard or impossible to run on newer real hardware.
 Virtual machines allow for powerful debugging and performance monitoring.
 Virtual machines can isolate what they run, so they provide fault and error
containment. You can inject faults proactively into software to study its subsequent
behavior.
 Virtual machines are great tools for research and academic experiments. Since they
provide isolation, they are safer to work with. They encapsulate the entire state of a
running system: you can save the state, examine it, modify it, reload it, and so on. The
state also provides an abstraction of the workload being run.
 Virtualization can enable existing operating systems to run on shared memory
multiprocessors.

• Driving out the cost of IT infrastructure through more efficient use of available resources.

• Simplifying the infrastructure.

• Increasing system availability.

• Delivering consistently good performance.

• Centralizing systems, data, and infrastructure.


3. Virtual machine & Hypervisor

VIRTUAL MACHINE

A virtual machine (VM) is a software implementation of a machine (computer) that executes
programs like a real machine.

Fig(4): Connectix Virtual PC version 3 in Mac OS 9, running Windows 95

A virtual machine was originally defined by Popek and Goldberg as "an efficient,
isolated duplicate of a real machine".

Virtual machines are separated into two major categories, based on their use and
degree of correspondence to any real machine. A system virtual machine provides a
complete system platform which supports the execution of a complete operating system (OS).
A process virtual machine is designed to run a single program, which means that it supports a
single process. An essential characteristic of a virtual machine is that the software running
inside is limited to the resources and abstractions provided by the virtual machine -- it cannot
break out of its virtual world.

System virtual machines

System virtual machines (sometimes called hardware virtual machines) allow the
sharing of the underlying physical machine resources between different virtual machines,
each running its own operating system. The software layer providing the virtualization is

NEHRU COLLEGE OF ENGINNERING AND RESEARCH CENTRE


“ Virtualization ”
29

called a virtual machine monitor or hypervisor. A hypervisor can run on bare hardware
(Type 1 or native VM) or on top of an operating system (Type 2 or hosted VM).

The main advantages of system VMs are:

• multiple OS environments can co-exist on the same computer, in strong isolation from each
other
• the virtual machine can provide an instruction set architecture (ISA) that is somewhat
different from that of the real machine

The guest OSes do not all have to be the same, making it possible to run different OSes on
the same computer (e.g., Microsoft Windows and Linux, or older versions of an OS in order
to support software that has not yet been ported to the latest version).

Process virtual machines

A process VM, sometimes called an application virtual machine, runs as a normal
application inside an OS and supports a single process. It is created when that process is
started and destroyed when it exits. Its purpose is to provide a platform-independent
programming environment that abstracts away details of the underlying hardware or
operating system, and allows a program to execute in the same way on any platform.

A process VM provides a high-level abstraction — that of a high-level programming
language (compared to the low-level ISA abstraction of the system VM). Process VMs are
implemented using an interpreter; performance comparable to compiled programming
languages is achieved by the use of just-in-time compilation.

This type of VM has become popular with the Java programming language, which is
implemented using the Java Virtual Machine (JVM), and with the .NET Framework,
which runs on a VM called the Common Language Runtime.
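
CPython, the reference Python interpreter, is a further everyday example of a process VM: Python source is first compiled to a platform-independent bytecode, which the interpreter then executes. The short snippet below is an added illustration (not part of the original text) that uses the standard dis module to display that bytecode:

    import dis

    def add(a, b):
        return a + b

    # Show the platform-independent bytecode that the CPython process VM
    # interprets at run time, regardless of the host CPU architecture.
    dis.dis(add)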


Techniques

Emulation of the underlying raw hardware (native execution)

Fig (5): VMware Workstation running Ubuntu, on Windows Vista

This approach is described as full virtualization of the hardware, and can be
implemented using a Type 1 or Type 2 hypervisor. Each virtual machine can run any
operating system supported by the underlying hardware. Users can thus run two or more
different "guest" operating systems simultaneously, in separate "private" virtual computers.

Full virtualization is particularly helpful in operating system development, when
experimental new code can be run at the same time as older, more stable, versions, each in a
separate virtual machine.

Emulation of a non-native system

Virtual machines can also perform the role of an emulator, allowing software
applications and operating systems written for another computer processor architecture to be
run.

Some virtual machines emulate hardware that only exists as a detailed specification. For
example:

• The specification of the Java virtual machine.
• The Common Language Infrastructure virtual machine at the heart of the Microsoft .NET
initiative.
• Open Firmware allows plug-in hardware to include boot-time diagnostics, configuration code,
and device drivers that will run on any kind of CPU.

This technique allows diverse computers to run any software written to that specification;
only the virtual machine software itself must be written separately for each type of computer
on which it runs.

Hypervisor

A hypervisor, also called a virtual machine monitor (VMM), is computer hardware
platform virtualization software that allows multiple operating systems to run on a host
computer concurrently.

Classifications

Hypervisors are classified into two types:

• Type 1 (or native, bare-metal) hypervisors are software systems that run directly on the host's
hardware as a hardware control and guest operating system monitor. A guest operating system
thus runs on another level above the hypervisor.

• Type 2 (or hosted) hypervisors are software applications running within a conventional
operating system environment. Considering the hypervisor layer as a distinct software
layer, guest operating systems thus run at the third level above the hardware.


4. Popek and Goldberg virtualization requirements

The Popek and Goldberg virtualization requirements are a set of sufficient conditions for a
computer architecture to efficiently support system virtualization. They were introduced by
Gerald J. Popek and Robert P. Goldberg in their 1974 article "Formal Requirements for
Virtualizable Third Generation Architectures". Even though the requirements are derived
under simplifying assumptions, they still represent a convenient way of determining whether a
computer architecture supports efficient virtualization and provide guidelines for the design
of virtualized computer architectures.

There are three properties of interest when analyzing the environment created by a VMM:

Equivalence: A program running under the VMM should exhibit a behavior essentially identical to
that demonstrated when running on an equivalent machine directly.

Resource control: The VMM must be in complete control of the virtualized resources.

Efficiency: A statistically dominant fraction of machine instructions must be executed without VMM
intervention.

In Popek and Goldberg's terminology, a VMM must exhibit all three properties. VMMs are
typically assumed to satisfy the equivalence and resource control properties, so, in a sense,
Popek and Goldberg's VMMs are today's efficient VMMs.

The main result of Popek and Goldberg's analysis can then be expressed as follows.

Theorem 1. For any conventional third-generation computer, a VMM may be constructed if
the set of sensitive instructions for that computer is a subset of the set of privileged
instructions.

Theorem 2. A conventional third-generation computer is recursively virtualizable if

1. it is virtualizable, and
2. a VMM without any timing dependencies can be constructed for it.
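
As a toy illustration of Theorem 1 (an added sketch, not taken from the paper), the condition can be written as a simple set-inclusion test; the instruction names below are placeholders for a hypothetical machine:

    # Hypothetical instruction sets, used only to illustrate the theorem.
    privileged = {"LPSW", "SSM", "LCTL", "SIO"}   # trap when executed in user mode
    sensitive = {"LPSW", "SSM", "LCTL"}           # read or change machine state

    def satisfies_theorem_1(sensitive, privileged):
        # Theorem 1: a trap-and-emulate VMM can be constructed if every
        # sensitive instruction is also a privileged instruction.
        return sensitive <= privileged

    print(satisfies_theorem_1(sensitive, privileged))  # True for this toy machine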


5. Classification of Virtualization

Here we discuss the different types of virtualization:

• Platform virtualization, which separates an operating system from the underlying
platform resources
o Full virtualization
o Hardware-assisted virtualization
o Partial virtualization
o Para virtualization
o Operating system-level virtualization
o Hosted environment

• Resource virtualization, the virtualization of specific system resources, such as
storage volumes, name spaces, and network resources
o Storage virtualization, the process of completely abstracting logical storage
from physical storage
 RAID - redundant array of independent disks
 Disk partitioning
o Network virtualization, creation of a virtualized network addressing space
within or across network subnets

• Computer clusters and grid computing, the combination of multiple discrete
computers into larger meta computers

• Application virtualization, the hosting of individual applications on alien
hardware/software
o Portable application
o Cross-platform virtualization
o Emulation or simulation

• Desktop virtualization, the remote manipulation of a computer desktop


6. Platform virtualization

Platform virtualization is the virtualization of computers or operating systems. It hides the
physical characteristics of a computing platform from users, instead presenting an abstract,
emulated computing platform.

Fig (6): VMware Workstation running Ubuntu, on Windows, an example of platform virtualization

Concept

The creation and management of virtual machines has been called platform
virtualization, or server virtualization.

Platform virtualization is performed on a given hardware platform by host software (a
control program), which creates a simulated computer environment, a virtual machine, for its
guest software. The guest software, which is often itself a complete operating system, runs
just as if it were installed on a stand-alone hardware platform. Typically, many such virtual
machines are simulated on a single physical machine, their number limited by the host’s
hardware resources. Typically there is no requirement for a guest OS to be the same as the
host one. The guest system often requires access to specific peripheral devices to function, so
the simulation must support the guest's interfaces to those devices. Trivial examples of such
devices are a hard disk drive or a network interface card.


There are several approaches to platform virtualization.

Full virtualization

In full virtualization, the virtual machine simulates enough hardware to allow an unmodified
"guest" OS (one designed for the same instruction set) to be run in isolation. This approach
was pioneered in 1966 with IBM CP-40 and CP-67, the predecessors of the VM family.

Hardware-assisted virtualization

In hardware-assisted virtualization, the hardware provides architectural support that
facilitates building a virtual machine monitor and allows guest OSes to be run in isolation. In
2005 and 2006, Intel and AMD provided additional hardware to support virtualization.
Examples include Linux KVM, VMware Workstation, VMware Fusion, Microsoft Virtual
PC, Xen, Parallels Desktop for Mac, VirtualBox, and Parallels Workstation.

Hardware virtualization technologies include:

• AMD-V x86 virtualization (previously known as Pacifica)
• IBM Advanced POWER virtualization
• Intel VT x86 virtualization (previously known as Vanderpool)
• UltraSPARC T1 and UltraSPARC T2 processors from Sun Microsystems have the Hyper-
Privileged execution mode
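
On a Linux host, support for these extensions is usually advertised through CPU feature flags. The short check below is an added, illustrative sketch (it assumes a Linux system that exposes /proc/cpuinfo) and looks for the vmx flag (Intel VT-x) or the svm flag (AMD-V):

    def hardware_virtualization_support(cpuinfo="/proc/cpuinfo"):
        """Return the detected extension name, or None if no flag is present."""
        flags = set()
        with open(cpuinfo) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        if "vmx" in flags:
            return "Intel VT-x"
        if "svm" in flags:
            return "AMD-V"
        return None

    print(hardware_virtualization_support() or "no hardware virtualization flags reported")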

Partial virtualization

In partial virtualization (also called "address space virtualization"), the virtual machine
simulates multiple instances of much (but not all) of an underlying hardware environment,
particularly address spaces. Such an environment supports resource sharing and process
isolation, but does not allow separate "guest" operating system instances.

Para virtualization

In paravirtualization, the virtual machine does not necessarily simulate hardware, but
instead (or in addition) offers a special API that can only be used by modifying the "guest"
OS. This system call to the hypervisor is called a "hypercall" in TRANGO and Xen.


Operating system-level virtualization

In operating system-level virtualization, a physical server is virtualized at the
operating system level, enabling multiple isolated and secure virtualized servers to run on a
single physical server. The "guest" OS environments share the same OS as the host system –
i.e. the same OS kernel is used to implement the "guest" environments. Applications running
in a given "guest" environment view it as a stand-alone system.

Hosted environment

In a hosted environment, applications are hosted on third-party servers and can be called or
used from a remote system’s environment.


7. Resource virtualization

Resource virtualization is the virtualization of specific system resources, such as storage
volumes, namespaces, and network resources.

Storage Virtualization

Storage virtualization is the pooling of multiple physical storage resources into what
appears to be a single storage resource that is centrally managed. Storage virtualization
automates tedious and extremely time-consuming storage administration tasks. This means
the storage administrator can perform the tasks of backup, archiving, and recovery more
easily and in less time, because the overall complexity of the storage infrastructure is
disguised. Storage virtualization is commonly used in file systems, storage area networks
(SANs), switches and virtual tape systems. Users can implement storage virtualization with
software, hybrid hardware or software appliances. Virtualization hides the physical
complexity of storage from storage administrators and applications, making it possible to
manage all storage as a single resource. In addition to easing the storage management burden,
this approach dramatically improves the efficiency and cuts overall costs.
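
The pooling idea can be sketched with a deliberately simplified model (an added toy example, not the behaviour of any particular storage product): two hypothetical physical disks are concatenated into one logical volume, and callers address logical blocks without knowing which disk actually holds them.

    class LogicalVolume:
        """Toy concatenation of physical disks into one logical address space."""

        def __init__(self, disks):
            # disks: list of (disk_name, size_in_blocks) tuples
            self.disks = disks

        def map_block(self, logical_block):
            """Translate a logical block number into (disk_name, physical_block)."""
            offset = logical_block
            for name, size in self.disks:
                if offset < size:
                    return name, offset
                offset -= size
            raise ValueError("logical block lies beyond the end of the volume")

    # Two hypothetical disks pooled into a single 1500-block volume.
    volume = LogicalVolume([("disk_a", 1000), ("disk_b", 500)])
    print(volume.map_block(1200))   # ('disk_b', 200): the caller never sees two disks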

The Advantages of Storage Virtualization


Storage virtualization provides many advantages.

First, it enables the pooling of multiple physical resources into a smaller number of
resources or even a single resource, which reduces complexity. Many environments have
become complex, which increases the storage management gap. With regard to resources,
pooling is an important way to achieve simplicity. A second advantage of using storage
virtualization is that it automates many time-consuming tasks. In other words, policy-driven
virtualization tools take people out of the loop of addressing each alert or interrupt in the
storage business. A third advantage of storage virtualization is that it can be used to disguise
the overall complexity of the infrastructure.


Network virtualization

Network virtualization is the process of combining hardware and software network
resources and network functionality into a single, software-based administrative entity, a
virtual network. Network virtualization involves platform virtualization, often combined with
resource virtualization.

Network virtualization is categorized as either external, combining many networks, or
parts of networks, into a virtual unit, or internal, providing network-like functionality to the
software containers on a single system. Whether virtualization is internal or external depends
on the implementation provided by vendors that support the technology.

Components of a virtual network

Various equipment and software vendors offer network virtualization by combining any
of the following:

• Network hardware, such as switches and network adapters, also known as network interface
cards (NICs)
• Networks, such as virtual LANs (VLANs) and containers such as virtual machines and
Solaris Containers
• Network storage devices
• Network media, such as Ethernet and Fibre Channel

External network virtualization

In external network virtualization, one or more local networks are combined or subdivided
into virtual networks, with the goal of improving the efficiency of a large corporate network
or data center. The key components of an external virtual network are the
VLAN and the network switch. Using VLAN and switch technology, the system
administrator can configure systems physically attached to the same local network into
different virtual networks. Conversely, VLAN technology enables the system administrator to
combine systems on separate local networks into a VLAN spanning the segments of a large
corporate network.
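
For example, on a Linux system an administrator might carve a VLAN out of an existing physical interface with the standard iproute2 tools. The sketch below is an added, hedged example (the interface name eth0 and VLAN ID 100 are placeholders, and root privileges are assumed); it simply wraps the usual ip commands:

    import subprocess

    def create_vlan(parent="eth0", vlan_id=100):
        """Create and bring up a VLAN sub-interface such as eth0.100."""
        name = f"{parent}.{vlan_id}"
        # Equivalent to: ip link add link eth0 name eth0.100 type vlan id 100
        subprocess.run(["ip", "link", "add", "link", parent, "name", name,
                        "type", "vlan", "id", str(vlan_id)], check=True)
        subprocess.run(["ip", "link", "set", name, "up"], check=True)
        return name

    # create_vlan("eth0", 100)  # hosts on eth0.100 now form their own virtual LAN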


Internal network virtualization

In internal network virtualization, a single system is configured with containers,
such as the Xen domain, combined with hypervisor control programs or pseudo-interfaces
such as the VNIC, to create a “network in a box.” This solution improves overall efficiency of
a single system by isolating applications to separate containers and/or pseudo interfaces.

Combined internal and external network virtualization

Some VMMs offer both internal and external network virtualization. The basic approach is a
"network in a box" on a single system, using virtual machines that are managed by hypervisor
software. Infrastructure software then connects and combines networks in multiple boxes into
an external virtualization scenario.


8. Cluster and Grid computing


Cluster computing

Fig (7): An example of a computer cluster

A computer cluster is a group of linked computers, working together closely so that
in many respects they form a single computer. The components of a cluster are commonly,
but not always, connected to each other through fast local area networks. Clusters are usually
deployed to improve performance and/or availability over that provided by a single computer,
while typically being much more cost-effective than single computers of comparable speed or
availability.

Cluster categorizations

High-availability (HA) clusters

High-availability clusters (also known as failover clusters) are implemented primarily for the
purpose of improving the availability of services which the cluster provides. They operate by
having redundant nodes, which are then used to provide service when system components fail.

Load-balancing clusters

Load-balancing clusters operate by distributing a workload evenly over multiple back
end nodes. Typically the cluster will be configured with multiple redundant load-balancing
front ends.


Compute clusters

These clusters are used primarily for computational purposes, rather than for handling IO-
oriented operations such as web services or databases. For instance, a cluster might support
computational simulations of weather or vehicle crashes.

Grid computing

Grids are usually compute clusters, but are more focused on throughput, like a computing
utility, rather than on running fewer, tightly coupled jobs. Grids often incorporate
heterogeneous collections of computers, possibly geographically distributed, sometimes
administered by unrelated organizations.

Grid computing is optimized for workloads which consist of many independent jobs
or packets of work, which do not have to share data between the jobs during the computation
process. Grids serve to manage the allocation of jobs to computers which will perform the
work independently of the rest of the grid cluster. Resources such as storage may be shared
by all the nodes, but intermediate results of one job do not affect other jobs in progress on
other nodes of the grid.


9. Application virtualization

Application virtualization is a term that describes software technologies that improve
portability, manageability and compatibility of applications by encapsulating them from the
underlying operating system on which they are executed. A fully virtualized application is not
installed in the traditional sense, although it is still executed as if it is. The application is
fooled at runtime into believing that it is directly interfacing with the original operating
system and all the resources managed by it, when in reality it is not. Application
virtualization differs from operating system virtualization in that in the latter case, the whole
operating system is virtualized rather than only specific applications.

Description

Limited application virtualization is used in modern operating systems such as
Microsoft Windows and Linux. For example, IniFileMappings were introduced with
Windows NT to virtualize (into the Registry) the legacy INI files of applications originally
written for Windows 3.1.

Full application virtualization requires a virtualization layer. This layer must be
installed on a machine to intercept all file and Registry operations of virtualized applications
and transparently redirect these operations into a virtualised location. The application
performing the file operations never knows that it's not accessing the physical resource it
believes it is. In this way, applications with many dependent files and settings can be made
portable by redirecting all their input/output to a single physical file, and traditionally
incompatible applications can be executed side-by-side.
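
A heavily simplified sketch of this redirection idea follows (an added toy example, not the API of any real application-virtualization product): file paths requested by the "virtualized" application are transparently mapped into a private sandbox directory, and reads fall back to the real file when the application has not yet written its own copy.

    import os

    SANDBOX = "/tmp/appv_sandbox"   # hypothetical per-application redirection target

    def redirect(path):
        """Map an absolute path such as /etc/app.ini into the sandbox."""
        return os.path.join(SANDBOX, os.path.abspath(path).lstrip(os.sep))

    def virtual_open(path, mode="r"):
        target = redirect(path)
        if "w" in mode or "a" in mode:
            # Writes always land in the sandbox, never in the real location.
            os.makedirs(os.path.dirname(target), exist_ok=True)
            return open(target, mode)
        # Reads prefer the application's own sandboxed copy if one exists.
        return open(target if os.path.exists(target) else path, mode)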

Benefits of application virtualization

• Allows applications to run in environments that do not suit the native application (e.g. Wine
allows Microsoft Windows applications to run on Linux).
• Uses fewer resources than a separate virtual machine.
• Run incompatible applications side-by-side, at the same time and with minimal regression
testing against one another.
• Implement the security principle of least privilege by removing the requirement for end-users
to have Administrator privileges in order to run poorly written applications.

• Simplified operating system migrations.
• Accelerated application deployment, through on-demand application streaming.
• Improved security, by isolating applications from the operating system.
• Fast application provisioning to the desktop based upon user's roaming profile.

Disadvantages of application virtualization

• Applications have to be "packaged" or "sequenced" before they will run in a virtualized way.
• Minimal increased resource requirements (memory and disk storage).
• Not all software can be virtualized. Some examples include applications that require a device
driver and 16-bit applications that need to run in shared memory space.
• Some types of software such as anti-virus packages are difficult to virtualize.
• Some compatibility issues between legacy applications and newer operating systems cannot
be addressed by application virtualization (although they can still be run on an older operating
system under a virtual machine).


10. Cross-platform virtualization

Cross-platform virtualization is a form of computer virtualization that allows software
compiled for a specific CPU and operating system to run unmodified on computers with
different CPUs and/or operating systems, through a combination of dynamic binary
translation and operating system call mapping.

Since the software runs on a virtualized equivalent of the original computer, it does not
require recompilation or porting, thus saving time and development resources. However, the
processing overhead of binary translation and call mapping imposes a performance penalty,
when compared to natively-compiled software. For this reason, cross-platform virtualization
may be used as a temporary solution until resources are available to port the software.

By creating an abstraction layer capable of running software compiled for a different
computer system, cross-platform virtualization is characterized by the Popek and Goldberg
virtualization requirements outlined earlier. Cross-platform virtualization is distinct from emulation
and binary translation - which involve the direct translation of one CPU instruction set to
another - since the inclusion of operating system call mapping provides a more complete
virtualized environment. Cross-platform virtualization is also complementary to server
virtualization and desktop virtualization solutions, since these are typically constrained to a
single CPU type, such as x86 or POWER.

Emulation

Emulation refers to the imitation of the behavior of a computer or other electronic system
with the help of another type of computer or system. A console emulator is a program that
allows a computer or modern console to emulate another video game console. Hardware
emulation is the use of special-purpose hardware to emulate the behavior of a yet-to-be-built
system, with greater speed than pure software emulation.


Simulation

Simulation is the imitation of some real thing, state of affairs, or process. The act of
simulating something generally entails representing certain key characteristics or behaviors of
a selected physical or abstract system.

A computer simulation (or "sim") is an attempt to model a real-life or hypothetical
situation on a computer so that it can be studied to see how the system works. By changing
variables, predictions may be made about the behaviour of the system.

Computer simulation has become a useful part of modeling many natural systems in
physics, chemistry and biology, and human systems in economics as well as in engineering to
gain insight into the operation of those systems.

Simulation in Computer science

In Computer science, simulation has some specialized meanings: Alan Turing used
the term "simulation" to refer to what happens when a universal machine executes a state
transition table (in modern terminology, a computer runs a program) that describes the state
transitions, inputs and outputs of a subject discrete-state machine.

Less theoretically, an interesting application of computer simulation is to simulate computers
using computers. In computer architecture, a type of simulator, typically called an emulator, is
often used to execute a program that has to run on some inconvenient type of computer, or in
a tightly controlled testing environment. Since the operation of the computer is simulated, all
of the information about the computer's operation is directly available to the programmer, and
the speed and execution of the simulation can be varied at will.


11. Desktop virtualization

Desktop virtualization or virtual desktop infrastructure (VDI) is a server-centric
computing model that borrows from the traditional thin-client model but is designed to give
system administrators and end-users the ability to host and centrally manage desktop virtual
machines in the data center while giving end users a full PC desktop experience.

Rationale

Installing and maintaining separate PC workstations is complex, and traditionally users have
almost unlimited ability to install or remove software. Desktop virtualization provides many
of the advantages of a terminal server, but (if so desired and configured by system
administrators) can provide users much more flexibility. Each user, for instance, might be
allowed to install and configure their own applications. Users also gain the ability to access
their server-based virtual desktop from other locations.

Advantages

• Instant provisioning of new desktops
• Near-zero downtime in the event of hardware failures
• Significant reduction in the cost of new application deployment
• Robust desktop image management capabilities
• Normal 2-3 year PC refresh cycle extended to 5–6 years or more
• Existing desktop-like performance including multiple monitors, bi-directional audio/video,
streaming video, USB support etc.
• Ability to access the users' enterprise desktop environment from any PC (including the
employee's home PC)
• Desktop computing power on demand
• Multiple desktops on demand
• Self provisioning of desktops (controlled by policies)


12. Virtualization Software

VMware Workstation

Fig (8): VMware Workstation 6.5 running Ubuntu; the Snapshot Manager in VMware Workstation 6

VMware Workstation is a virtual machine software suite for x86 and x86-64
computers from VMware, a division of EMC Corporation. This software suite allows users to
set up multiple x86 and x86-64 virtual computers and to use one or more of these virtual
machines simultaneously with the hosting operating system. Each virtual machine instance
can execute its own guest operating system, such as Windows, Linux, BSD variants, or
others. In simple terms, VMware Workstation allows one physical machine to run multiple
operating systems simultaneously.


Microsoft Virtual Server

Microsoft Virtual Server is a virtualization solution that facilitates the creation of
virtual machines on the Windows XP, Windows Vista and Windows Server 2003 operating
systems. Originally developed by Connectix, it was acquired by Microsoft prior to release.
Virtual PC is Microsoft's related desktop virtualization software package.

Virtual machines are created and managed through an IIS web-based interface or
through a Windows client application tool called VMRCplus.

The current version is Microsoft Virtual Server 2005 R2 SP1. New features in R2 SP1
include Linux guest operating system support, Virtual Disk Precompactor, SMP (but not for
the Guest OS), x86-64 (x64) Host OS support (but not Guest OS support), the ability to
mount virtual hard drives on the host OS and additional operating systems including
Windows Vista. It also provides a Volume Shadow Copy writer which enables live backups of
the Guest OS on a Windows Server 2003 or Windows Server 2008 Host. A utility to mount
VHD images is also included since SP1. Officially supported Linux guest operating systems
include Red Hat Enterprise Linux versions 2.1-5.0, Red Hat Linux 9.0, SUSE Linux and
SUSE Linux Enterprise Server versions 9 and 10.

Microsoft Virtual PC

Microsoft Virtual PC is a virtualization suite for Microsoft Windows operating
systems, and an emulation suite for Mac OS X on PowerPC-based systems. The software was
originally written by Connectix, and was subsequently acquired by Microsoft. In July 2006
Microsoft released the Windows-hosted version as a free product. In August 2006 Microsoft
announced the Macintosh-hosted version would not be ported to Intel-based Macintoshes,
effectively discontinuing the product as PowerPC-based Macintoshes are no longer
manufactured.

Virtual PC virtualizes a standard PC and its associated hardware. Supported Windows
operating systems can run inside Virtual PC. However, other operating systems like Linux
may run, but are not officially supported (for example, Ubuntu, a popular Linux distribution,
can get past the boot screen of the Live CD (and function fully) when using Safe Graphics
Mode).

VirtualBox

VirtualBox is an x86 virtualization software package, originally created by German
software company innotek, now developed by Sun Microsystems as part of its Sun xVM
virtualization platform. It is installed on an existing host operating system; within this
application, additional operating systems, each known as a Guest OS, can be loaded and run,
each with its own virtual environment.

Supported host operating systems include Linux, Mac OS X, OS/2 Warp, Windows
XP or Vista, and Solaris, while supported guest operating systems include FreeBSD, Linux,
OpenBSD, OS/2 Warp, Windows and Solaris. According to a 2007 survey, VirtualBox is the
third most popular software package for running Windows programs on Linux desktops.

Xen

Xen is a virtual machine monitor for IA-32 (x86), x86-64, IA-64, and PowerPC 970
architectures. It allows several guest operating systems to be executed on the same computer
hardware concurrently. Xen was initially created by the University of Cambridge Computer
Laboratory and is now developed and maintained by the Xen community as free software,
licensed under the GNU General Public License (GPL2).

A Xen system is structured with the Xen hypervisor as the lowest and most privileged
layer. Above this layer are one or more guest operating systems, which the hypervisor
schedules across the physical CPUs. The first guest operating system, called in Xen
terminology "domain 0" (dom0), is booted automatically when the hypervisor boots and
given special management privileges and direct access to the physical hardware. The system
administrator logs into dom0 in order to start any further guest operating systems, called
"domain U" (domU) in Xen terminology.


13. Conclusion

Virtualization dramatically improves the efficiency and availability of resources and
applications. Previously, internal resources were underutilized under the old "one server, one
application" model, and users spent too much time managing servers rather than innovating.
With a virtualization platform, users can respond faster and more efficiently than ever before.
Users can save 50-70% on overall IT costs by consolidating their resource pools and
delivering highly available machines.

Other major improvements from using virtualization are that users can:

• Reduce capital costs by requiring less hardware and lowering operational costs, while
increasing the server-to-admin ratio
• Ensure enterprise applications perform with the highest availability and performance
• Build up business continuity through improved disaster recovery solutions and
deliver high availability throughout the datacenter
• Improve desktop management with faster deployment of desktops and fewer support
calls due to application conflicts.

Even after the implementation of distributed computing and other technologies, virtualization
has proved to be an effective way of fully and efficiently using the available resources of a
system.


14. References

Websites:

1) http://www.wikipedia.com
2) http://www.vmware.com
3) http://www.kernelthread.com
4) http://www.virtualizationadmin.com
5) http://www.virtualization.org
6) http://www.microsoft.com/virtualization.aspx

Books:

1) “Virtualization: From Beginners to Professionals”, Apress Publications
