
Cloud Computing

Assignment-1

SRICHARAN REDDY RATHNA


100511733047
BE 4/4

XEN HYPERVISOR
Xen is a Virtual Machine Monitor (VMM), also known as a hypervisor: a software
system that allows multiple guest operating systems to execute simultaneously on a
single physical machine. In particular, Xen is a Type 1 or bare-metal hypervisor, meaning
that it runs directly on top of the physical machine as opposed to within an operating system.
Guest virtual machines running on a Xen Project Hypervisor are known as domains, and a
special domain known as dom0 is responsible for controlling the hypervisor and starting the other
guest operating systems. These other guest operating systems are called domUs because
they are unprivileged: they cannot control the hypervisor or start and stop other domains.
TYPES OF VIRTUALIZATION
The Xen hypervisor supports two primary types of virtualization: para-virtualization and hardware
virtual machine (HVM), also known as full virtualization. Para-virtualization uses modified
guest operating systems referred to as enlightened guests. These operating systems are
aware that they are being virtualized and as such don't require virtual hardware devices;
instead, they make special calls to the hypervisor that allow them to access CPUs, storage, and
network resources.
In contrast, HVM guests need not be modified, as the hypervisor creates a fully virtual set of
hardware devices for the machine that resembles a physical x86 computer. This emulation
incurs much more overhead than the para-virtualization approach, but it allows unmodified guest
operating systems such as Microsoft Windows to run on top of the hypervisor.
VIRTUAL MACHINE MIGRATION
Administrators can "live migrate" Xen virtual machines between physical hosts across a LAN
without loss of availability. During this procedure, the LAN iteratively copies the memory of the
virtual machine to the destination without stopping its execution.
XEN ARCHITECTURE
The following figure shows the basic architecture of the Xen hypervisor. The hypervisor
sits on the bare metal (the actual computer hardware), and the guest VMs all sit on the
hypervisor layer, as does the "Control Domain" (also called "Dom0"). The Control Domain is a
VM like the guest VMs, except that it has two basic functional differences:
1. The Control Domain has the ability to talk to the hypervisor to instruct it to start and stop
guest VMs.

2. The Control Domain by default contains the device drivers needed to address the hardware.
This avoids a problem that often plagued Linux users in the 1990s: you install your software on
a new piece of hardware, only to find that you lack the drivers to use it.

Dom0 forms the interface to the hypervisor: through special instructions, dom0
communicates with the Xen software and changes the configuration of the hypervisor. This
includes instantiating new domains and related tasks.
Another crucial part of dom0's role is that it is the primary interface to the hardware. The
hypervisor doesn't contain device drivers; instead, the devices are attached to dom0, where
standard Linux drivers can be used. Dom0 then shares these resources with guest operating
systems through a number of backend daemons.
Each para-virtualized subsystem in the hypervisor consists of two parts: 1) the aforementioned
backend, which lives in dom0, and 2) a frontend driver within the guest domain. The backend
is effectively a daemon that uses special ring-buffer-based interfaces to transfer data to guests,
for example to provide a virtual hard disk. The frontend driver then takes this stream of data and
converts it back into a device within the guest operating system.
The two important para-virtualized subsystems are net-back/net-front and blk-back/blk-front, which are the para-virtualized networking and storage systems, respectively.
THE VIRTUAL MACHINE INTERFACE

The para-virtualized x86 interface is factored into three broad aspects of the system: memory
management, the CPU, and device I/O.
Memory Management
Segmentation: The guest OS cannot install fully-privileged segment descriptors, and segments
cannot overlap with the top end of the linear address space.
Paging: The guest OS has direct read access to hardware page tables, but updates are batched
and validated by the hypervisor. A domain may be allocated discontiguous machine pages.
CPU
Protection: The guest OS must run at a lower privilege level than Xen.
Exceptions: The guest OS must register a descriptor table for exception handlers with Xen. Aside
from page faults, the handlers remain the same.
System Calls: The guest OS may install a `fast' handler for system calls, allowing direct calls from
an application into its guest OS and avoiding an indirection through Xen on every call.
Interrupts: Hardware interrupts are replaced with a lightweight event system.
Time: Each guest OS has a timer interface and is aware of both `real' and `virtual' time.
Device I/O
Network, Disk, etc.: Virtual devices are elegant and simple to access. Data is transferred using
asynchronous I/O rings. An event mechanism replaces hardware interrupts for notifications.
CONTROL TRANSFER: HYPERCALLS AND EVENTS
Two mechanisms exist for control interactions between Xen and an overlying
domain: synchronous calls from a domain to Xen may be made using a hypercall, while
notifications are delivered to domains from Xen using an asynchronous event mechanism. The
hypercall interface allows domains to perform a synchronous software trap into the hypervisor to
perform a privileged operation, analogous to the use of system calls in conventional operating
systems.
An example use of a hypercall is to request a set of page table updates, in which Xen validates
and applies a list of updates, returning control to the calling domain when this is completed.
Communication from Xen to a domain is provided through an asynchronous event mechanism,
which replaces the usual delivery mechanisms for device interrupts and allows lightweight
notification of important events such as domain-termination requests. Akin to traditional Unix
signals, there are only a small number of events, each acting to flag a particular type of
occurrence. For instance, events are used to indicate that new data has been received over the
network, or that a virtual disk request has completed. Pending events are stored in a per-domain
bitmask which is updated by Xen before invoking an event-callback handler specified by the
guest OS. The callback handler is responsible for resetting the set of pending events, and
responding to the notifications in an appropriate manner. A domain may explicitly defer event
handling by setting a Xen-readable software flag: this is analogous to disabling interrupts on a
real processor.
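To make these two mechanisms concrete, below is a minimal, self-contained C sketch of both: a guest batching page-table updates into a single hypercall, and an event callback draining a per-domain pending-events bitmask. The names and the stubbed hypercall (mmu_update_t, hypervisor_mmu_update, pending_events) are illustrative assumptions modeled on the description above, not the exact Xen ABI.

/* A minimal sketch of the two control-transfer mechanisms described above.
 * The names here (mmu_update_t, hypervisor_mmu_update, pending_events) are
 * illustrative assumptions, not the exact Xen ABI. */
#include <stdint.h>
#include <stdio.h>

/* One page-table update request: which entry to change and its new value. */
typedef struct {
    uint64_t ptr; /* machine address of the page-table entry */
    uint64_t val; /* new contents for that entry */
} mmu_update_t;

/* Stand-in for the synchronous software trap into the hypervisor; a real
 * guest would execute a trap instruction here. Xen validates and applies
 * the whole list before returning control to the calling domain. */
static int hypervisor_mmu_update(const mmu_update_t *req, int count) {
    (void)req;
    printf("hypercall: validate and apply %d page-table updates\n", count);
    return 0;
}

/* Guest side: batch several updates, then make ONE hypercall, amortizing
 * the cost of trapping into the hypervisor. */
static void remap_pages(void) {
    mmu_update_t batch[4];
    for (int i = 0; i < 4; i++) {
        batch[i].ptr = 0x100000u + 8u * (uint64_t)i; /* illustrative addresses */
        batch[i].val = 0x200000u + (uint64_t)i;
    }
    hypervisor_mmu_update(batch, 4);
}

/* Event side: Xen sets bits in a per-domain bitmask shared with the guest
 * and then invokes the guest's registered callback, which resets and
 * handles each pending event. */
static volatile uint64_t pending_events; /* updated by Xen */

static void event_callback(void) {
    while (pending_events) {
        int ev = __builtin_ctzll(pending_events); /* lowest pending event */
        pending_events &= pending_events - 1;     /* reset that bit */
        printf("handling event %d (e.g. network rx, disk request done)\n", ev);
    }
}

int main(void) {
    remap_pages();
    pending_events = (1u << 3) | (1u << 0); /* as if set by Xen */
    event_callback();
    return 0;
}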

Features
The following are key concepts of the Xen architecture:

Support for full virtualization (HVM) as well as para-virtualization.
Xen can run multiple guest OSes, each in its own VM.
Instead of living in a kernel driver, much of the control functionality resides in the Xen daemon, xend.
Advantages

The Xen server is built on the open source Xen hypervisor and uses a combination of
paravirtualization and hardware-assisted virtualization. This collaboration between the OS and
the virtualization platform enables the development of a simpler hypervisor that delivers highly
optimized performance.
Xen provides sophisticated workload balancing that captures CPU, memory, disk I/O, and
network I/O data; it offers two optimization modes: one for performance and another for density.
The Xen server takes advantage of a unique storage integration feature called Citrix
StorageLink. With it, the sysadmin can directly leverage features of arrays from such companies
as HP, Dell EqualLogic, NetApp, EMC, and others.
The Xen server includes multicore processor support, live migration, physical-server-to-virtual-machine
(P2V) and virtual-to-virtual (V2V) conversion tools, centralized multiserver management,
real-time performance monitoring, and speedy performance for Windows and Linux.
DATA TRANSFER: I/O RINGS
The presence of a hypervisor means there is an additional protection domain between guest OSes
and I/O devices, so it is crucial that a data transfer mechanism be provided that allows data to
move vertically through the system with as little overhead as possible.
Two main factors have shaped the design of our I/O-transfer mechanism: resource management
and event notification. For resource accountability, we attempt to minimize the work required to
demultiplex data to a specific domain when an interrupt is received from a device; the overhead
of managing buffers is incurred later, where the computation may be accounted to the appropriate
domain.
Similarly, memory committed to device I/O is provided by the relevant domains wherever
possible to prevent the crosstalk inherent in shared buffer pools; I/O buffers are protected during
data transfer by pinning the underlying page frames within Xen.
[Figure: the structure of the asynchronous I/O rings used for data transfer between Xen and guest OSes. Legend:
Request Consumer: private pointer in Xen
Request Producer: shared pointer, updated by the guest OS
Response Consumer: private pointer in the guest OS
Response Producer: shared pointer, updated by Xen
Request queue: descriptors queued by the VM but not yet accepted by Xen
Outstanding descriptors: descriptor slots awaiting a response from Xen
Response queue: descriptors returned by Xen in response to serviced requests]

The figure shows the structure of our I/O descriptor rings. A ring is a circular queue of descriptors
allocated by a domain but accessible from within Xen. Descriptors do not directly contain I/O
data; instead, I/O data buffers are allocated out-of-band by the guest OS and indirectly referenced
by I/O descriptors. Access to each ring is based around two pairs of producer-consumer pointers:
domains place requests on a ring, advancing a request producer pointer, and
Xen removes these requests for handling, advancing an associated request consumer pointer.
Responses are placed back on the ring similarly, save with Xen as the producer and the guest OS
as the consumer. There is no requirement that requests be processed in order: the guest OS
associates a unique identifier with each request which is reproduced in the associated response.
This allows Xen to unambiguously reorder I/O operations due to scheduling or priority
considerations. This structure is sufficiently generic to support a number of different
device paradigms. For example, a set of `requests' can provide buffers for network packet
reception; subsequent `responses' then signal the arrival of packets into these buffers. Reordering
is useful when dealing with disk requests as it allows them to be scheduled within Xen for
efficiency, and the use of descriptors with out-of-band buffers makes implementing zero-copy
transfer easy.
We decouple the production of requests or responses from the notification of the other party: in
the case of requests, a domain may enqueue multiple entries before invoking a hypercall to alert
Xen; in the case of responses, a domain can defer delivery of a notification event by specifying a
threshold number of responses.
This allows each domain to trade-off latency and throughput requirements.
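As a rough illustration of this scheme, here is a self-contained C sketch of such a descriptor ring. The field names, the power-of-two ring size, and the fullness test are assumptions chosen to mirror the description above; they are not Xen's actual shared-ring definitions.

/* A minimal sketch of an asynchronous I/O descriptor ring. Field names,
 * the power-of-two ring size, and the fullness test are assumptions that
 * mirror the description above, not Xen's actual shared-ring layout. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8 /* power of two, so index % RING_SIZE wraps cleanly */

typedef struct {
    uint32_t id;  /* unique id, echoed in the response so that requests
                     may be serviced out of order */
    uint64_t buf; /* out-of-band buffer address: descriptors never carry
                     the I/O data itself */
} ring_desc_t;

typedef struct {
    ring_desc_t ring[RING_SIZE];
    uint32_t req_prod; /* shared pointer, advanced by the guest OS */
    uint32_t rsp_prod; /* shared pointer, advanced by Xen */
    uint32_t req_cons; /* private pointer, kept by Xen */
    uint32_t rsp_cons; /* private pointer, kept by the guest OS */
} io_ring_t;

/* Guest: enqueue a request. Notifying Xen (via hypercall) is a separate
 * step, so several requests can be queued before a single notification. */
static int ring_put_request(io_ring_t *r, uint32_t id, uint64_t buf) {
    if (r->req_prod - r->rsp_cons >= RING_SIZE)
        return -1; /* ring full */
    r->ring[r->req_prod % RING_SIZE] = (ring_desc_t){ .id = id, .buf = buf };
    r->req_prod++;
    return 0;
}

/* Xen: consume one request and produce its response. The response slot is
 * always one already freed by a consumed request, so the two directions
 * share the same circular buffer safely. */
static void ring_service_one(io_ring_t *r) {
    if (r->req_cons == r->req_prod)
        return; /* no outstanding descriptors */
    ring_desc_t d = r->ring[r->req_cons % RING_SIZE];
    r->req_cons++;
    /* ... perform the I/O referenced by d.buf ... */
    r->ring[r->rsp_prod % RING_SIZE] = d; /* response echoes the request id */
    r->rsp_prod++;
}

int main(void) {
    io_ring_t r = {0};
    ring_put_request(&r, 42, 0x10000);
    ring_put_request(&r, 43, 0x11000); /* batch two requests, notify once */
    ring_service_one(&r);
    ring_service_one(&r);
    while (r.rsp_cons != r.rsp_prod) { /* guest drains responses */
        printf("response for request %u\n", r.ring[r.rsp_cons % RING_SIZE].id);
        r.rsp_cons++;
    }
    return 0;
}

Note how the response echoes the request's identifier, which is what lets Xen service requests out of order, and how notification of the other party is a separate step that either side can defer to trade latency for throughput.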

XEN SPLIT DRIVER MODEL

The Split Driver model is one technique for creating efficient virtual hardware. One device driver
runs inside the guest Virtual Machine (aka domU) and communicates with another corresponding
device driver inside the control domain Virtual Machine (aka dom0). This pair of co-designed
device drivers functions together, and so can be considered a single "split" driver.
Examples of split device drivers are Xen's traditional block and network device drivers when
running paravirtualized guests.
The situation is blurrier when running HVM guests. When you first install a guest operating
system within an HVM guest, it uses the OS's native device drivers that were designed for use
with real physical hardware, and Xen and dom0 emulate those devices for the new guest.
However, when you then install paravirtual drivers within the guest (these are the "tools" that
you install in the guest on XenServer or XenClient, and likely also on VMware and similar
platforms), the configuration changes again: you then have an HVM guest, running a
non-paravirtualized OS, but with paravirtual split device drivers.
In short, a guest running in fully virtualized mode may or may not be using split device drivers;
it depends on whether or not they are actually installed and used by the guest OS. Recent Linux
kernels already include paravirtual drivers that can be active within an HVM domain.
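As a toy illustration of the pairing, the following single-process C sketch models a block frontend/backend couple over a shared structure. Every name here (split_channel_t, blkfront_read, blkback_service) is hypothetical; in real Xen the two halves live in different domains and communicate through shared ring pages and event channels, as described earlier.

/* A toy, single-process model of a split block driver. All names here
 * (split_channel_t, blkfront_read, blkback_service) are hypothetical. */
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the shared memory the two halves would map in real Xen. */
typedef struct {
    uint64_t sector;  /* request: which sector the guest wants */
    char data[32];    /* response: filled in by the backend */
    int req_pending;  /* "event" from frontend to backend */
    int rsp_pending;  /* "event" from backend to frontend */
} split_channel_t;

/* Frontend (in the domU guest): looks like an ordinary block driver to the
 * guest OS, but only posts the request and signals the backend. */
static void blkfront_read(split_channel_t *ch, uint64_t sector) {
    ch->sector = sector;
    ch->req_pending = 1; /* in real Xen: notify via an event channel */
}

/* Backend (in dom0): services the request using dom0's real device
 * drivers, then signals completion back to the guest. */
static void blkback_service(split_channel_t *ch) {
    if (!ch->req_pending)
        return;
    ch->req_pending = 0;
    snprintf(ch->data, sizeof ch->data, "contents of sector %llu",
             (unsigned long long)ch->sector); /* stand-in for real disk I/O */
    ch->rsp_pending = 1;
}

int main(void) {
    split_channel_t ch = {0};
    blkfront_read(&ch, 7); /* guest issues a read */
    blkback_service(&ch);  /* dom0 services it */
    if (ch.rsp_pending)
        printf("frontend received: %s\n", ch.data);
    return 0;
}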

VMWARE VSPHERE
VMware vSphere leverages the power of virtualization to transform datacenters into simplified
cloud computing infrastructures and enables IT organizations to deliver flexible and reliable IT
services. VMware vSphere virtualizes and aggregates the underlying physical hardware
resources across multiple systems and provides pools of virtual resources to the datacenter.
As a cloud operating system, VMware vSphere manages large collections of infrastructure (such
as CPUs, storage, and networking) as a seamless and dynamic operating environment, and also
manages the complexity of a datacenter. The following component layers make up VMware
vSphere:
Infrastructure Services
These are the set of services provided to abstract, aggregate, and allocate hardware or
infrastructure resources. Infrastructure Services can be categorized into:
VMware vCompute: the VMware capabilities that abstract away from underlying disparate
server resources. vCompute services aggregate these resources across many discrete servers
and assign them to applications.
VMware vStorage: the set of technologies that enables the most efficient use and management
of storage in virtual environments.
VMware vNetwork: the set of technologies that simplify and enhance networking in virtual
environments.
Application Services
Application Services are the set of services provided to ensure availability, security, and
scalability for applications. Examples include HA and Fault Tolerance.
VMware vCenter Server
VMware vCenter Server provides a single point of control of the datacenter. It provides
essential datacenter services such as access control, performance monitoring, and
configuration.
Clients
Users can access the VMware vSphere datacenter through clients such as the
vSphere Client or Web Access through a Web browser.
Support for virtualization in VMware:
To support virtualization on x86 processors, VMware pioneered a technique called
binary translation, in which the hypervisor traps privileged instructions issued by the guest
and rewrites them so that they behave correctly in a virtual environment. This allows running
an unmodified guest OS within a virtual machine.

Features of VMware:
ESX(i) is a component of VMware vSphere that offers many advanced features for
management and administration, such as:
vMotion (live migration)
High Availability
Fault Tolerance
Storage Motion

vMotion
Allows you to migrate running virtual machines from one vSphere host to another with no impact
on end users. This way, you can perform server maintenance and upgrades without having to
schedule downtime or disrupt business operations. With vSphere vMotion, you can dynamically
reconfigure virtual machine placement across vSphere servers to align with changing business
policies or proactively move virtual machines away from failing or underperforming hardware.

Storage vMotion
Allows you to move virtual machine disk files from one datastore to another without disrupting
service to the end user. You can take control of storage operations by simplifying storage array
migrations, maintenance and upgrades and by dynamically optimizing storage I/O performance.
High Availability
High Availability (HA) provides cost-effective business continuity for applications running in
virtual machines. If a physical server goes down, affected virtual machines are automatically
restarted on other servers with spare capacity. Should an operating system fail, vSphere HA
restarts the affected virtual machine on the same physical server.
Data Protection
VMware vSphere Data Protection (VDP) is vSphere's backup and recovery solution, providing
efficient backups to disk and dependable recovery. VDP satisfies business requirements by
eliminating the high cost and complexity of traditional approaches.
Fault Tolerance
VMware vSphere Fault Tolerance (FT) delivers continuous availability for applications,
protecting against hardware failures by creating a live, shadow instance of a virtual machine that
is in virtual lockstep with the primary instance. With immediate failover between two instances,
FT eliminates even the smallest chance of data loss or disruption.
Replication
vSphere Replication lets you replicate powered-on virtual machines over the network from one
vSphere host to another without storage array-based native replication. Unique advantages
include reduced bandwidth requirements, no storage lock-in and the ability to create flexible
disaster recovery configurations.

HYPER-V
The Hyper-V role enables you to create and manage a virtualized computing environment by
using virtualization technology that is built in to Windows Server. Installing the Hyper-V role
installs the required components and optionally installs management tools. The required
components include Windows hypervisor, Hyper-V Virtual Machine Management Service, the
virtualization WMI provider, and other virtualization components such as the virtual machine
bus (VMbus), virtualization service provider (VSP) and virtual infrastructure driver (VID).
The management tools for the Hyper-V role consist of:

GUI-based management tools: Hyper-V Manager, a Microsoft Management Console
(MMC) snap-in, and Virtual Machine Connection, which provides access to the video
output of a virtual machine so you can interact with the virtual machine.

Hyper-V-specific cmdlets for Windows PowerShell. Windows Server 2012 includes a
Hyper-V module, which provides command-line access to all the functionality available
in the GUI, as well as functionality not available through the GUI.

If you use Server Manager to install the Hyper-V role, the management tools are included unless
you specifically exclude them. If you use Windows PowerShell to install the Hyper-V role, the
management tools are not included by default. The Hyper-V technology virtualizes hardware to
provide an environment in which you can run multiple operating systems at the same time on one
physical computer. Hyper-V enables you to create and manage virtual machines and their
resources. Each virtual machine is an isolated, virtualized computer system that can run its own
operating system. The operating system that runs within a virtual machine is called a guest
operating system.

Hyper-V features
Hyper-V is the server virtualization platform developed by Microsoft. It is mostly used by
small and medium-sized businesses (SMBs). Its features include:
Live Migration: maintaining network connections and uninterrupted services during VM
migration between physical hosts.
Quick Migration: a guest VM is suspended on one host and resumed on another host.
This operation happens in the time it takes to transfer the active memory of the guest VM
over the network from the first host to the second host.
Dynamic Memory
Microsoft offers its hypervisor in two versions: stand-alone software and a component included
in the Windows Server 2008 OS.
Hyper-V itself is free, but it requires a paid Windows Server license. In addition, you can
purchase the Microsoft System Center software suite and System Center Virtual Machine
Manager for management of Hyper-V and VMware environments, but these cost extra.
According to one survey, Microsoft Hyper-V controls 13% of the server virtualization
segment of the IT industry.
Usually, Hyper-V is selected as a hypervisor by users who don't have exceptional
performance and functionality requirements. This virtualization tool supports a limited number
of applications and utilities.

Practical applications
Hyper-V provides infrastructure so you can virtualize applications and workloads to support a
variety of business goals aimed at improving efficiency and reducing costs, such as:

Establish or expand a private cloud environment. Hyper-V can help you move to or
expand use of shared resources and adjust utilization as demand changes, to provide more
flexible, on-demand IT services.

Increase hardware utilization. By consolidating servers and workloads onto fewer, more
powerful physical computers, you can reduce consumption of resources such as power
and physical space.

Improve business continuity. Hyper-V can help you minimize the impact of both
scheduled and unscheduled downtime of your workloads.

Establish or expand a virtual desktop infrastructure (VDI). A centralized desktop strategy
with VDI can help you increase business agility and data security, as well as simplify
regulatory compliance and management of the desktop operating system and
applications. Deploy Hyper-V and Remote Desktop Virtualization Host (RD
Virtualization Host) on the same physical computer to make personal virtual desktops or
virtual desktop pools available to your users.

Increase efficiency in development and test activities. You can use virtual machines to
reproduce different computing environments without the need for acquiring or
maintaining all the hardware you would otherwise need.

Hardware requirements
Hyper-V requires a 64-bit processor that includes the following:

Hardware-assisted virtualization. This is available in processors that include a
virtualization option: specifically, processors with Intel Virtualization Technology (Intel
VT) or AMD Virtualization (AMD-V) technology.

Hardware-enforced Data Execution Prevention (DEP) must be available and enabled.
Specifically, you must enable the Intel XD bit (execute disable bit) or the AMD NX bit (no
execute bit).

Software requirements (for supported guest operating systems)

Hyper-V includes a software package for supported guest operating systems that improves
integration between the physical computer and the virtual machine. This package is referred to as
integration services. In general, you install this package in the guest operating system as a
separate procedure after you set up the operating system in the virtual machine. However, some
operating systems have the integration services built in and do not require a separate installation.
