Chapter 1
Introduction
Technological advancement usually makes life easier and more convenient for mankind, but it has not always been positive for Mother Nature, our planet. The last few decades have seen enormous technological development along with a leapfrog increase in consumption as the two most populous countries of the world join the bandwagon. Earlier, with consumption confined to a few select countries, emissions and other side effects, though harmful, amounted to slow poisoning. But the needs of this new club of users from China, India, Brazil and many other so-called emerging economies are overwhelming Mother Nature, particularly her capacity to absorb the overall Green House Gas (GHG) emissions. This has accelerated the obvious side effects: a rise in average global temperature, irregular weather patterns, changing wind patterns, and elevated sea levels, all of which affect the plant and animal kingdoms. The impact on our planet has brought us to a junction where the future needs to be considered carefully, to sustain and make our planet green again and to prevent a catastrophe.
1.1.1 Overall Impact
The ICT sector has grown at an extraordinary pace over the last two decades, transforming society and the economy. ICT affects business, lifestyle and family relationships like never before, and as the sector grows, its GHG emissions will continue to grow. Below are a few examples of the explosive growth of ICT:
The number of computers connected to the Internet is expected to cross 3 billion by 2011. According to some projections, by 2020 the number of devices connected to the Internet will be around 50 billion. Today there are more than 1.5 billion Internet users. As more users from developing nations come online, this number will increase significantly over the years; many of these users will access the Internet via their mobile phones.
Global mobile phone penetration is already approaching 50%, while the number of mobile phone users in India as of May 2010 has already crossed 617 million, with annual growth of close to 50%.
For most economies, the share of Gross Domestic Product (GDP) attributable to the ICT sector is already significant and increases each year. In India, the ICT sector contributed about 5.8% of national GDP in Fiscal Year 2009; in developed economies such as the United Kingdom, the share is close to 7%. As of 2007, the ICT sector was responsible for about 2% of total carbon emissions, at over 0.8 billion tonnes of CO2 equivalent. Given the growth happening in the ICT sector, total emissions from this sector are estimated to rise to about 1.4 billion tonnes by 2020. The segment-wise contribution to the total carbon footprint of the ICT sector is shown in Figure 1.
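The emissions figures quoted above imply a steady annual growth rate. A quick back-of-envelope check, using only the two data points given in the text (0.8 billion tonnes in 2007, 1.4 billion tonnes in 2020), recovers it:

```python
# Implied compound annual growth rate (CAGR) of ICT-sector emissions,
# computed from the two figures quoted in the text.
def cagr(initial: float, final: float, years: int) -> float:
    """Constant annual growth rate that links the two values."""
    return (final / initial) ** (1.0 / years) - 1.0

rate = cagr(0.8, 1.4, 2020 - 2007)
print(f"Implied annual growth: {rate * 100:.1f}%")  # about 4.4% per year
```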
signal processing and associated circuitry, power amplifiers, power supply and air
conditioning.
Fig 2: CO2 footprint of telecom devices and infrastructure by 2020 (Mt: million tonnes).
(2) Almost all Cathode Ray Tube displays will be replaced by energy-efficient Liquid Crystal Displays. Both changes will bring significant efficiencies; however, the increase in the number of PCs means that the total CO2 footprint in 2020 will be three times the 2002 level.
The Internet is often represented as a cloud, and the term “cloud computing” arises from that analogy. Accenture defines cloud computing as the dynamic provisioning of IT capabilities (hardware, software, or services) from third parties over a network. McKinsey describes clouds as hardware-based services offering compute, network and storage capacity where: hardware management is highly abstracted from the buyer; buyers incur infrastructure costs as variable OPEX (operating expenditure); and infrastructure capacity is highly elastic (up or down). The cloud model differs from traditional outsourcing in that customers do not hand over their own IT resources to be managed. Instead they plug into the cloud, treating it as they would an internal data center or computer providing the same functions.
In cloud computing, end users share a large, centrally managed pool of storage and computing resources rather than owning and managing their own systems [5]. There are many definitions of cloud computing, and discussion within the IT industry continues over the services that may be offered in the future. The broad scope of cloud computing is succinctly summarized as:
Cloud computing is a model for enabling convenient, on-demand network access to a shared
pool of configurable computing resources that can be rapidly provisioned and released with
minimal management effort or service provider interaction
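The elasticity in this definition (rapid provisioning and release with minimal management effort) can be illustrated with a toy shared resource pool. The class below is invented purely for illustration and implies no real cloud API:

```python
# A toy shared pool: capacity is granted and returned on demand with no
# per-request administration. Invented for illustration only.
class ResourcePool:
    def __init__(self, capacity: int):
        self.capacity = capacity  # total units in the shared pool
        self.in_use = 0

    def provision(self, units: int) -> bool:
        """Grant the request immediately if the pool can cover it."""
        if self.in_use + units > self.capacity:
            return False  # exhausted; a real elastic cloud would grow here
        self.in_use += units
        return True

    def release(self, units: int) -> None:
        """Return capacity to the pool for other tenants to reuse."""
        self.in_use = max(0, self.in_use - units)
```

A subscriber simply calls provision() and release(); no administrator is involved in either step, which is the “minimal management effort” the definition refers to.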
Often using existing data centers as a basis, cloud service providers invest in the necessary infrastructure and management systems, and in return receive a time-based or usage-based fee from end users. The end user in turn sees convenience benefits from having data and services available from any location, from having data backups centrally managed, and from the availability of increased capacity when needed. One of the most important points is that, for many users, it averts the need for a large one-off investment in hardware sized to suit maximum demand and requiring upgrades every few years. Further benefits flow from centralized maintenance of software packages, centralized data backups, and the balancing of user demand across multiple servers or data center sites. A number of organizations are already hosting and/or offering cloud computing services.
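The point about avoiding a large one-off investment can be made concrete with a rough cost sketch. All figures below are invented for illustration; real pricing varies widely:

```python
# A back-of-envelope comparison of a one-off hardware purchase sized for
# peak demand versus usage-based cloud fees. All numbers are illustrative.
def owned_cost(hardware_price: float, lifetime_years: float) -> float:
    """Annualised cost of owning hardware sized for maximum demand."""
    return hardware_price / lifetime_years

def cloud_cost(hourly_rate: float, hours_used_per_year: float) -> float:
    """Annual cost when paying only for actual usage."""
    return hourly_rate * hours_used_per_year

# A server busy 20% of the year (1752 of 8760 hours): cloud billing
# tracks actual usage, while the owned server costs the same regardless.
print(owned_cost(12000, 4))    # 3000.0 per year regardless of usage
print(cloud_cost(0.5, 1752))   # 876.0 per year at 20% utilisation
```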
But while its financial benefits have been widely discussed, the shift in energy usage under a cloud computing model has received little attention. Through the use of large shared servers and storage units, cloud computing can offer energy savings in the provision of computing and storage services, particularly if the end user migrates to a computer or terminal of lower capability and lower energy consumption. At the same time, cloud computing increases network traffic and the associated network energy consumption. Here we therefore explore the balance between server energy consumption, network energy consumption, and end-user energy consumption, to present a fuller assessment of the benefits of cloud computing. Energy consumption in information technology equipment has been receiving increasing attention in recent years, and there is growing recognition of the need to manage it across the entire information and communications technology (ICT) sector. That is why we need to discuss Green Cloud Computing: to make cloud computing more eco-efficient and green.
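The balance just described, between server, network, and end-user energy, can be sketched as a simple per-user model. All wattage figures below are illustrative assumptions, not measurements:

```python
# A simplified per-user energy balance: cloud computing saves server energy
# through sharing but adds network transport energy. Numbers are assumed.
def local_energy(pc_watts: float, hours: float) -> float:
    """Energy (Wh) when computation runs on the user's own PC."""
    return pc_watts * hours

def cloud_energy(thin_client_watts: float, server_watts_per_user: float,
                 network_watts_per_user: float, hours: float) -> float:
    """Energy (Wh) when a thin client offloads work to a shared server."""
    per_user = thin_client_watts + server_watts_per_user + network_watts_per_user
    return per_user * hours

print(local_energy(150, 8))         # 1200 Wh on a desktop PC
print(cloud_energy(30, 20, 15, 8))  # 520 Wh with a thin client
```

The cloud case wins here only because the assumed per-user server and network shares stay small; heavy network traffic erodes the saving, which is exactly the trade-off the text explores.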
Chapter 2
Literature survey
Cloud computing has been defined by National Institute of Standards and Technology as a
model for enabling convenient, on-demand network access to a shared pool of configurable
computing resources (e.g., networks, servers, storage, applications, and services) that can be
rapidly provisioned and released with minimal management effort or cloud provider
interaction. Cloud computing can be considered a new computing paradigm insofar as it
allows the utilization of a computing infrastructure at one or more levels of abstraction, as an
on-demand service made available over the Internet or other computer network. Because of
the implications for greater flexibility and availability at lower cost, cloud computing is a
subject that has been receiving a good deal of attention lately.
Cloud computing services benefit from economies of scale achieved through versatile
use of resources, specialization, and other practicable efficiencies. However, cloud
computing is an emerging form of distributed computing that is still in its infancy. The term
itself is often used today with a range of meanings and interpretations. Much of what has
been written about cloud computing is definitional, aimed at identifying important paradigms
of use and providing a general taxonomy for conceptualizing important facets of service.
have to put all the data that they wish to access on the centralized server provided by the service provider. Public cloud computing is extremely suitable for end users with a low budget and for those who wish to access their data from anywhere in the world.
Chapter 3
Cloud Service Models And Security Issues in Cloud
Cloud Computing is a broad and ill-defined term, but in essence it amounts to virtualised
third-party hosting. That is, rather than renting part or all of an actual physical server from a
hosting company, you rent a certain amount of server resources. Your server runs inside a
virtual container which can be moved from one physical server to another without
interruption of service. Such a container is also capable of spanning multiple physical
machines, giving it potentially limitless resources.
Depending on what we choose, we have different cloud computing models. Let us discuss them one by one.
charged a monthly or yearly fee for access to the latest version of the software. Additionally, the software is hosted in the cloud and all computation is performed in the cloud. The client’s PC is only used to transmit commands and receive results. Typically, users are free to use any computer connected to the Internet. However, at any time, only a fixed number of instances of the software are permitted to run per user. One example of Software as a Service is Google Docs. When a user exclusively uses network- or Internet-based software services, the concept is similar to a thin client model, where each user’s client computer functions primarily as a network terminal, performing input, output, and display tasks, while data are stored and processed on a central server. Thin clients were popular in office environments prior to the widespread use of PCs. In this scenario, data storage and processing are always performed in the cloud, and we can thus significantly reduce the functionality, and consequently the power consumption, of the client’s PC.
The second type is the simplest form of cloud computing: effectively virtualised shared-server space into which you put anything you want. This may be an internal business application, but it could equally be a public-facing website. This type works almost exactly the same way as regular shared-server hosting. The host gives you access to a virtual directory to which you can upload a single website’s files, but any changes to the platform or environment have to be handled by the hosting company. For multiple web applications, you need multiple SaaS virtual sites, which are charged separately but can all be managed through a single web console.
So how is SaaS better than an ordinary shared server? With SaaS, the website exists
in a virtual container, which can be moved from physical server to physical server without
interruption of service. This makes it much easier to transfer the site to more powerful
hardware as the need arises, but all of the other limitations of shared servers still apply: very
limited configurability, no third-party software or background services, and so on.
Advantages:
VPS machine images can be saved as a file which can be deployed as a virtual server later.
This means that one effectively has a “clean install” of the required environment available at
all times as a rollback position in the event of a catastrophic failure. Moreover, the state of the
running virtual server can be “saved” at any time, providing a very convenient backup
procedure, albeit one that would be difficult to test without incurring charges from the
provider (the server image would be in a proprietary format, so the only way to test would be
to pay them to launch an instance of it). The basic cost of a VPS server appears to be slightly
lower than that of an equivalent dedicated platform. As with all cloud solutions, cost scales
according to use, so costs come down if traffic levels fall, which is very useful for anyone
whose revenue stream depends on traffic.
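The snapshot-and-rollback workflow described above can be sketched in miniature. Real server images are provider-specific binary formats; this toy class only captures the idea of saving state as a rollback position:

```python
# A toy model of VPS snapshot and rollback. Invented for illustration;
# no provider's actual image format or API is implied.
import copy

class VirtualServer:
    def __init__(self):
        self.state = {"packages": ["base"], "data": []}
        self._snapshot = None

    def snapshot(self) -> None:
        """Save the running state as a rollback position."""
        self._snapshot = copy.deepcopy(self.state)

    def rollback(self) -> None:
        """Restore the last saved image after a catastrophic failure."""
        if self._snapshot is not None:
            self.state = copy.deepcopy(self._snapshot)

vps = VirtualServer()
vps.snapshot()
vps.state["data"].append("corrupted")  # simulate a bad change
vps.rollback()
print(vps.state["data"])               # []
```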
Disadvantages:
One potential problem is that Virtual Private Servers are typically incapable of passing a PCI
Security Audit, making them unsuitable for high-security functions like handling credit card
information. One could not, therefore, use a VPS to host an e-commerce site. In addition,
there is the issue of loss of control. Providers like Amazon reserve the right to shut off the
server without prior notice if it is behaving in a way that leads them to believe it has been
compromised by hackers, or if they think it is being used for unethical activities like
spamming. This means that if you were to end up on a blacklist by mistake, the consequences
would be worse than with a non-cloud server.
Advantages
Any PaaS solution comes with the advantage of minimising the developer’s maintenance time while still providing a considerable amount of customisation and configuration. There is also an argument to be made that since the two biggest players in the industry, Microsoft and Google, are investing so heavily in PaaS cloud computing, there is a certain inevitability to their emergence as a standard. This means that the odds of useful third-party tools being developed for PaaS systems are very high.
Disadvantages
Vendor lock-in is always a concern when it comes to PaaS. One would have to write
applications to be tailored to the chosen platform, and migrating an application out of that
platform onto a standard dedicated server would be a problem. As with IaaS, full compliance
with security standards like PCI would be a problem. All PaaS platforms share the
disadvantage that there is a limit to the options available in terms of third-party applications.
This is mainly caused by the relative novelty of the platforms, which means very few
developers have released compatible versions of their software at this stage. Both Microsoft Azure and Google App Engine require developers to port their software to the
new platform. It will take some time before the full range of software currently available on a
dedicated Windows Server becomes available on PaaS.
These are a few different models of cloud computing that relate to green cloud computing. Basically, every green cloud model is a cloud computing model, but not every cloud computing model is a green cloud computing model.
3.2.1 Trust
Under the cloud computing paradigm, an organization relinquishes direct control over many
aspects of security and, in doing so, confers an unprecedented level of trust onto the cloud
provider.
A) Insider Access.
Data processed or stored outside the confines of an organization, its firewall, and other security controls brings with it an inherent level of risk. The insider security threat is a
well-known issue for most organizations and, despite the name, applies as well to outsourced
cloud services. Insider threats go beyond those posed by current or former employees to
include contractors, organizational affiliates, and other parties that have received access to an
organization’s networks, systems, and data to carry out or facilitate operations. Incidents may
involve various types of fraud, sabotage of information resources, and theft of confidential
information. Incidents may also be caused unintentionally—for instance, a bank employee
sending out sensitive customer information to the wrong Google mail account. Moving data
and applications to a cloud computing environment operated by a cloud provider expands the
insider security risk not only to the cloud provider’s staff, but also potentially among other
customers using the service. For example, a denial of service attack launched by a malicious
insider was demonstrated against a well-known IaaS cloud. The attack involved a cloud
subscriber creating an initial 20 accounts and launching virtual machine instances for each,
then using those accounts to create an additional 20 accounts and machine instances in an
iterative fashion, exponentially growing and consuming resources beyond set limits.
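The iterative account-creation attack grows exponentially: if every account creates 20 more per round, the total is 20 raised to the number of rounds. A minimal sketch:

```python
# Exponential growth of the insider DoS attack described above: each
# round, every existing account creates `fanout` new accounts.
def accounts_after(rounds: int, fanout: int = 20) -> int:
    """Number of accounts after a given number of creation rounds."""
    return fanout ** rounds

for r in range(1, 4):
    print(r, accounts_after(r))  # 1 20, 2 400, 3 8000
```

Three rounds already yield 8,000 accounts, each with its own virtual machine instances, which is how the attack consumed resources beyond set limits.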
B) Composite Services.
Cloud services themselves can be composed through nesting and layering with other
cloud services. For example, a SaaS provider could build its services upon the services of a
PaaS or IaaS cloud. The level of availability of the SaaS cloud would then depend on the
availability of those services. Cloud services that use third party cloud providers to outsource
or subcontract some of their services should raise concerns, including the scope of control
over the third party, the responsibilities involved, and the remedies and recourse available
should problems occur. Trust is often not transitive, requiring that third-party arrangements
be disclosed in advance of reaching an agreement with the cloud provider, and that the terms
of these arrangements are maintained throughout the agreement or until sufficient
notification can be given of any anticipated changes. Liability and performance guarantees
can become a serious issue with composite cloud services. For example, a consumer storage-
based social networking service closed down after losing access to a significant amount of
data from 20,000 of its subscribers. Because it relied on another cloud provider to host
historical data, and on yet another cloud provider to host its newly launched application and
database, direct responsibility for the cause of the failure was unclear and never resolved.
C) Visibility.
Migration to public cloud services relinquishes control to the cloud provider for
securing the systems on which the organization’s data and applications operate.
Management, procedural, and technical controls used in the cloud must be commensurate
with those used for internal organizational systems or surpass them, to avoid creating gaps in
security. Since metrics for comparing two computer systems are an ongoing area of research,
making such comparisons can be a formidable task. Cloud providers are typically reluctant
to provide details of their security and privacy, since such information might be used to
devise an avenue of attack. Moreover, detailed network and system level monitoring by a
cloud subscriber is generally not part of most service arrangements, limiting visibility and the
means to audit operations directly. Transparency in the way the cloud provider operates is a
vital ingredient for effective oversight over system security and privacy by an organization.
To ensure that policy and procedures are being enforced throughout the system lifecycle,
service arrangements should include some means for gaining visibility into the security
controls and processes employed by the cloud provider and their performance over time.
Ideally, the organization would have control over aspects of the means of visibility, such as
the threshold for alerts and notifications or the level of detail and schedule for reports, to
accommodate its needs.
3.2.2 Architecture
The architecture of the software systems used to deliver cloud services comprises hardware
and software residing in the cloud. The physical location of the infrastructure is determined
by the cloud provider as is the implementation of the reliability and scalability logic of the
underlying support framework. Virtual machines often serve as the abstract unit of
deployment and are loosely coupled with the cloud storage architecture. Applications are
built on the programming interfaces of Internet-accessible services, which typically involve
multiple cloud components communicating with each other over application programming
interfaces. Many of the simplified interfaces and service abstractions belie the inherent
complexity that affects security.
A) Attack Surface.
The hypervisor or virtual machine monitor is an additional layer of software between an
operating system and hardware platform that is used to operate multi-tenant virtual machines.
Besides virtualized resources, the hypervisor normally supports other application
programming interfaces to conduct administrative operations, such as launching, migrating,
and terminating virtual machine instances. Compared with a traditional non-virtualized
implementation, the addition of a hypervisor causes an increase in the attack surface. The
complexity in virtual machine environments can also be more challenging than their
traditional counterparts, giving rise to conditions that undermine security. For example,
paging, checkpointing, and migration of virtual machines can leak sensitive data to persistent
storage, subverting protection mechanisms in the hosted operating system intended to prevent
such occurrences. Moreover, the hypervisor itself can potentially be compromised. For instance, a vulnerability was discovered in a widely used virtualization software product, in a routine for Network Address Translation (NAT), that allowed specially crafted File Transfer Protocol (FTP) requests to corrupt a heap buffer in the hypervisor, which could allow the execution of arbitrary code at the host.
B) Client-Side Protection.
A successful defense against attacks requires securing both the client and server side
of cloud computing. With emphasis typically placed on the latter, the former can be easily
overlooked. Web browsers, a key element for many cloud computing services, and the
various available plug-ins and extensions for them are notorious for their security problems.
Moreover, many browser add-ons do not provide automatic updates, increasing the
persistence of any existing vulnerabilities. Maintaining physical and logical security over
clients can be troublesome, especially with embedded mobile devices such as smart phones.
Their size and portability can result in the loss of physical control. Built-in security
mechanisms often go unused or can be overcome or circumvented without difficulty by a
knowledgeable party to gain control over the device. Smart phones are also treated more as fixed appliances with a limited set of functions than as general-purpose systems. No single operating system dominates, and security patches and updates for system components and
add-ons are not as frequent as for desktop clients, making vulnerabilities more persistent with
a larger window of opportunity for exploitation.
The increased availability and use of social media, personal Webmail, and other
publicly available sites also have associated risks that are a concern, since they can
negatively impact the security of the browser, its underlying platform, and cloud services
accessed, through social engineering attacks. For example, spyware was reportedly installed
in a hospital system via an employee’s personal Webmail account and sent the attacker more
than 1,000 screen captures, containing financial and other confidential information, before
being discovered. Having a backdoor Trojan, keystroke logger, or other type of malware
running on a client does not bode well for the security of cloud or other Web-based services
it accesses. As part of the overall security architecture for cloud computing, organizations
need to review existing measures and employ additional ones, if necessary, to secure the
client side. Banks are beginning to take the lead in deploying hardened browser environments
that encrypt network exchanges and protect against keystroke logging.
C) Server-Side Protection.
Virtual servers and applications, much like their non-virtual counterparts, need to be
secured in IaaS clouds, both physically and logically. Following organizational policies and
procedures, hardening of the operating system and applications should occur to produce
virtual machine images for deployment. Care must also be taken to provision security for the
virtualized environments in which the images run. For example, virtual firewalls can be used
to isolate groups of virtual machines from other hosted groups, such as production systems
from development systems or development systems from other cloud-resident systems.
Carefully managing virtual machine images is also important to avoid accidentally deploying
images under development or containing vulnerabilities. Hybrid clouds are a type of
composite cloud with similar protection issues. In a hybrid cloud the infrastructure consists
of a private cloud composed with either a public cloud or another organization’s private
cloud. The clouds themselves remain unique entities, bound together by standardized or
proprietary technology that enables unified service delivery, but also creates
interdependency. For example, identification and authentication might be performed through
an organization’s private cloud infrastructure, as a means for its users to gain access to
services provisioned in a public cloud. Preventing holes or leaks between the composed
infrastructures is a major concern with hybrid clouds, because of increases in complexity and
diffusion of responsibilities. The availability of the hybrid cloud, computed as the product of
the availability levels for the component clouds, can also be a concern; if the percent
availability of any one component drops, the overall availability suffers proportionately.
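The availability rule just stated, the product of the component clouds' availabilities, is easy to compute. For example, composing two clouds that are each 99.9% available yields a noticeably lower combined figure:

```python
# Overall availability of a composite (e.g. hybrid) cloud, modelled as the
# product of its components' availabilities. Figures are illustrative.
def composite_availability(*components: float) -> float:
    """Multiply per-component availabilities to get the overall figure."""
    result = 1.0
    for a in components:
        result *= a
    return result

# Two clouds at 99.9% each: the composition is available ~99.8% of the time.
print(f"{composite_availability(0.999, 0.999):.6f}")  # 0.998001
```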
A) Data Isolation.
Data can take many forms. For example, for cloud-based application development, it
includes the application programs, scripts, and configuration settings, along with the
development tools. For deployed applications, it includes records and other content created or
used by the applications, as well as account information about the users of the applications.
Access controls are one means to keep data away from unauthorized users; encryption is
another. Access controls are typically identity-based, which makes authentication of the
user’s identity an important issue in cloud computing. Database environments used in cloud
computing can vary significantly. For example, some environments support a multi-instance
model, while others support a multi-tenant model. The former provide a unique database
management system running on a virtual machine instance for each cloud subscriber, giving
the subscriber complete control over role definition, user authorization, and other
administrative tasks related to security. The latter provide a predefined environment for the
cloud subscriber that is shared with other tenants, typically through tagging data with a
subscriber identifier. Tagging gives the appearance of exclusive use of the instance, but relies
on the cloud provider to establish and maintain a sound secure database environment.
an emerging area of cryptography with few practical results to offer, leaving trust mechanisms as the main safeguard.
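The multi-tenant tagging scheme described above can be sketched with an in-memory table: every row carries a subscriber identifier, and every query filters on it, giving each tenant the appearance of exclusive use. The data and names below are illustrative only:

```python
# A minimal sketch of multi-tenant data tagging: rows are tagged with a
# subscriber identifier and queries filter on it. Illustrative data only.
rows = [
    {"tenant": "acme",   "record": "invoice-1"},
    {"tenant": "globex", "record": "invoice-7"},
    {"tenant": "acme",   "record": "invoice-2"},
]

def query(tenant: str):
    """Return only rows tagged with the caller's subscriber identifier."""
    return [r["record"] for r in rows if r["tenant"] == tenant]

print(query("acme"))  # ['invoice-1', 'invoice-2']
```

The isolation here depends entirely on the filter being applied consistently, which is why the text notes that the model relies on the provider to maintain a sound, secure database environment.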
B) Data Sanitization.
The data sanitization practices that a cloud provider implements have obvious
implications for security. Sanitization is the removal of sensitive data from a storage device
in various situations, such as when a storage device is removed from service or moved
elsewhere to be stored. Data sanitization also applies to backup copies made for recovery and
restoration of service, and also residual data remaining upon termination of service. In a
cloud computing environment, data from one subscriber is physically commingled with the
data of other subscribers, which can complicate matters. For instance, many examples exist
of researchers obtaining used drives from online auctions and other sources and recovering
large amounts of sensitive information from them. With the proper skills and equipment, it is
also possible to recover data from failed drives that are not disposed of properly by cloud
providers.
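A minimal single-pass sanitization sketch, using only the Python standard library, overwrites a file with zeros before deleting it. Real sanitization of shared, backed-up, or failed drives is far more involved than this:

```python
# A simplified sanitization sketch: overwrite the file with zero bytes,
# flush to disk, then delete. Real-world sanitization (multiple passes,
# commingled storage, failed drives) is considerably more complex.
import os

def zero_and_delete(path: str) -> None:
    """Overwrite the file's contents with zeros, sync, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())  # force the overwrite onto the device
    os.remove(path)
```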
3.2.4. Availability
In simple terms, availability is the extent to which an organization’s full set of computational
resources is accessible and usable. Availability can be affected temporarily or permanently,
and a loss can be partial or complete. Denial of service attacks, equipment outages, and
natural disasters are all threats to availability. The concern is that most downtime is
unplanned and can impact the mission of the organization.
A) Temporary Outages.
Despite employing architectures designed for high service reliability and availability,
cloud computing services can and do experience outages and performance slowdowns. A
number of examples illustrate this point. In February 2008, a popular storage cloud service
suffered a three-hour outage that affected its subscribers, including Twitter and other startup
companies. In June 2009, a lightning storm caused a partial outage of an IaaS cloud that
affected some users for four hours. Similarly, in February 2008, a database cluster failure at a
SaaS cloud caused an outage for several hours, and in January 2009, another brief outage
occurred due to a network device failure. In March 2009, a PaaS cloud experienced severe
degradation for about 22 hours due to networking issues related to an upgrade.
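The outages listed above translate directly into availability percentages; even a few hours of downtime makes a visible dent in a monthly figure:

```python
# Monthly availability implied by a given number of outage hours,
# assuming a 30-day month for simplicity.
def monthly_availability(outage_hours: float, days_in_month: int = 30) -> float:
    """Percentage of the month the service was up."""
    total = days_in_month * 24.0
    return 100.0 * (total - outage_hours) / total

print(f"{monthly_availability(3):.3f}%")   # 99.583% after a 3-hour outage
print(f"{monthly_availability(22):.3f}%")  # 96.944% after a 22-hour outage
```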
B) Denial of Service.
A denial of service attack involves saturating the target with bogus requests to
prevent it from responding to legitimate requests in a timely manner. An attacker typically
uses multiple computers or a botnet to launch an assault. Even an unsuccessful distributed
denial of service attack can quickly consume large amounts of resources to defend against
and cause charges to soar. The dynamic provisioning of a cloud in some ways simplifies the
work of an attacker to cause harm. While the resources of a cloud are significant, with
enough attacking computers they can become saturated [Jen09]. For example, a denial of
service attack against a code hosting site operating over an IaaS cloud resulted in more than
19 hours of downtime. Besides attacks against publicly accessible services, denial of service
attacks can occur against internally accessible services, such as those used in cloud
management. Internally assigned non-routable addresses, used to manage resources within a
cloud provider’s network, may also be used as an attack vector. A worst-case possibility that
exists is for elements of one cloud to attack those of another or to attack some of its own
elements.
A) Authentication.
A growing number of cloud providers support the SAML standard and use it to
administer users and authenticate them before providing access to applications and data.
SAML provides a means to exchange information, such as assertions related to a subject or
authentication information, between cooperating domains. SAML request and response
messages are typically mapped over the Simple Object Access Protocol (SOAP), which relies
on the eXtensible Markup Language (XML) for its format. SOAP messages are digitally
signed. For example, once a user has established a public key certificate for a public cloud,
the private key can be used to sign SOAP requests. SOAP message security validation is
complicated and must be carried out carefully to prevent attacks. For example, XML
wrapping attacks have been successfully demonstrated against a public IaaS cloud. XML
wrapping involves manipulation of SOAP messages. A new element (i.e., the wrapper) is
introduced into the SOAP Security header; the original message body is then moved under
the wrapper and replaced by a bogus body containing an operation defined by the attacker.
The original body can still be referenced and its signature verified, but the operation in the
replacement body is executed instead.
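The structure of the wrapping attack can be sketched with the standard library's XML parser. The envelope below is a simplified stand-in for a real SOAP message: the signed body survives under the wrapper, so a naive signature check that only locates the referenced element still passes, while a naive dispatcher executes the attacker's replacement body:

```python
# A structural sketch of XML wrapping using a simplified envelope; real
# SOAP messages and signature validation are considerably more involved.
import xml.etree.ElementTree as ET

envelope = ET.fromstring(
    "<Envelope>"
    "<Header><Wrapper>"
    "<Body Id='signed'><op>GetFile</op></Body>"  # original, still referenced
    "</Wrapper></Header>"
    "<Body><op>DeleteUser</op></Body>"           # attacker's replacement
    "</Envelope>"
)

# Naive check: the element referenced by the signature is present and intact.
signed = envelope.find(".//Body[@Id='signed']")
print(signed.find("op").text)          # GetFile  (signature still verifies)

# But a naive service dispatches on the first top-level Body it finds.
print(envelope.find("Body/op").text)   # DeleteUser (executed instead)
```

The defence is to validate that the signed element is the one actually processed, not merely that a signed element exists somewhere in the document.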
B) Access Control.
A) Hypervisor Complexity.
The security of a computer system depends on the quality of the underlying software
kernel that controls the confinement and execution of processes. A virtual machine monitor or
hypervisor is designed to run multiple virtual machines, each hosting an operating system
and applications, concurrently on a single host computer, and to provide isolation between
the different guest virtual machines. A virtual machine monitor can, in theory, be smaller and
less complex than an operating system. These characteristics generally make it easier to
analyze and improve the quality of security, giving a virtual machine monitor the potential to
be better suited for maintaining strong isolation between guest virtual machines than an
operating system is for isolating processes. In practice, however, modern hypervisors can be
large and complex, comparable to an operating system, which negates this advantage. For
example, Xen, an open source x86 virtual machine monitor, incorporates a modified Linux
kernel to implement a privileged partition for input/output operations, and KVM, another
open source effort, transforms a Linux kernel into a virtual machine monitor. Understanding
the use of virtualization by a cloud provider is a prerequisite to understanding the security
risk involved.
B) Attack Vectors.
Multi-tenancy in virtual machine-based cloud infrastructures, together with the
subtleties in the way physical resources are shared between guest virtual machines, can give
rise to new sources of threat. The most serious threat is that malicious code can escape the
confines of its virtual machine and interfere with the hypervisor or other guest virtual
machines. Live migration, the ability to transition a virtual machine between hypervisors on
different host computers without halting the guest operating system, and other features
provided by virtual machine monitor environments to facilitate systems management, also
increase software size and complexity and potentially add other areas to target in an attack.
Several examples illustrate the types of attack vectors possible. The first is mapping the
cloud infrastructure. While this may seem a daunting task, researchers have demonstrated
an approach with a popular IaaS cloud. By launching multiple virtual machine
instances from multiple cloud subscriber accounts and using network probes, assigned IP
addresses and domain names were analyzed to identify service location patterns. Building on
that information and general technique, the plausible location of a specific target virtual
machine could be identified and new virtual machines instantiated to be eventually co-
resident with the target.
Once a suitable target location is found, the next step for the guest virtual machine is
to bypass or overcome containment by the hypervisor or to take down the hypervisor and
system entirely. Weaknesses in the provided programming interfaces and the processing of
instructions are common targets for uncovering vulnerabilities to exploit. For example, a
serious flaw that allowed an attacker to write to an arbitrary out-of-bounds memory location
was discovered in the power management code of a hypervisor by fuzzing emulated I/O
ports. A denial of service vulnerability, which could allow a guest virtual machine to crash
the host computer along with the other virtual machines being hosted, was also uncovered in
a virtual device driver of a popular virtualization software product. More indirect attack
avenues may also be possible. For example, researchers developed a way for an attacker to
gain administrative control of guest virtual machines during a live migration, employing a
man-in-the-middle attack to modify the code used for authentication. Memory modification
during migration presents other possibilities, such as the potential to insert a virtual machine-
based rootkit layer below the operating system. A zero-day exploit in HyperVM, an open
source application for managing virtual private servers, purportedly led to the destruction of
approximately 100,000 virtual server-based Websites hosted by a service provider. Another
example of an indirect attack involves monitoring resource utilization on a shared server to
gain information and perhaps perform a side-channel attack, similar to attacks used in other
shared computing environments.
Chapter 4
Green Cloud Architecture
Green Computing enables companies to meet business demands for cost-effective, energy-
efficient, flexible, secure and stable solutions while being environmentally responsible. There is
no denying it: the cost of energy is out of control and it affects every industry in the world,
including information technology. As the old adage goes, "out-of-sight, out-of-mind." With
neither drainage pipes nor chimneys, it's easy to forget that our clean, cool data centers can
have significant impact on both the corporate budget and the environment. Every data center
transaction requires power. Every IT asset purchased must eventually be disposed of, one
way or another. Efficiency, equipment disposal and recycling, and energy consumption,
including power and cooling costs, have become a priority for those who manage the data
centers that make businesses run.
“Green Computing” is defined as the study and practice of using computing resources
efficiently through a methodology that combines reducing hazardous materials, maximizing
energy efficiency during the product’s lifetime, and recycling older technologies and defunct
products.
Most data centers built before 2001 were designed according to traditional capacity
models and technology limitations, which forced system architects to expand capacity by
attaching new assets. In essence, one server per workload, with every asset requiring
dedicated floor space, management, power and cooling. These silo infrastructures are
inherently inefficient, leading to asset underutilization, greater hardware expenditure and
higher total energy consumption. In a 2006 study, the respected research firm IDC found that
the expense to power and cool a company’s existing install-base of servers equated to 45.8%
of new IT spending. The analyst group forecasted that server power and cooling expense
could amount to 65.8% of new server spending by 2011. Also according to the IDC:
1. Right now, 50 cents of every dollar spent on IT equipment is devoted to powering and
cooling; by 2011 that per unit cost might well approach 70 cents of every dollar.
2. Experience has shown that growing companies typically add more servers, rather than
implementing a consolidation or virtualization solution. More servers mean larger
utility bills and potentially greater environmental issues.
3. Between 2000 and 2010 server installations will grow by 6 times and storage by 69
times. (IBM/Consultant Studies)
4. U.S. energy consumption by data centers is expected to almost double in the next five
years (U.S. EPA, August 2007)
5. U.S. commercial electrical costs increased by 10% from 2005 to 2006 (EPA Monthly
Forecast, 2007)
6. Data center power and cooling costs have increased 800% since 1996.
(IBM/Consultant Studies)
7. Over the next five years, it is expected that most U.S. data centers will spend as much
on energy costs as on hardware, and twice as much as they currently do on server
management and administration costs. (IBM/Consultant Studies)
These trends give rise to three pressing data center concerns:
1. Reducing data center energy consumption, as well as power and cooling costs
2. Security and data access are critical and must be more easily and efficiently managed
3. Critical business processes must remain up and running in a time of power drain or
surge
These issues are leading more companies to adopt a Green Computing plan for
business operations, energy efficiency and IT budget management. Green Computing is
becoming recognized as a prime way to optimize the IT environment for the benefit of the
corporate bottom line – as well as the preservation of the planet. It is about efficiency, power
consumption and the application of such issues in business decision-making. Simply stated,
Green Computing benefits the environment and a company’s bottom line. It can be a win/win
situation, meeting business demands for cost-effective, energy-efficient, flexible, secure and
stable solutions, while demonstrating new levels of environmental responsibility.
The figure of the GreenCloud architecture demonstrates the functions of the components and
their relations in the architecture.
Migration Manager triggers live migration and makes decisions on the placement of
virtual machines on physical servers based on knowledge or information provided by the
Monitoring Service. The migration scheduling engine searches for an optimal placement using
a heuristic algorithm, and sends instructions to execute the VM migration and turn a
server on or off. The heuristic algorithm for searching an optimal VM placement and the
implementation details of the Migration Manager will be discussed in Section IV. The output
of the algorithm is an action list consisting of migrate actions (e.g. Migrate VM1 from PM2
to PM4) and local adjustment actions (e.g. Set VM2 CPU to 1500MHz).
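To make the Migration Manager's role concrete, here is a sketch of a greedy first-fit-decreasing placement heuristic of the kind such a component could use to consolidate VMs onto as few physical machines as possible. This is an illustrative assumption, not the algorithm from the cited Section IV; the function name, CPU units, and action format are invented for the example.

```python
def plan_placement(vms, capacity_mhz):
    """vms: {vm_name: cpu_demand_mhz}. Returns (placement, actions) where
    placement maps each VM to a physical machine index and actions is the
    resulting migrate list, e.g. 'Migrate VM1 to PM0'."""
    free = []          # remaining CPU capacity of each powered-on server
    placement = {}
    # Place the largest VMs first: this tends to keep fewer servers on.
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for i, room in enumerate(free):
            if room >= demand:           # first server with enough room
                free[i] -= demand
                placement[vm] = i
                break
        else:                            # no fit: power on another server
            free.append(capacity_mhz - demand)
            placement[vm] = len(free) - 1
    actions = [f"Migrate {vm} to PM{pm}" for vm, pm in placement.items()]
    return placement, actions

placement, actions = plan_placement(
    {"VM1": 1500, "VM2": 800, "VM3": 600}, capacity_mhz=2000)
# VM1 fills PM0; VM2 and VM3 share PM1, so only two servers stay powered on.
```

Any unused servers implied by the plan can then be issued power-off actions, which is where the energy saving comes from.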
E-Map is a web-based service with Flash front-end. It provides a user interface (UI)
to show the real-time view of present and past system on/off status, resource consumption,
workload status, temperature and energy consumption in the system at multiple scales, from
high-level overview down to individual IT devices (e.g. servers and storage devices) and
other equipment (e.g. water- or air-cooling devices). E-Map is connected to the Workload
Simulator, which predicts the consequences of a given set of actions adopted by the Migration
Manager through simulation of the real environment.
1. Diagnose :-
It is difficult to manage what cannot be measured, particularly when it comes to
energy efficiency. It has been estimated that 40% of small and mid-size businesses in the
United States do not know how much they spend on overall energy costs for their IT systems.
It is important for a company to collect accurate, detailed information on its energy
efficiency as a first step in pinpointing areas for potential improvement and to identify
existing systems ready for retirement. Mainline and IBM provide Energy Efficiency
Assessments, which are proven tools for diagnosing the energy demands of physical
infrastructure and IT equipment.
2. Build :–
After identifying needs and solution requirements, and reviewing Energy
Efficiency Assessments, the second step includes planning and designing the new solution.
3. Virtualize :–
Virtualization can produce the fastest and greatest impact on energy efficiency in an
information technology center. Consolidating an IT infrastructure can increase utilization and
lower annual power costs. Reducing the number of servers and storage devices through
virtualization strategies can create a leaner data center without sacrificing performance. Less
complexity, reduced cost, better utilization and improved management are all benefits of
server, storage and desktop virtualization, and all of them help achieve Green Computing.
4. Manage :–
Data center energy consumption is managed through provisioning and virtualization
management software, providing important power alerts, as well as trending, capping and
heat measurements. Such software can reduce power consumption by 80% annually.
5. Cool :–
Excessive heat threatens equipment performance and operating stability. Innovative
IBM cooling solutions for inside and outside the data center minimize hotspots and reduce
energy consumption. IBM's patented Rear Door Heat eXchanger "cooling doors" are now
available across most IBM Systems offerings. While requiring no additional fans or
electricity, they reduce server heat output in data centers up to 60% by utilizing chilled water
to dissipate heat generated by computer systems.
3. If a company opts for cloud storage, it does not need to purchase the hardware
   required to implement and create a network of its own. This also helps in cost
   cutting, as there is no need to appoint staff to manage and maintain the network.
   All the company needs to do is outsource its requirements to a cloud managing
   company.
4. This technique is also eco-friendly and economical, because instead of purchasing
   a new server and using it below its capacity, we use only the amount of server
   space we require. It also helps off-load the power required for processing during
   peak hours.
5. Since this technique uses only as much server capacity as needed, the space
   required to store, manage and cool the servers also reduces drastically. This not
   only helps save energy but also proves economically viable in all respects.
3. There are systems management tools for the cloud environment but they may not
integrate with existing system management tools, so you are likely to need two
systems. Nevertheless, cloud computing may provide enough benefits to compensate
for the inconvenience of two tools.
4. Cloud customers may risk losing data by having them locked into proprietary formats
and may lose control of data because tools to see who is using them or who can view
them are inadequate.
5. Data loss is a real risk. In October 2009 1 million US users of the T-Mobile Sidekick
mobile phone and emailing device lost data as a result of server failure at Danger, a
company recently acquired by Microsoft.
6. It may not be easy to tailor service-level agreements (SLAs) to the specific needs of a
business. Compensation for downtime may be inadequate and SLAs are unlikely to
cover concomitant damages, but not all applications have stringent uptime
requirements. It is sensible to balance the cost of guaranteeing internal uptime against
the advantages of opting for the cloud. It could be that your own IT organization is
not as sophisticated as it might seem.
7. Standards are immature and things change very rapidly in the cloud. All IaaS and
SaaS providers use different technologies and different standards. The storage
infrastructure behind Amazon is different from that of the typical data center (e.g., big
Unix file systems). The Azure storage engine does not use a standard relational
database; Google’s App Engine does not support an SQL database. So you cannot just
move applications to the cloud and expect them to run. At least as much work is
involved in moving an application to the cloud as is involved in moving it from an
existing server to a new one. There is also the issue of employee skills: staff may
need retraining and they may resent a change to the cloud and fear job losses.
Bear in mind, though, that it is easy to underestimate risks associated with the current
environment while overestimating the risk of a new one. Cloud computing is not risky for
every system. Potential users need to evaluate security measures such as firewalls and
encryption techniques, and make sure that they will have access to data and the software or
source code if the service provider goes out of business.
Chapter 5
Future Scope
So far we have discussed what cloud computing is, why it has become such a buzzword in the
IT industry, and what its advantages and disadvantages are. We then studied the Green
Cloud architecture. From all this we can conclude that there is still considerable scope for
improving the cloud architecture, so as to make it more economical, reliable, energy efficient
and eco-friendly.
1. First, the end user has to register with a cloud service provider, such as Amazon,
   for the cloud service. Then, according to the customer's needs, the service is
   provided under one of the cloud service models.
2. Whatever data the customer needs is stored at data centres built and maintained by
   the cloud service provider, and the user can then access that data from any place
   in the world, for which, of course, an active Internet connection is needed.
All this is quite similar to signing up for a mail service such as Gmail, wherein we create a
user account and can then read and send mail to anyone by means of that account.
Here the actual entities we need to focus on come into the picture. As mentioned above,
whenever a user signs up with a cloud service provider, he puts his entire data, which he
wants to access from any place, onto the centralized servers of the service provider. In
order to store this enormous amount of data, cloud service providers need large data centres
with server machines and many other expensive networking devices. In fact, some service
providers may end up with a grid architecture in place onto which they store these massive
amounts of data.
Now, a grid architecture is very large, complicated and expensive to maintain, and it
carries the overhead of cooling. So the cloud service provider must not only make a huge
one-time investment in the expensive networking components used to construct the grid, but
must also invest heavily in cooling and maintaining the data centre. Moreover, the life of
such an architecture is very short compared to the investment it requires. For instance, a
supercomputer like Param Yuva, rated 68th among the best supercomputers worldwide, is
expected to have a life span of about 6 to 7 years. This is very short if we need to spend
about $1 million to purchase it and then about $1 million annually in maintaining and
cooling it.
We now suggest some modifications by which we can improve the efficiency of the cloud to a
great extent, which will in turn make it much more economical and viable, and at the same
time address some of the key issues in cloud computing discussed in Chapter 4.
According to the current architecture of the cloud, whatever data we want to put into the
cloud is stored with the cloud service provider. What we suggest instead is to keep this
data in a shared directory on the user's hard disk and, rather than storing the actual user
data, to store only the location from which the user subscribes to the service. Whenever a
user signs up for the service, all that the service provider needs to maintain are
attributes such as the MAC address of the customer, along with parameters like the DNS
address, router address and so on, from which the user's system can be tracked even if he
logs into the service via different networks. Whatever data the user needs access to, he
places in a local directory stored on his system, say cloudshare.
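A hypothetical sketch of what the provider would store per subscriber under this design follows: locator attributes only, never the data itself. The field names and values are illustrative assumptions, not a defined schema.

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    """Provider-side record for one subscriber in the proposed design."""
    username: str
    mac_address: str      # stable identifier for the customer's machine
    dns_address: str      # lets the provider reach the machine across networks
    router_address: str
    shared_dir: str = "cloudshare"   # local folder exposing the user's data

sub = Subscription("alice", "aa:bb:cc:dd:ee:ff",
                   "alice-pc.example.net", "203.0.113.1")
```

Because the record holds only addressing metadata, the provider's storage footprint per user stays tiny regardless of how much data sits in the user's shared folder.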
No doubt this system seems very economical in terms of monetary gains, but in the IT
industry we cannot ignore security for the sake of monetary gains. Keeping this in mind, we
have designed the security system of our architecture. As mentioned earlier, the service
provider's database will consist of the username and password of the customer. In order to
provide more secure access to data, customers should create user accounts on their systems
too. Now, whenever any person tries to gain access to data by means of the cloud, he first
needs to know the username and password with which the authentic user signed up with the
service provider; in addition, he needs to know the username and password that give access
to the authentic user's machine. Thus it will be a two-layer authentication system, which
will make the cloud much more secure.
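The two-layer check can be sketched minimally as follows: a request is served only when both the provider-level and the machine-level credentials verify. Plain SHA-256 stands in for proper password storage purely to keep the sketch short, and all account data is illustrative.

```python
import hashlib

def _h(pw: str) -> str:
    """Toy credential digest; a real system would use a salted password hash."""
    return hashlib.sha256(pw.encode()).hexdigest()

provider_accounts = {"alice": _h("cloud-pass")}    # layer 1: service provider
machine_accounts = {"alice": _h("machine-pass")}   # layer 2: user's own system

def authorize(user, cloud_pw, machine_pw):
    if provider_accounts.get(user) != _h(cloud_pw):
        return False    # rejected by the service provider
    if machine_accounts.get(user) != _h(machine_pw):
        return False    # rejected at the data owner's machine
    return True         # both layers passed: serve the shared folder
```

An attacker must therefore compromise two independently administered credential stores, the provider's and the owner's machine, before reaching any data.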
Another key security issue discussed in the earlier section was secure deletion of data
from the cloud. This architecture may also address that concern. Whenever data is erased
from any storage medium, be it the cloud or a local storage device, the deleted data can
often be recovered using various data recovery tools. At times this can be useful, but in
some cases it is a serious issue, especially if the concerned parties are agencies like the
CBI, FBI or RAW. So secure deletion of data from the cloud is important not only from a
security point of view but also from the user privacy perspective. If cloud services are
implemented using the technique mentioned above, we will be able to tackle this issue. In
order to remove data from the cloud, all that a user needs to do is remove the respective
files from the cloud shared folder and either store them in some other part of the disk or
delete them from the disk entirely. In this way, other users will no longer be able to
access the deleted data.
This technique cannot completely replace the existing cloud computing architecture; it is
best suited to the storage-as-a-service model. But even if we implement the same idea in
other models such as IaaS or PaaS, the technique can prove very effective, as it takes
advantage of the reduced storage requirement.
Chapter 6
Conclusion
We considered both public and private clouds and included energy consumption in switching
and transmission as well as data processing and data storage. We discussed the different
cloud computing service models, such as SaaS, IaaS and PaaS. Any future service is likely to include some
combination of each of these service models. Power consumption in transport represents a
significant proportion of total power consumption for cloud storage services at medium and
high usage rates. For typical networks used to deliver cloud services today, public cloud
storage can consume of the order of three to four times more power than private cloud
storage due to the increased energy consumption in transport. Nevertheless, private and
public cloud storage services are more energy efficient than storage on local hard disk drives
when files are only occasionally accessed. However, as the number of file downloads per
hour increases, the energy consumption in transport grows and storage as a service consumes
more power than storage on local hard disk drives. The number of users per server is the
most significant determinant of the energy efficiency of a cloud software service. Cloud
software as a service is ideal for applications that require average frame rates lower than the
equivalent of 0.1 screen refresh frames per second. Significant energy savings are achieved
by using low-end laptops for routine tasks and cloud processing services for computationally
intensive tasks, instead of a midrange or high-end PC, provided the number of
computationally intensive tasks is small. Energy consumption in transport with a private
cloud processing service is negligibly small. Our broad conclusion is that the energy
consumption of cloud computing needs to be considered as an integrated supply chain
logistics problem, in which processing, storage, and transport are all considered together.
Using this approach, we have shown that cloud computing can enable more energy-efficient
use of computing power, especially when the users’ predominant computing tasks are of low
intensity or infrequent. However, under some circumstances, cloud computing can consume
more energy than conventional computing where each user performs all computing on their
own PC. Even with energy-saving techniques such as server virtualization and advanced
cooling systems, cloud computing is not always the greenest computing technology.
References :-
[1] Green Cloud Computing: Balancing Energy in Processing, Storage and Transport, by
Jayant Baliga, Robert W. A. Ayre, Kerry Hinton, and Rodney S. Tucker.
[2] Back to Green, by Anand R. Prasad, Subir Saha, Prateep Misra, Basavaraj Hooli and
Masahide Murakami.
[4] GreenCloud: A New Architecture for Green Data Center, by Liang Liu, Hao Wang,
Xue Liu, Xing Jin, WenBo He, QingBo Wang and Ying Chen.
[5] Guidelines on Security and Privacy in Public Cloud Computing, by Wayne Jansen
and Timothy Grance.
[7] White Paper: 5 Steps to a Successful Green Computing Solution, by Mainline and
IBM.