
Application of Green Cloud Computing for Efficient Resource Energy Management in Data Centres

Abstract
The concept of cloud computing has not only
reshaped the field of distributed systems but also
fundamentally changed how businesses utilize computing
today. Cloud computing offers utility-oriented IT services
to users worldwide. It enables hosting of applications from
consumer, scientific and business domains based on a pay-as-you-go
model. However, data centres hosting cloud computing
applications consume huge amounts of energy, contributing to
high operational costs and large carbon footprints. With energy
shortages and global climate change among the leading concerns
today, the power consumption of data centres has become a key
issue. The area of Green computing is also becoming increasingly
important in a world with limited energy resources and an
ever-rising demand for computational power. Therefore, we need
green cloud computing solutions that can not only save energy,
but also reduce operational costs. In this paper, we present an
architectural framework and principles that provide green
enhancements within a scalable Cloud computing architecture,
together with a resource provisioning and allocation algorithm
for energy-efficient management of cloud computing
environments, to improve the energy efficiency of the data centre.
Using power-aware scheduling techniques, variable resource
management, live migration, and a minimal virtual machine
design, overall system efficiency can be vastly improved in a
data centre based Cloud with minimal performance overhead.

Keywords - Cloud Computing, Green Computing, Virtualization, Energy Efficiency, Resource Management, Scheduling

INTRODUCTION

The vision of computing utilities based on a service provisioning model
anticipated the massive transformation of the entire computing industry,
whereby computing services are readily available on demand. Under this model,
users (consumers) pay providers only when they access the computing
services. In addition, consumers no longer need to invest heavily or encounter
difficulties in building and maintaining complex IT infrastructure. This model
has recently been referred to as Cloud computing, in which users access services
based on their requirements without regard to where the services are hosted.
The term denotes the infrastructure as a “Cloud” from which businesses and users
can access applications as services from anywhere in the world on demand.
Hence, Cloud computing can be classified as a new paradigm for the dynamic
provisioning of computing services, supported by state-of-the-art data centres
that usually employ Virtual Machine (VM) technologies for consolidation and
environment isolation purposes.

Modern data centres, operating under the Cloud computing model, host a
variety of applications ranging from those that run for a few seconds to those
that run for longer periods of time on shared hardware platforms. The need to
manage multiple applications in a data centre creates the challenge of
on-demand resource provisioning and allocation in response to time-varying
workloads. Normally, data centre resources are statically allocated to
applications, based on peak load characteristics, in order to maintain isolation
and provide performance guarantees.
Until recently, high performance has been the sole concern in data centre
deployments, and this demand has been fulfilled without paying much attention
to energy consumption. Data centres are not only expensive to maintain, but
also unfriendly to the environment; they now contribute more to carbon
emissions than both Argentina and the Netherlands. High energy costs and huge
carbon footprints are incurred due to the massive amounts of electricity needed
to power and cool the numerous servers hosted in these data centres.
DATA CENTER

A data center is a facility used to house computer systems and associated
components, such as telecommunications and storage systems. It generally
includes redundant or backup power supplies, redundant data communications
connections, environmental controls (e.g. air conditioning, fire suppression) and
various security devices. A large data center is an industrial-scale operation
using as much electricity as a small town.
HISTORY

Data centers have their roots in the huge computer rooms of the 1940s, typified
by ENIAC, one of the earliest examples of a data center. Early computer
systems, complex to operate and maintain, required a special environment in
which to operate. Many cables were necessary to connect all the components,
and methods to accommodate and organize these were devised such as
standard racks to mount equipment, raised floors, and cable trays (installed
overhead or under the elevated floor). A single mainframe required a great deal
of power, and had to be cooled to avoid overheating. Security became
important – computers were expensive, and were often used
for military purposes. Basic design-guidelines for controlling access to the
computer room were therefore devised.

During the boom of the microcomputer industry, and especially during the
1980s, users started to deploy computers everywhere, in many cases with little
or no care about operating requirements. However, as information
technology (IT) operations started to grow in complexity, organizations grew
aware of the need to control IT resources. The advent of Unix from the early
1970s led to the subsequent proliferation of freely available Unix-compatible
PC operating systems, such as Linux, during the 1990s. These were called
"servers", as timesharing operating systems like Unix rely heavily on the
client-server model to facilitate sharing unique resources between multiple users. The
availability of inexpensive networking equipment, coupled with new standards
for network structured cabling, made it possible to use a hierarchical design that
put the servers in a specific room inside the company. The use of the term "data
center", as applied to specially designed computer rooms, started to gain
popular recognition about this time.
REQUIREMENTS FOR MODERN DATA CENTERS

IT operations are a crucial aspect of most organizational operations around the
world. One of the main concerns is business continuity; companies rely on their
information systems to run their operations. If a system becomes unavailable,
company operations may be impaired or stopped completely. It is necessary to
provide a reliable infrastructure for IT operations, in order to minimize any
chance of disruption. Information security is also a concern, and for this reason
a data center has to offer a secure environment which minimizes the chances of
a security breach. A data center must therefore keep high standards for assuring
the integrity and functionality of its hosted computer environment. This is
accomplished through redundancy of mechanical cooling and power systems
(including emergency backup power generators) serving the data center along
with fiber optic cables.
The Telecommunications Industry Association's Telecommunications
Infrastructure Standard for Data Centers specifies the minimum requirements
for telecommunications infrastructure of data centers and computer rooms
including single tenant enterprise data centers and multi-tenant Internet hosting
data centers. The topology proposed in this document is intended to be
applicable to any size data center.

Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center
Equipment and Spaces, provides guidelines for data center spaces within
telecommunications networks, and environmental requirements for the
equipment intended for installation in those spaces. These criteria were
developed jointly by Telcordia and industry representatives. They may be
applied to data center spaces housing data processing or Information
Technology (IT) equipment. The equipment may be used to:

• Operate and manage a carrier's telecommunication network
• Provide data center based applications directly to the carrier's customers
• Provide hosted applications for a third party to provide services to their customers
• Provide a combination of these and similar data center applications

Effective data center operation requires a balanced investment in both the
facility and the housed equipment. The first step is to establish a baseline
facility environment suitable for equipment installation. Standardization and
modularity can yield savings and efficiencies in the design and construction of
telecommunications data centers.

Standardization means integrated building and equipment engineering.
Modularity has the benefits of scalability and easier growth, even when
planning forecasts are less than optimal. For these reasons, telecommunications
data centers should be planned in repetitive building blocks of equipment, and
associated power and support (conditioning) equipment when practical. The use
of dedicated centralized systems requires more accurate forecasts of future
needs to prevent expensive over-construction or, perhaps worse,
under-construction that fails to meet future needs.

The "lights-out" data center, also known as a darkened or a dark data center, is a
data center that, ideally, has all but eliminated the need for direct access by
personnel, except under extraordinary circumstances. Because of the lack of
need for staff to enter the data center, it can be operated without lighting. All of
the devices are accessed and managed by remote systems, with automation
programs used to perform unattended operations. In addition to the energy
savings, reduction in staffing costs and the ability to locate the site further from
population centers, implementing a lights-out data center reduces the threat of
malicious attacks upon the infrastructure.
ARCHITECTURE OF GREEN CLOUD COMPUTING

People in the IT industry are reassessing data center strategies to determine if
energy efficiency should be added to the list of critical operating parameters.

Issues of concern include:

1. Reducing data center energy consumption, as well as power and cooling costs
2. Managing security and data access more easily and efficiently
3. Keeping critical business processes up and running through power drains or surges
These issues are leading more companies to adopt a Green Computing plan for
business operations, energy efficiency and IT budget management. Green
Computing is becoming recognized as a prime way to optimize the IT
environment for the benefit of the corporate bottom line – as well as the
preservation of the planet. It is about efficiency and power consumption, and
about weighing such issues in business decision-making. Simply stated, Green
Computing benefits the environment and a company’s bottom line. It can be a
win/win situation, meeting business demands for cost-effective, energy-
efficient, flexible, secure and stable solutions, while demonstrating new levels
of environmental responsibility.
A. Cloud:

Cloud computing is becoming one of the most explosively expanding
technologies in the computing industry today. It enables users to migrate their
data and computation to a remote location with minimal impact on system
performance.
These benefits include:
1. Scalable - Clouds are designed to deliver as much computing power as any
user wants.
2. Quality of Service (QoS) - Unlike standard data centres and advanced
computing resources, a well-designed Cloud can deliver a much higher QoS
than is typically possible.
3. Specialized Environment - Within a Cloud, the user can utilize custom tools
and services to meet their needs.
4. Cost Effective - Users pay only for the hardware required for each project.
5. Simplified Interface - Whether using a specific application, a set of tools or
Web services, Clouds provide access to a potentially vast amount of
computing resources in an easy and user-centric way.
Cloud Infrastructure:

Fig. 1. The high-level system architecture.

In the Cloud computing infrastructure, there are four main entities involved:

1. Consumers/Brokers: Cloud consumers or their brokers submit service
requests from anywhere in the world to the Cloud. It is important to note
that there can be a difference between Cloud consumers and users of deployed
services.
2. Green Resource Allocator: Acts as the interface between the Cloud
infrastructure and consumers. It requires the interaction of several components,
such as an energy monitor, a service scheduler, and a VM manager, to support
energy-efficient resource management (a simplified placement sketch follows
this list).
3. VMs: Multiple VMs can be dynamically started and stopped on a single
physical machine to meet incoming requests, enabling consolidation of
workloads onto fewer active servers.
4. Physical Machines: The underlying physical computing servers provide the
hardware infrastructure for creating virtualized resources.
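To make the allocator's role concrete, the following is a minimal sketch of one
common greedy heuristic for energy-efficient VM placement: each incoming VM is
assigned to the host whose estimated power draw would increase the least. The
linear power model, host names, and all figures are illustrative assumptions,
not the specific algorithm proposed in this paper.

```python
# Illustrative sketch: greedy power-aware VM placement.
# All names and figures below are hypothetical.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Host:
    name: str
    cpu_capacity: float   # total CPU units available
    idle_power: float     # watts drawn when idle
    max_power: float      # watts drawn at full utilization
    used: float = 0.0     # CPU units currently allocated

    def power(self, used: float) -> float:
        # Assumed linear power model: P(u) = P_idle + (P_max - P_idle) * u
        utilization = used / self.cpu_capacity
        return self.idle_power + (self.max_power - self.idle_power) * utilization

def place_vm(hosts: List[Host], vm_cpu: float) -> Optional[Host]:
    """Place a VM on the feasible host with the smallest power increase."""
    best, best_delta = None, float("inf")
    for h in hosts:
        if h.used + vm_cpu > h.cpu_capacity:
            continue  # host cannot fit this VM
        delta = h.power(h.used + vm_cpu) - h.power(h.used)
        if delta < best_delta:
            best, best_delta = h, delta
    if best is not None:
        best.used += vm_cpu
    return best

if __name__ == "__main__":
    hosts = [Host("h1", 100, 150, 250), Host("h2", 100, 100, 300)]
    for demand in [30, 20, 60]:
        chosen = place_vm(hosts, demand)
        print(demand, "->", chosen.name if chosen else "rejected")
```

Because placement favours hosts that are cheap to load further, idle machines
tend to stay empty and can be switched off, which is the basic mechanism behind
the consolidation and live-migration techniques discussed in this paper.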
ENERGY USE

Energy use is a central issue for data centers. Power draw for data centers
ranges from a few kW for a rack of servers in a closet to several tens of MW for
large facilities. Some facilities have power densities more than 100 times that of
a typical office building. For higher power density facilities, electricity costs are
a dominant operating expense and account for over 10% of the total cost of
ownership (TCO) of a data center. By 2012 the cost of power for the data center
is expected to exceed the cost of the original capital investment.

Greenhouse gas emissions

In 2007 the entire information and communication technologies (ICT) sector
was estimated to be responsible for roughly 2% of global carbon emissions,
with data centers accounting for 14% of the ICT footprint (i.e. roughly 0.3% of
global emissions). The US EPA estimates that servers and data centers were
responsible for up to 1.5% of total US electricity consumption, or roughly
0.5% of US GHG emissions, for 2007. Given a business-as-usual scenario,
greenhouse gas emissions from data centers are projected to more than double
from 2007 levels by 2020.

Siting is one of the factors that affect the energy consumption and
environmental effects of a data center. In areas where the climate favors
cooling and lots of renewable electricity is available, the environmental
effects will be more moderate. Thus countries with favorable conditions, such
as Canada, Finland, Sweden, Norway and Switzerland, are trying to attract cloud
computing data centers. According to an 18-month investigation by scholars at
Rice University's Baker Institute for Public Policy in Houston and the
Institute for Sustainable and Applied Infodynamics in Singapore, data
center-related emissions will more than triple by 2020.

ENERGY EFFICIENCY

The most commonly used metric to determine the energy efficiency of a data
center is power usage effectiveness, or PUE. This simple ratio is the total power
entering the data center divided by the power used by the IT equipment.
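Written as a formula, with the terms as defined above:

\[
\mathrm{PUE} = \frac{\text{Total Facility Power}}{\text{IT Equipment Power}}
\]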

Total facility power consists of power used by IT equipment plus any overhead
power consumed by anything that is not considered a computing or data
communication device (e.g. cooling, lighting). An ideal PUE is 1.0, for the
hypothetical situation of zero overhead power. The average data center in the
US has a PUE of 2.0, meaning that the facility uses two watts of total power
(overhead + IT equipment) for every watt delivered to IT equipment.
State-of-the-art data center energy efficiency is estimated to be roughly 1.2.
Some large data center operators like Microsoft and Yahoo! have published
projections of PUE for facilities in development; Google publishes quarterly
actual efficiency performance from data centers in operation.
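As a small worked example consistent with the 2.0 U.S. average above (the
wattage figures are assumed for illustration only):

```python
# Computing PUE from facility-level and IT-level power readings.
# The kW figures are illustrative assumptions chosen to match the
# average 2.0 PUE cited above.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

total_kw = 1000.0  # total power entering the data center (IT + cooling, lighting, ...)
it_kw = 500.0      # power delivered to IT equipment

print(f"PUE = {pue(total_kw, it_kw):.2f}")        # PUE = 2.00
print(f"Overhead = {total_kw - it_kw:.0f} kW")    # Overhead = 500 kW
```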

The U.S. Environmental Protection Agency has an Energy Star rating for
standalone or large data centers. To qualify for the ecolabel, a data center must
be within the top quartile of energy efficiency of all reported facilities.

The European Union also has a similar initiative: the EU Code of Conduct for
Data Centres.

SECURITY

Physical security also plays a large role with data centers. Physical access to the
site is usually restricted to selected personnel, with controls including a layered
security system often starting with fencing, bollards and mantraps. Video
camera surveillance and permanent security guards are almost always present if
the data center is large or contains sensitive information on any of the systems
within. The use of finger print recognition mantraps is starting to be
commonplace.
APPLICATIONS
The main purpose of a data center is to run the IT applications that
handle the core business and operational data of the organization. Such systems
may be proprietary and developed internally by the organization, or bought
from enterprise software vendors. Common examples of such applications
are ERP and CRM systems.

A data center may be concerned with just operations architecture, or it may
provide other services as well.

Often these applications will be composed of multiple hosts, each running a
single component. Common components of such applications are databases, file
servers, application servers, middleware, and various others.

Data centers are also used for off-site backups. Companies may subscribe to
backup services provided by a data center. This is often used in conjunction
with backup tapes. Backups can be taken off servers locally onto tapes.
However, tapes stored on site pose a security threat and are also susceptible to
fire and flooding. Larger companies may also send their backups off site for
added security. This can be done by backing up to a data center. Encrypted
backups can be sent over the Internet to another data center where they can be
stored securely.
CARRIER NEUTRALITY

Today many data centers are run by Internet service providers solely for the
purpose of hosting their own and third-party servers.

Traditionally, however, data centers were either built for the sole use of one
large company, or as carrier hotels or network-neutral data centers.

These facilities enable interconnection of carriers and act as regional fiber hubs
serving local business in addition to hosting content servers.
CONCLUSION AND FUTURE WORK

As the prevalence of Cloud computing continues to rise, the need for
power-saving mechanisms within the Cloud also increases. This paper presents a
Green Cloud framework for improving system efficiency in a data center. To
demonstrate the potential of the framework, we presented a new
energy-efficient scheduling approach. In this paper, we have found new ways to
save vast amounts of energy while minimally impacting performance. Not only do
the components discussed in this paper complement each other, they leave space
for future work. Future opportunities could explore a scheduling system that is
both power-aware and thermal-aware, to maximise energy savings both from the
physical servers and from the cooling systems used. Such a scheduler would also
drive the need for better data center designs, both in server placement within
racks and in closed-loop cooling systems integrated into each rack. While a
number of Cloud techniques are discussed in this paper, there is a growing need
for improvements in Cloud infrastructure, in both the academic and commercial
sectors.
