
Future Generation Computer Systems 79 (2018) 715–725

Contents lists available at ScienceDirect

Future Generation Computer Systems


journal homepage: www.elsevier.com/locate/fgcs

Energy modeling in cloud simulation frameworks


Antonios T. Makaratzis, Konstantinos M. Giannoutakis *, Dimitrios Tzovaras
Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki 57001, Greece

highlights

• A survey on modeling of open source cloud simulation frameworks is conducted.


• Energy consumption models of CPUs, memory, storage and network are examined.
• Experimentation with four cloud simulation platforms is performed.
• Comparisons of the models and the impact of the hardware components on the total energy consumption are examined.

article info

Article history:
Received 28 February 2017
Received in revised form 19 May 2017
Accepted 16 June 2017
Available online 24 June 2017

Keywords:
Cloud computing
Cloud simulators
Modeling
Energy consumption
Data center

abstract

Research on cloud simulators has been quite intensive in recent years, mainly because the need for powerful computational resources has led organizations to use cloud resources instead of acquiring and maintaining private servers. In order to test and optimize the strategies that are applied to cloud resources, cloud simulators have been developed, since the cost of simulation is substantially smaller than that of experimenting on real cloud environments. Several cloud simulation frameworks have been proposed during the last years, focusing on various components of the cloud resources. In this paper, a survey on cloud simulators is conducted, in order to examine the different models that have been used for the hardware components that constitute a cloud data center. Focus is given to the energy models that have been proposed for the prediction of the energy consumption of data center components, such as CPU, memory, storage and network, while experiments are performed in order to compare the different power models used by the simulation frameworks. The following cloud simulation frameworks are considered: CloudSched, CloudSim, DCSim, GDCSim, GreenCloud and iCanCloud.

© 2017 Elsevier B.V. All rights reserved.

1. Introduction

Cloud computing is an emerging technology, due to the flexibility of using powerful computing resources without the need for in-house infrastructures. Organizations are shifting from traditional in-house computing towards cloud computing solutions, as they provide higher reliability, flexibility, broad network access, on-demand usage, security, etc. As this transition evolves, the evaluation of several aspects of cloud infrastructures becomes of great importance. Cloud simulation tools have been proposed during the last years, providing experimentation platforms for evaluating different strategies of cloud resource usage. Simulation saves the expense of experimenting on real cloud infrastructures, since a comprehensive study of the whole problem on real cloud resources is extremely difficult. Thus, cloud simulators provide simplicity and repeatability of experiments, and reduce experimentation costs.

Energy consumption of cloud data centers is an important factor, since the amount of consumed energy is increasing, mainly due to non-energy-aware usage of cloud infrastructures. It has been reported that about 0.5% of the worldwide energy consumption concerned cloud data centers, and this is expected to quadruple by 2020, [1]. For this reason, cloud simulation frameworks have introduced energy models in order to develop strategies for optimizing the usage of cloud resources. This paper focuses on the energy consumption modeling of data centers, and more specifically on models for the simulation of the CPU, network, memory and storage components. The most well-known open source cloud simulation frameworks were considered with their corresponding models, while experimentation was performed on the impact of the models on the total energy consumption of a cloud data center.

Many cloud simulation surveys have been performed during the last years. Most of them discuss the architecture, the main concepts and functionality of each tool, while some of them provide comparisons of the high-level and technical features of each simulator, [2–6]. In [7], a survey was performed on various cloud simulators

* Corresponding author.
E-mail addresses: antomaka@iti.gr (A.T. Makaratzis), kgiannou@iti.gr (K.M. Giannoutakis), Dimitrios.Tzovaras@iti.gr (D. Tzovaras).

http://dx.doi.org/10.1016/j.future.2017.06.016
0167-739X/© 2017 Elsevier B.V. All rights reserved.

containing some energy models, though comparisons were made only regarding the performance and the memory requirements of the cloud simulators, and not the effect of the models on the total energy consumption of a cloud data center.

The remainder of the paper is structured as follows: In Section 2, an introduction to the cloud simulation frameworks that are considered in this survey is given, while in Section 3 the models of the cloud simulators are presented. In Section 4, experimentation on the cloud simulation platforms is conducted, in order to compare the proposed energy consumption models. Recapitulation and conclusions are presented in Section 5.

2. Cloud simulation frameworks

In this section, a brief introduction of the well-known open source cloud simulation frameworks is given in alphabetical order, namely CloudSched, CloudSim, DCSim, GDCSim, GreenCloud and iCanCloud. In Table 1 the basic characteristics of the examined simulators are presented.

2.1. CloudSched

CloudSched, [7], is a discrete event simulator that was developed in order to assist users to identify and explore appropriate solutions considering different resource scheduling algorithms. Unlike traditional scheduling algorithms, which consider only one factor such as CPU, CloudSched treats multi-dimensional resources such as CPU, memory and network bandwidth in an integrated way, for both physical and virtual machines and for different scheduling objectives. CloudSched offers features such as modeling and simulation of large scale cloud computing environments, modeling of different resource scheduling policies and algorithms at the IaaS layer for clouds, and both graphical and textual outputs.

2.2. CloudSim

CloudSim, [8], is the most used toolkit for modeling and simulation of Infrastructure as a Service (IaaS) cloud computing environments. It is a discrete event simulator that provides a virtualization engine with extensive features for modeling the life-cycle management of virtual machines in a data center, including policies for provisioning of virtual machines to hosts, scheduling of resources of hosts among virtual machines, scheduling of tasks in virtual machines, and modeling of costs incurring in such operations. A model of the Dynamic Voltage and Frequency Scaling (DVFS) technique has also been developed and incorporated in the CloudSim simulator, since this simulator contains abstractions for representing distributed Cloud infrastructures and power consumption, [9]. NetworkCloudSim, [10], is an extension of CloudSim which considers communication costs between VMs performing parallel computations. NetworkCloudSim has a scalable network and generalized application model, which allows more accurate evaluation of scheduling and resource provisioning policies, in order to optimize the performance of a cloud infrastructure.

2.3. DCSim

DCSim, [11,12], is a discrete event simulator that models a virtualized data center providing IaaS to multiple tenants, with a focus on interactive workloads such as web applications. It can model replicated VMs sharing incoming workload, as well as dependencies between VMs that are part of a multi-tiered application. It also provides metrics to gauge SLA violations, power consumption, and other performance metrics that serve to evaluate a data center management approach or system. Furthermore, DCSim is designed to be easily extended for implementing new features and functionality. The work in [13] extends DCSim by adding data center organization components such as racks and clusters, and proposes new built-in features, as well as new modifications to the underlying simulator to provide more flexibility for further extensions.

2.4. GDCSim

GDCSim (Green Data Center Simulator), [14], was proposed for studying the energy efficiency of data centers under various data center geometries, workload characteristics, platform power management schemes and scheduling algorithms. It aims to simulate both the management and physical design of a data center by examining their interactions and relationships. The purpose is to fine-tune the interactions between management algorithms and the physical layout of the data center, such as thermal and cooling interactions with workload placement, [11]. GDCSim is developed as a part of the BlueTool infrastructure project at Impact Lab, [14].

2.5. GreenCloud

GreenCloud, [15–17], is built on top of the NS2 network simulator and is focused on simulating the communication between processes running in a cloud at packet level. Being built on top of NS2, it implements a full TCP/IP protocol reference model, which allows the integration of different communication protocols with the simulation. This limits its scalability to only small data centers, due to the large simulation time and high memory requirements, [10]. It also provides a detailed modeling and analysis of the energy consumed by the elements of the network: servers, routers and the links between them. GreenCloud includes two methods of energy reduction: DVS (Dynamic Voltage Scaling), which decreases the voltage of switches, and DNS (Dynamic Network Shutdown), which allows switches to be shut down when possible, [9]. GreenCloud also provides plugins that allow the use of physical layer traces that make experiments more detailed.

2.6. iCanCloud

iCanCloud, [18], is a discrete event simulator that provides features such as flexibility, scalability, performance and usability. The iCanCloud simulator has been built to provide a set of advanced features: to conduct large experiments, to provide a flexible and fully customizable global hypervisor for integrating any cloud brokering policy, to reproduce the instance types provided by a given cloud infrastructure and to provide a user-friendly GUI for configuring and launching simulations. A basic characteristic of iCanCloud is that it supports the execution of parallel simulations, so one experiment can be executed spanning several distributed machines. Additionally, iCanCloud utilizes the E-mc2 framework, [19], for advanced analysis of energy modeling in cloud computing. Aspects like modeling the behavior of different simulated users, support of resource provisioning policies and configuration of different cloud scenarios are achieved using E-mc2.

3. Modeling of cloud simulation frameworks

The proposed models of the cloud simulation frameworks are presented in this section. For each simulator, the CPU, network, memory, storage and energy models are briefly described. For CloudSched, multidimensional load balancing is also included as an additional subsection.

Table 1
Basic characteristics of cloud simulators.

CloudSched — Characteristics and limitations:
• Focus on comparing different resource scheduling algorithms in IaaS
• Provides a uniform view of all resources
• Simulation of thousands of requests in a few minutes
• GUI support
• Does not provide a ready-to-use environment for execution

CloudSim — Characteristics and limitations:
• Complex scheduling support
• Supports time-shared and space-shared provisioning policies
• Limited network model (only transmission delay)
• Extension of network model in NetworkCloudSim
• Support for energy models (first time in version 3.0)
• No TCP/IP support
• No GUI support
• No MPI and workflow applications support

DCSim — Characteristics and limitations:
• Focused on a virtualized data center providing IaaS to multiple tenants
• Provides a detailed application/service model
• No TCP/IP support
• Developed for transactional, continuous workloads
• Reports a set of metrics that determine the behavior of the simulated data center
• Generates a log for creating statistics

GDCSim — Characteristics and limitations:
• Developed to study the energy efficiency of data centers
• Integrated simulation: physical data center description and management description
• Real time simulation of management decisions
• Workload & power management
• Thermal analysis
• Consideration of cyber–physical interdependency (temperature, air flow, CRAC)

GreenCloud — Characteristics and limitations:
• Extension of ns-2 network simulator
• Detailed modeling of communication aspects of the data center
• Support of Computationally Intensive, Data Intensive and Balanced workloads
• Simplistic applications model
• Implementation of a full TCP/IP protocol model
• Limited GUI support (via Nam for visualizing the output)
• Energy models (servers + network)

iCanCloud — Characteristics and limitations:
• Extension of OMNET++ network simulator
• Support of various cloud computing architectures
• Support of parallel processing
• Support of a wide range of storage systems configuration models
• Implementation of detailed modeling of communication aspects of data center
• GUI support
• Easy to add new modules
• Energy model available in E-mc2

3.1. CloudSched

3.1.1. CPU modeling

CPU capacity is measured in GHz and is used as a metric for the allocation of VMs to physical servers. The CPU utilization of the ith CPU of a data center ($C_i^u$) is defined as the average value of the measured CPU utilization during an observed period of time. The average (denoted as A) utilization of all CPUs in a data center can then be defined as, [7]:

$C_u^A = \dfrac{\sum_{i}^{N} C_i^u C_i^n}{\sum_{i}^{N} C_i^n}$,  (1)

where N is the total number of physical servers in a cloud data center and $C_i^n$ the total number of CPUs of server i. By using these two metrics, the imbalance value of all CPUs in a data center is defined as, [7]:

$IBL_C = \sum_{i}^{N} \left( C_i^u - C_u^A \right)^2$.  (2)

A linear CPU power consumption model is also proposed, [7], in which the CPU utilization is considered proportional to the overall system load, [7,20]:

$P(u) = k P_{max} + (1 - k) P_{max} u$,  (3)

where $P_{max}$ is the maximum power consumption when the server is fully utilized, k is the fraction of power consumed by an idle server (according to studies it is about 0.7) and u is the CPU utilization. The total energy consumption of a CPU server is then computed as the integral of the power consumption function over a period of time, [7]:

$E_i = \int P(u(t)) \, dt$,  (4)

while the total energy consumption of a cloud data center can be computed as the sum of the energy consumed by all the CPU servers. Additionally, if utilization is considered constant over time, the average CPU utilization of the data center $u_{mean}$ can be used, and thus the energy consumption of each host can be computed as follows:

$E_i = (t_1 - t_0) P(u_{mean})$.  (5)
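The CloudSched metrics and linear power model above can be sketched in a few lines. The following Python fragment is purely illustrative (CloudSched itself is not written in Python); the server counts, utilization values and the 250 W peak power are hypothetical, while k = 0.7 follows the idle fraction stated in the text:

```python
def avg_cpu_utilization(utils, num_cpus):
    """Eq. (1): CPU-count-weighted average utilization over all servers."""
    return sum(u * n for u, n in zip(utils, num_cpus)) / sum(num_cpus)

def cpu_imbalance(utils, num_cpus):
    """Eq. (2): sum of squared deviations from the data center average."""
    avg = avg_cpu_utilization(utils, num_cpus)
    return sum((u - avg) ** 2 for u in utils)

def power(u, p_max, k=0.7):
    """Eq. (3): linear model; k is the idle power fraction (about 0.7)."""
    return k * p_max + (1 - k) * p_max * u

def energy(u_mean, p_max, t0, t1, k=0.7):
    """Eq. (5): energy over [t0, t1] for a constant average utilization."""
    return (t1 - t0) * power(u_mean, p_max, k)

utils, cpus = [0.2, 0.5, 0.8], [4, 8, 8]   # hypothetical 3-server data center
print(avg_cpu_utilization(utils, cpus))    # 0.56
print(power(0.5, 250.0))                   # 212.5 W at 50% load, Pmax = 250 W
print(energy(0.5, 250.0, 0.0, 3600.0))     # 765000.0 J over one hour
```

Note how the idle fraction dominates: even at 50% utilization the server draws 85% of its peak power, which is why consolidation strategies that switch idle hosts off save so much energy.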

3.1.2. Network modeling

Network is modeled using a set of metrics similar to those proposed in CPU modeling. The average utilization of the network bandwidth (measured in Mbps) of the ith server of a data center ($N_i^u$) is defined as the average value of the measured network bandwidth utilization during an observed period of time. The average utilization of the network bandwidth in a data center is defined as:

$N_u^A = \dfrac{\sum_{i}^{N} N_i^u N_i^n}{\sum_{i}^{N} N_i^n}$,  (6)

where N is the total number of physical servers in the cloud data center and $N_i^n$ the total network bandwidth of the ith server. By using these two metrics, the imbalance value of the network bandwidth in a data center is defined as:

$IBL_N = \sum_{i}^{N} \left( N_i^u - N_u^A \right)^2$.  (7)

It should be noted that the network impact is not considered in the computation of the energy consumption of the data center.

3.1.3. Memory modeling

Similarly, the average utilization of the data center memory (measured in GB) is defined as:

$M_u^A = \dfrac{\sum_{i}^{N} M_i^u M_i^n}{\sum_{i}^{N} M_i^n}$,  (8)

where $M_i^n$ is the total memory of server i. The imbalance value of the memory in a data center is defined as:

$IBL_M = \sum_{i}^{N} \left( M_i^u - M_u^A \right)^2$.  (9)

The impact of memory is not considered in the computation of the energy consumption of the data center.

3.1.4. Storage modeling

Storage is not considered in CloudSched, thus no models or metrics have been proposed.

3.1.5. (Multidimensional) Load balancing modeling

A load balancing strategy has been proposed, which treats multidimensional resources such as CPU, memory and network bandwidth in an integrated way for both physical machines and VMs, [7]. The integrated load imbalance value ILB of the ith server is applied to indicate the load imbalance level, by comparing the CPU utilization, the memory and the network bandwidth of a server, and is defined as, [7]:

$ILB_i = \dfrac{\left( Avg_i - C_u^A \right)^2 + \left( Avg_i - M_u^A \right)^2 + \left( Avg_i - N_u^A \right)^2}{3}$,  (10)

where

$Avg_i = \dfrac{C_i^u + M_i^u + N_i^u}{3}$.  (11)

The total imbalance value of all servers in a cloud data center is defined as:

$IBL_{tot} = \sum_{i}^{N} ILB_i$,  (12)

and the average imbalance value of the ith physical machine (PM) is defined as:

$IBL_{avg}^{PM} = \dfrac{IBL_{tot}}{N}$.  (13)

The average imbalance value of a Cloud Data Center (CDC) is then defined as:

$IBM_{avg}^{CDC} = \dfrac{IBL_C + IBL_M + IBL_N}{N}$.  (14)

3.2. CloudSim

3.2.1. CPU modeling

CPU capacity in CloudSim is measured in MIPS and is used as a metric for the allocation of VMs to physical machines. CloudSim supports time-shared or space-shared policies for application execution on CPUs, while in [8] performance models are given for the estimation of the finish time of a task under the two policies. A performance model for the estimation of the finish time of a task p under the space-shared policy is given by, [8]:

$eft(p) = est(p) + \dfrac{rl}{capacity \times cores(p)}$,  (15)

where est(p) is the estimated start time, rl is the total number of instructions required by the task, cores(p) is the number of cores required by the task and capacity is the total computing capacity (MIPS) of a host having np processing elements, computed as follows, [8]:

$capacity = \dfrac{\sum_{i=1}^{np} cap(i)}{np}$,  (16)

where cap(i) is the computing capacity of the individual processing elements. The estimated start time depends on the position of the cloud task in the execution queue, since each processing unit is used exclusively by one task. Under the time-shared policy, the performance model for the estimation of the finish time of a cloud task managed by a VM is given by, [8]:

$eft(p) = ct + \dfrac{rl}{capacity \times cores(p)}$,  (17)

where ct is the current simulation time and capacity is computed as follows, [8]:

$capacity = \dfrac{\sum_{i=1}^{np} cap(i)}{\max\left( \sum_{i=1}^{cloudlets} cores(i),\ np \right)}$,  (18)

where cloudlets is the number of cloud tasks. If a VM request exceeds the available CPU capacity, then it is simply rejected, [8].

For the computation of the energy consumption of physical CPU servers, CloudSim provides an abstract implementation called ''Power Model'' which can be extended in order to support different power consumption models, [8]. In recent releases of CloudSim, the provided models are the following:

Linear model:

$P(u) = P_{idle} + (P_{max} - P_{idle}) u$,  (19)

Square model:

$P(u) = P_{idle} + (P_{max} - P_{idle}) u^2$,  (20)

Cubic model:

$P(u) = P_{idle} + (P_{max} - P_{idle}) u^3$,  (21)

Square root model:

$P(u) = P_{idle} + (P_{max} - P_{idle}) \sqrt{u}$,  (22)

Linear interpolation model:

$P(u) = P(u_1) + (P(u_2) - P(u_1)) \dfrac{u - u_1}{u_2 - u_1}$,  (23)
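The five CloudSim power models of Eqs. (19)–(23) can be sketched as follows. This Python fragment is illustrative (CloudSim itself is a Java toolkit); the Pidle/Pmax values and the three-point measurement table stand in for real SPEC-style data:

```python
def linear(u, p_idle, p_max):
    return p_idle + (p_max - p_idle) * u            # Eq. (19)

def square(u, p_idle, p_max):
    return p_idle + (p_max - p_idle) * u ** 2       # Eq. (20)

def cubic(u, p_idle, p_max):
    return p_idle + (p_max - p_idle) * u ** 3       # Eq. (21)

def sqrt_model(u, p_idle, p_max):
    return p_idle + (p_max - p_idle) * u ** 0.5     # Eq. (22)

def interpolated(u, table):
    """Eq. (23): linear interpolation between measured (u_i, P(u_i)) pairs."""
    for (u1, p1), (u2, p2) in zip(table, table[1:]):
        if u1 <= u <= u2:
            return p1 + (p2 - p1) * (u - u1) / (u2 - u1)
    raise ValueError("utilization outside measured range")

# Hypothetical measurements at 0%, 50% and 100% load:
table = [(0.0, 90.0), (0.5, 160.0), (1.0, 210.0)]
print(linear(0.5, 90.0, 210.0))     # 150.0 W
print(interpolated(0.25, table))    # 125.0 W
```

The comparison at 50% load shows why the model choice matters: the same server is predicted to draw 150 W (linear), 120 W (square) or about 175 W (square root), while the interpolation model tracks whatever the measurements say.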

where $u_1$ and $u_2$ are utilization values at which power consumption measurements have been performed, such that:

$u_1 < u < u_2$.  (24)

The required inputs for the first four models are the maximum power value $P_{max}$ and the idle power value $P_{idle}$ of the CPU server, while the last model computes power consumption values using data from real measurements, such as the SPEC benchmark, [21]. Although the last model gives more accurate results in comparison with the first four models, such measurements are not available for every possible server. Thus, the first four models are used for servers for which there are no published measurements. The energy consumed by a host is then computed as the integral of power over a period of time.

3.2.2. Network modeling

Network is simulated using a latency matrix. This matrix contains the time that is required for the transmission of a message from one entity to another. The topology information is generated using the BRITE topology generator, [8]. The NetworkCloudSim extension, [10], introduces a network model that also considers the impact of bandwidth on transmission delay, and also considers three types of switches (root, aggregate and edge level), in order to simulate more realistic network topologies (instead of the CloudSim assumption that each VM is connected with all the others). The required time for the transmission of a message is given by, [10]:

$delay = latency + \dfrac{size}{avail\_bandwidth}$,  (25)

where size is the number of bytes of the message and latency is the network latency given by the latency matrix. This equation is used only for directly connected entities, such that the total delay occurs as the sum of all the delays during the transmission. In the case of multiple transmissions the bandwidth is shared equally between the users, [10]. The energy consumption of the network is not considered in CloudSim.

3.2.3. Memory modeling

The memory capacity in CloudSim is measured in MBs and is used for the allocation of VMs to physical machines, such that the total amount of memory required by the VMs is always smaller than the total available memory of the physical machine. The impact of memory on the energy consumption of the data center is not considered in CloudSim.

3.2.4. Storage modeling

Similarly, storage capacity in CloudSim is measured in MBs and is used for the allocation of VMs to physical machines. A storage model is also provided for the computation of the time that is needed for finding a file, and then storing, retrieving or deleting it. This model uses two parameters, the disk transfer rate and the average seek time. For the seek time a random sample is generated considering the average seek time. The seek time is then computed as follows, [8]:

$seekTime = randomSample + \dfrac{filesize}{capacity}$,  (26)

where capacity represents the capacity of the hard disk drive and filesize is the size of the file. The total transaction time is:

$totalTransactionTime = seekTime + transactionTime$.  (27)

A storage model is also provided based on the Storage Area Network (SAN) architecture. In this model two additional attributes are taken into account, the bandwidth and the network latency. In addition to the seek and transfer time that is computed by the aforementioned equations, the network impact of the file transactions is given by the following equation:

$networkImpactTime = networkLatency + \dfrac{filesize}{bandwidth}$.  (28)

The impact of storage on the energy consumption of the data center is also not considered in CloudSim.

3.3. DCSim

3.3.1. CPU modeling

The CPU in DCSim is treated as a single scalar value, with the total CPU capacity being the sum of the capacity of all of the CPU cores available on the host. A CPU scheduler is responsible for scheduling the execution of VMs on the hosts. DCSim reports a number of metrics for the CPU modeling, such as the overall utilization of the data center (the percentage of total CPU capacity in the data center that was used), the active hosts (the minimum, maximum and average number of hosts that were activated), the host-hours (the combined total active time of each host in the simulation) and the active host utilization (the CPU utilization of all hosts that were activated), [12]. DCSim provides two different energy consumption models for CPU servers, which are similar to the linear and the linear interpolation models of CloudSim.

3.3.2. Network modeling

Network elements are connected through links, which have predefined bandwidth capacity, while network elements can be either network cards or switches, [13]. Two types of networks are considered in this approach, the data network which is used for the communication between VMs, and the management network which is used for the internal management of the data center, [13]. The impact of the network on the energy consumption of the data center is not considered in the DCSim simulator.

3.3.3. Memory and storage modeling

Memory and storage capacity are managed by the DCSim Resource Manager, which is responsible for the allocation of resources to VMs. The memory utilization of the active hosts and the data center are also computed during the simulation. The impact of memory and storage on the energy consumption of the data center is not considered.

3.4. GDCSim

3.4.1. CPU modeling

A performance model for computing response times has been proposed in GDCSim, [22]. Two types of jobs were considered, the event based and the time discretized. In the event based, a queue of events is maintained and metrics like the job arrival, start and completion time are used. In time discretized jobs, job arrival is broken into blocks of time and a probability distribution is used inside individual blocks for the computation of performance metrics such as average arrival frequency and average service time, [23]. A generic performance model is presented in [22], which assumes that performance depends on the incoming rate of jobs (λ) and the number of available servers (n):

$Performance = f(\lambda, n)$.  (29)

A model for the computation of the average response time of a server was also provided, [22,24]:

$t_{res} = \dfrac{\lambda}{\mu(\mu - \lambda)} \cdot \dfrac{C_{arrival}^2 + C_{service}^2}{2}$,  (30)

where µ is the servicing capacity, λ is the workload arrival rate, and $C_{arrival}$ and $C_{service}$ are the coefficients of variation of the arrival and service times, respectively.
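Eq. (30) is straightforward to evaluate. The sketch below uses illustrative rates (40 jobs/s arriving at a server that can service 50 jobs/s); with unit coefficients of variation the expression reduces to the familiar M/M/1 queueing delay:

```python
def response_time(lam, mu, c_arrival, c_service):
    """Eq. (30): t_res = lam / (mu * (mu - lam)) * (C_a^2 + C_s^2) / 2.

    Requires lam < mu, otherwise the queue grows without bound.
    """
    if lam >= mu:
        raise ValueError("unstable system: arrival rate must be below capacity")
    return lam / (mu * (mu - lam)) * (c_arrival ** 2 + c_service ** 2) / 2

# 40 jobs/s against a 50 jobs/s server, exponential arrivals and service
# (both coefficients of variation equal to 1, i.e. the M/M/1 case):
print(response_time(40.0, 50.0, 1.0, 1.0))  # 0.08 s of queueing delay
```

The denominator µ(µ − λ) makes the delay blow up as utilization approaches 1, which is the behavior GDCSim's management schemes trade off against the energy saved by running fewer servers.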

3.4.2. Memory, network and storage modeling

Memory and network in GDCSim are considered as resources that are shared between the servers of the data center, [22]. No reference to storage was made in [14,22].

3.4.3. Energy modeling

The provided power model computes the total power consumed by each server, and takes into account information such as the server models, c-states, p-states, t-states and utilization levels for time periods. This information is provided using a resource utilization matrix. The power model outputs a power consumption distribution vector of the form $P = \{P_1, P_2, \ldots, P_N\}$, where N is the number of servers. A thermodynamic model is also provided, which computes the inlet and outlet temperature vectors $T_{in} = \{T_{in}^1, T_{in}^2, \ldots, T_{in}^N\}$ and $T_{out} = \{T_{out}^1, T_{out}^2, \ldots, T_{out}^N\}$ of each server. These are computed for the time periods and the temperatures of each server are computed by, [22,25]:

$T_{out} = T_{sup} + \left( K - A^T K \right)^{-1} P$,  (31)

$T_{in} = T_{out} + K^{-1} P$,  (32)

where $T_{sup}$ is the computer room air-conditioning supply temperature, K is the matrix of heat capacity of air through each chassis and A is a heat recirculation matrix. Two different cooling models are provided for computing the power needed for cooling, the dynamic cooling model and the constant cooling model. The first considers high and low modes of operation, in which a temperature threshold is considered and the power consumed for cooling each server is either $P_{high}$ or $P_{low}$. The second considers a constant power for removing all the heat of the data center and does not take into account the temperature vectors, [22].

3.5. GreenCloud

3.5.1. CPU modeling

A CPU power consumption model is proposed, [17], which considers a cubic relationship between frequency and CPU power consumption:

$P = P_{fixed} + P_f f^3$,  (33)

where $P_{fixed}$ is the power consumption which does not scale with the frequency f, while $P_f$ is the frequency-dependent CPU power consumption. In [15], a linear power consumption model of a server was proposed, which scales with the offered CPU load:

$P_s(l) = P_{idle} + \dfrac{P_{max} - P_{idle}}{2} \left( 1 + l - e^{-\frac{l}{a}} \right)$,  (34)

where $P_{idle}$ is the power consumption when the CPU is idle, $P_{max}$ is the power consumed at peak load, l is the server load, and a is the utilization level at which the server attains asymptotic power consumption. Taking into account DVFS power management, which scales the operating frequency when a server is underutilized, [26], Eq. (34) can be rewritten as follows, [15]:

$P_s(l) = P_{fixed} + \dfrac{P_{peak} - P_{fixed}}{2} \left( 1 + l^3 - e^{-\frac{l^3}{a}} \right)$.  (35)

3.5.2. Network modeling

GreenCloud offers a detailed modeling of the data center network. It allows the simulation of protocols such as IP, TCP and UDP, while messages are fragmented into packets which are bounded in size by the Maximum Transmission Unit (MTU) of the network, [17]. The GreenCloud simulator also considers the power consumption of the network switches in the total power consumption of the data center. A power consumption model for network switches is proposed, which considers the power consumption of the switch chassis and line cards constant over time, and the power consumption of network ports to scale with the volume of the forwarded traffic, [15,27,28]:

$P_{switch} = P_{chassis} + n_c P_{linecard} + \sum_{r=1}^{R} n_p^r P_p^r u_p^r$,  (36)

where $P_{chassis}$ is the power consumed by the switch chassis, $P_{linecard}$ is the power consumed by a single line card, $n_c$ is the number of line cards plugged into the switch, $P_p^r$ is the power consumed by a port running at rate r, $n_p^r$ is the number of ports operating at rate r and $u_p^r \in [0, 1]$ is the port utilization, which is given by, [15]:

$u_p = \dfrac{1}{T} \int_t^{t+T} \dfrac{B_p(t)}{C_p} \, dt = \dfrac{1}{T C_p} \int_t^{t+T} B_p(t) \, dt$,  (37)

where $B_p(t)$ is the instantaneous throughput at the port's link at time t, $C_p$ is the link capacity, and T is a measurement interval. The simulator provides the ability to insert the active and the idle power consumption of the network interface cards. The network interface cards consume the idle power when not used and the active power when they are used.

3.5.3. Memory and storage modeling

GreenCloud considers the impact of memory and storage on the energy consumption of the host. The simulator provides the ability to insert the idle and the maximum power consumption of the memory and storage components, which consume the idle power when they are not used and the maximum power when they are utilized.

3.5.4. Total energy of the data center

GreenCloud provides a detailed power analysis of data centers, while the provided models can operate at the packet level as well. The total energy consumption of the data center is computed as the sum of the energy consumed by the network switches and the servers, while the consumed energy of computing servers includes the memory, hard disks and network interface cards consumption.

3.6. iCanCloud

3.6.1. CPU modeling

The CPU power consumption of iCanCloud is modeled using the linear equation, [19,29]:

$P_{CState} = \dfrac{u}{100} (P_{max} - P_{idle}) + P_{idle}$,  (38)

where u is the percentage processor utilization rate, $P_{max}$ is the maximum power consumption and $P_{idle}$ is the minimum power consumption of the CPU state. The energy consumption of each CPU state is given as the integral of $P_{CState}$ over the time period that the CPU state was activated, while the total energy consumption is computed as the sum of the energy consumption values of each CPU state as follows:

$E_{CState} = \int_{t_0}^{t_{State}} P_{CState} \, dt$,  (39)

$E_C = \sum_{k=1}^{States} E_{CState}$,  (40)

where States is the number of states and $t_{State}$ is the recorded accumulated time for each state.
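The GreenCloud switch model of Eq. (36) sums a fixed chassis and line-card cost with a traffic-dependent per-port term. The sketch below is illustrative (GreenCloud itself is an NS2 extension); the wattages, line-card count and port mix are hypothetical:

```python
def switch_power(p_chassis, p_linecard, n_linecards, ports_by_rate):
    """Eq. (36): ports_by_rate is a list of (P_port_r, n_ports_r, u_r)
    tuples, one per supported rate r, with u_r in [0, 1]."""
    port_power = sum(p_r * n_r * u_r for p_r, n_r, u_r in ports_by_rate)
    return p_chassis + n_linecards * p_linecard + port_power

# A hypothetical switch with 2 line cards and two port rates:
ports = [(0.5, 48, 0.3),   # 48 x 1 Gbps ports at 30% average utilization
         (5.0, 4, 0.6)]    # 4 x 10 Gbps ports at 60% average utilization
print(switch_power(150.0, 40.0, 2, ports))  # 249.2 W
```

With these numbers the fixed chassis and line-card terms account for 230 W of the 249.2 W total, which is why DNS (shutting whole switches down) saves far more in GreenCloud than throttling individual ports.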

3.6.2. Network modeling
iCanCloud provides a detailed network model by utilizing the INET1 framework, which includes network protocols such as TCP and UDP, and provides modules such as routers and switches. Thus, iCanCloud offers the functionality for building a wide range of networks, though the main drawback is that this high level of accuracy impacts the performance of the simulations, [18]. The power consumption of the network is also considered. The simulator supports network interface card power states (‘‘network on’’ and ‘‘network off’’ by default), where the power consumption of each state can be inserted. The energy consumption of each state of a network interface card is computed as the integral of the consumed power over the time period that the state was activated, while the total energy consumption of a network interface card is computed as the sum of the energy consumption values of each state.

3.6.3. Memory modeling
The memory manager of iCanCloud is responsible for managing, allocating and freeing memory for applications, as well as managing cache systems for disk data. The energy consumption of memory is also considered. The simulator supports memory power states (‘‘memory off’’, ‘‘memory idle’’, ‘‘memory read’’, ‘‘memory write’’ and ‘‘memory search’’ by default), where the power consumption of each state can be inserted. The energy consumption of each memory state is computed as the integral of the power consumption over the time period that the state was activated, while the total energy consumption of the memory is computed as the sum of the energy consumption values of each memory state as follows:

E_MState = ∫_{t_0}^{t_State} P_MState dt,  (41)

E_M = Σ_{k=1}^{States} E_MState,  (42)

where States is the number of states and t_State is the recorded accumulated time for each state.

3.6.4. Storage modeling
The storage system considers three components: the File system, the Volume manager and the Disk drive. The File system is used for creating a list of blocks that contain the data that applications have requested, the Volume manager is used for receiving the list of blocks from the File system and redirecting them to the disk that contains them, while the Disk drive is used for computing the time needed for processing the data block operations that came from the volume manager, [18]. The impact of storage on energy consumption is also considered. The energy consumed by a hard disk drive (E_DD) servicing N requests, with I idle periods, is given by, [19]:

E_DD = Σ_{k=0}^{I} E_idle + Σ_{k=0}^{N} E_Active,  (43)

where the energy during the active state of the hard disk (E_Active) is given by, [19]:

E_Active = P_Active · (S/B),  (44)

and the energy consumed when the hard disk is in the idle state is given by, [19]:

E_idle = ∫_{t_0}^{t_idle} P_idle dt,  (45)

where S is a request's file size, B is the bandwidth (thus S/B gives the time period the disk remained active for that request), and t_0 to t_idle is the time period that the disk remained in the idle state. iCanCloud supports storage power states (‘‘disk off’’, ‘‘disk IO’’, ‘‘disk active’’ and ‘‘disk idle’’ by default), where the power consumption of each state can be inserted. The energy consumption of each storage state is computed as the integral of the power consumption over the time period that the state was activated, while the total energy consumption of the storage is computed as the sum of the energy consumption values of each of the storage states.

3.6.5. Total energy of data center
The iCanCloud framework also considers the energy lost by the power supply units (E_PSU) during the transformation of AC to DC, which is given by, [19]:

E_PSU = ∫_{t_0}^{t_PSU} (100 · P_HW / %efficiency − P_HW) dt,  (46)

where P_HW is the power consumption of the hardware components (E_C, E_M, E_N, E_DD) and is given by, [19]:

P_HW = d(E_C)/dt + d(E_M)/dt + d(E_N)/dt + d(E_DD)/dt,  (47)

while the percentage of the PSU efficiency is given by, [19]:

%efficiency = 20%  if 0 ≤ %load ≤ 30%,
              50%  if 30% < %load ≤ 70%,
              100% if 70% < %load,  (48)

and the percentage of load is computed using the rated output power of the PSU, [19]:

%load = 100 · P_HW / Rated_output_power.  (49)

The total energy consumption of a computing node is computed as the sum of the energy values of all the hardware components of the node, [19]:

E_Node = E_C + E_M + E_N + E_DD + E_PSU,  (50)

while the sum of the energy consumption values of all the computing nodes gives the total energy consumption of the whole data center.

4. Experimentation

Several experiments were conducted using the simulation platforms presented, focusing on the adopted modeling approaches. The simulated time period was set to 1 h, while the models were tested for different numbers of VMs and physical machines (hosts).

The CPU servers of the data centers were arbitrarily set to the ‘‘ProLiant ML110 G5’’, whose characteristics and power consumption values were obtained from the SPEC benchmark, [21]. In order to simulate CPU-intensive applications, the processing length of each application was set equal to 9,000,000 MIs (Millions of Instructions), while the input and output size of each application was set equal to 5 MB and 30 MB respectively. Input and output sizes refer to the size of the input and output files of the job, while the processing length is a way to infer the time it takes to execute a job on a cloud resource (whose computing power is measured in Millions of Instructions Per Second).

The VM placement and allocation to hardware resources was static, with each host hosting one VM. The characteristics of the physical machines and the VMs that were used in the experiments are depicted in Table 2, while the inputs of the simulation experiments are depicted in Table 3. Although [7] proposes power models for CloudSched as presented in Section 3.1, the source

1 https://inet.omnetpp.org/.
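For constant per-state power, the bookkeeping of Eqs. (39)–(50) reduces to ‘‘power × accumulated time’’ sums plus the PSU loss; a sketch in Python (the state name and wattage in the example follow the defaults quoted in Section 4, while the rated output power is an assumed value):

```python
def state_energy_kwh(power_w, seconds):
    # Eqs. (39)/(41): integral of a constant state power over the
    # accumulated time the state was active, converted from W*s to kWh.
    return power_w * seconds / 3.6e6

def component_energy_kwh(state_power_w, state_time_s):
    # Eqs. (40)/(42): sum of the per-state energies of one component.
    return sum(state_energy_kwh(state_power_w[s], t)
               for s, t in state_time_s.items())

def psu_efficiency_pct(load_pct):
    # Eq. (48): piecewise PSU efficiency as a function of the load percentage.
    if load_pct <= 30.0:
        return 20.0
    if load_pct <= 70.0:
        return 50.0
    return 100.0

def psu_loss_w(p_hw_w, rated_output_w):
    # Eqs. (46) and (49): power drawn from the wall minus power delivered
    # to the hardware, i.e. the integrand of E_PSU.
    load_pct = 100.0 * p_hw_w / rated_output_w  # Eq. (49)
    return 100.0 * p_hw_w / psu_efficiency_pct(load_pct) - p_hw_w

# A disk that stays idle (5.4 W) for a full hour:
disk_kwh = component_energy_kwh({"disk idle": 5.4}, {"disk idle": 3600})
```

With 200 W of hardware power and an assumed 500 W rated output, the load is 40%, the efficiency bracket is 50%, and the PSU dissipates as much power as it delivers.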
code2 does not implement any power metrics, thus no results could be obtained. Additionally, the source code for GDCSim was not found at the link3 provided by the authors in [14,22]. Thus, experiments with the CloudSched and GDCSim simulators were not conducted in this survey.

The power consumption levels (in watts) of the ‘‘ProLiant ML110 G5’’ server are given in 10% CPU utilization intervals and are the following: [93.7, 97, 101, 105, 110, 116, 121, 125, 129, 133, 135], [21].

The estimations of the energy consumption (in kWh) for the different simulation frameworks, with different numbers of VMs and hosts using the characteristics and inputs of Tables 2 and 3, are depicted in Table 4.

Table 2
Hosts and VMs characteristics used in experiments.
Characteristics                Hosts               VMs
CPU frequency (no of cores)    2.66 GHz (2 cores)  2.66 GHz (1 core)
Memory size                    4 GB                4 GB
Storage size                   1000 GB             10 GB
Network bandwidth              1 GB/s              100 MB/s

Table 3
Simulation experiments inputs.
Simulation time             1 h
VM placement                First fit approach—No migrations
VMs per host                1 VM per host
Application type            CPU intensive
Number of applications      Equal to the number of VMs
Application length          9,000,000 MIs
Application input size      5 MB
Application output size     30 MB
Interconnection topology    All servers communicate with each other directly

Table 4
Approximation of energy consumption in kWh of different simulation frameworks, for different number of VMs and hosts.
Hosts, VMs   CloudSim   DCSim     GreenCloud   iCanCloud
50           4.6700     4.6850    4.6519       4.6800
100          9.3400     9.3700    9.3038       9.3576
200          18.6800    18.7400   18.6075      18.7128
500          46.7100    46.8500   46.5189      46.7785
1000         93.4200    93.7000   93.0378      93.5547

It can be observed that the estimated energy consumption in all simulation frameworks is at the same level for all numbers of hosts and VMs. The same tendency has also been noticed when using different host/VM characteristics and simulation inputs, i.e. all simulations produced similar estimations.

4.1. Comparison of CPU power models

Several executions were performed using the CloudSim framework in order to examine the power consumption using the different power models for CPU servers (Eqs. (19)–(23)). The inputs of the simulations were those of Table 3. For the linear interpolation model the values provided by SPEC were used, [21], while for the linear, square, cubic and square root models only the maximum and idle power consumption values of the ‘‘ProLiant ML110 G5’’ were utilized.

The approximations of the energy consumption (in kWh) for the different CPU power models and numbers of VMs and hosts are depicted in Table 5. It should be noted that the same tendency was observed when the same CPU power consumption models were implemented and tested in the DCSim framework.

Since no real data exist for all utilization percentages, no safe conclusion can be obtained regarding the accuracy of the models. Despite that, since the linear interpolation model uses measured power consumption values, the approximation is imposed only on the intermediate (interpolated) values of the intervals, and thus it can be assumed that it is the most accurate model. The relative errors of the other models remain at low levels.

4.2. Memory, storage and network models

The energy consumption of the memory, storage and network components is examined for both the GreenCloud and iCanCloud simulation frameworks using the inputs of Table 3.

In GreenCloud, the total energy consumption has been computed for the whole data center (including the energy consumed by network switches) and is compared with the energy consumption computed only for the servers. With this experiment, the impact of the network hardware switches for specific application types can be obtained. The CPU server power consumption was computed by using the maximum and idle power of the CPU (linear model). In Table 6 the energy consumed by the switches and the servers for different numbers of hosts and VMs is presented, while in the last column the impact of the network switches on the total energy consumption is depicted. The network impact was computed using the formula:

N_impact = E_switches / E_total.  (51)

It can be observed that the energy consumed by the switches is a considerable percentage of the total energy of the data center for these application characteristics, though this percentage is reduced as the number of servers/VMs grows. Thus, for large data centers consisting of thousands or millions of servers this percentage is expected to be reduced even more.

The energy consumption of the servers also contains the power consumed by their memory, disks and network interface cards. In Table 7, the energy consumption of the individual components, i.e. memory, disk and NICs, is given for different numbers of hosts/VMs. It should be noted that the power consumption values used for these components were set to: 8 W when the hard disk is active and 5.4 W when it is idle, 9 W when the network interface cards are active and 2 W when they are idle, and 1.915 W when the memory is active and 1.65 W when it is idle. The impact of each component was computed as:

Impact = E_component / E_total.  (52)

Further experiments were conducted in order to simulate applications with different characteristics, such as I/O-intensive applications. For this reason, the input and output size of the applications was set to 5 GB and 100 GB respectively, while the application length was set to 1,000,000 MIs. The energy consumption of the data center using this setup is presented in Table 8.

The impact of the aforementioned components on the total energy consumption remained low and constant as the number of hosts/VMs increased, for both application setups. More specifically, the impact of the disk was 2.62%, of the memory 2.15% and of the NICs 6.91%.

For the iCanCloud experiments, the hard disk energy states were set to: disk_off: 0 W, disk_IO: 8 W, disk_active: 8 W and disk_idle: 5.4 W, while the energy states of the memory were set to the following values: memory_off: 0 W, memory_idle: 1.65 W, memory_read: 1.899 W, memory_write: 1.915 W and memory_search: 1.693 W. The network interface card energy states were set to: network_off: 2.0 W and network_on: 9.0 W. The energy consumption of the data center components for different numbers of hosts and VMs is depicted in Table 9, using the setup of Table 3.

2 https://sourceforge.net/projects/cloudsched-uestc/.
3 http://impact.asu.edu/BlueTool/.
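The per-component impacts quoted above follow from Eq. (52) and the 1000-host GreenCloud totals reported in Table 7; a quick check:

```python
def component_impact(e_component_kwh, e_total_kwh):
    # Eq. (52): a component's share of the total data-center energy.
    return e_component_kwh / e_total_kwh

# 1000 hosts/VMs row of Table 7: total 93.0378 kWh, disk 2.4380, memory 2.0020.
disk_share = component_impact(2.4380, 93.0378)
memory_share = component_impact(2.0020, 93.0378)
```

Rounded to two decimals, these reproduce the 2.62% (disk) and 2.15% (memory) impacts quoted in the text.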
Table 5
CloudSim: Approximation of energy consumption in kWh, for different CPU power consumption models and different
number of VMs and hosts.
Hosts, VMs Linear Interp. Linear Square Cubic Square root
50 4.6700 4.6500 4.2300 4.0500 5.0400
100 9.3400 9.3100 8.4600 8.0900 10.0800
200 18.6800 18.6100 16.9200 16.1800 20.1600
500 46.7100 46.5300 42.3000 40.4600 50.3900
1000 93.4200 93.0600 84.6000 80.9200 100.7800
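The four utilization-based models compared in Table 5 differ only in the exponent applied to the utilization. Eqs. (19)–(23) themselves are not reproduced in this section, so the common form below is an assumption; P_idle and P_max are the ‘‘ProLiant ML110 G5’’ figures used in the experiments:

```python
def cpu_power_w(u, exponent, p_idle=93.7, p_max=135.0):
    """Utilization-based CPU power: P(u) = P_idle + (P_max - P_idle) * u**k,
    with k = 1, 2, 3 and 0.5 for the linear, square, cubic and square-root
    models; u is the utilization in [0, 1]."""
    return p_idle + (p_max - p_idle) * u ** exponent
```

For any partial utilization u in (0, 1) the predictions order as square root > linear > square > cubic, which matches the ordering of the columns in Table 5.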

Table 6
GreenCloud: Energy Consumption in kWh of servers and switches, and impact of network switches in the total energy
consumption.
Hosts, VMs Core switches Agg. switches Access switches Servers Net Impact
50 2.8240 5.6480 0.3138 4.6519 65.38%
100 2.8240 5.6480 0.3348 9.3038 48.63%
200 2.8240 5.6480 0.3768 18.6075 32.22%
500 2.8240 5.6480 0.5028 46.5189 16.17%
1000 2.8240 5.6480 0.7128 93.0378 8.99%
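The last column of Table 6 follows from Eq. (51); recomputing the first row as a check:

```python
def network_impact(core, agg, access, servers):
    # Eq. (51): switch energy as a fraction of the data-center total.
    e_switches = core + agg + access
    return e_switches / (e_switches + servers)

# 50 hosts/VMs row of Table 6 (energies in kWh).
impact_50 = network_impact(2.8240, 5.6480, 0.3138, 4.6519)
```

Rounded to two decimals, `100 * impact_50` gives 65.38, the value reported in the table; the 500-host row reproduces 16.17% in the same way.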

Table 7
GreenCloud: Energy consumption in kWh along with the energy consumption of the individual server components.
Hosts, VMs   Total     Disk     Memory   Nics
50           4.6519    0.1219   0.1001   0.3215
100          9.3038    0.2438   0.2002   0.6430
200          18.6075   0.4876   0.4004   1.2860
500          46.5189   1.2190   1.0010   3.2150
1000         93.0378   2.4380   2.0020   6.4300

Table 8
GreenCloud: Energy consumption in kWh along with the energy consumption of the individual server components using different application setup.
Hosts, VMs   Total     Disk     Memory   Nics
50           4.6541    0.1220   0.1002   0.3218
100          9.3082    0.2440   0.2004   0.6436
200          18.6163   0.4880   0.4008   1.2872
500          46.5409   1.2200   1.0020   3.2180
1000         93.0818   2.4400   2.0040   6.4360

Table 9
iCanCloud: Energy consumption in kWh along with the energy consumption of the individual server components.
Hosts, VMs   Total     Disk     Memory    Network
50           4.6800    0.4530   0.6853    0.5580
100          9.3576    0.9061   1.3707    1.1135
200          18.7128   1.8123   2.7413    2.2245
500          46.7785   4.5310   6.8533    5.5575
1000         93.5547   9.0622   13.7066   11.1125

Table 10
iCanCloud: Energy consumption in kWh along with the energy consumption of the individual server components using different application setup.
Hosts, VMs   Total     Disk     Memory    Network
50           4.6838    0.4568   0.6854    0.5580
100          9.3652    0.9136   1.3708    1.1135
200          18.7279   1.8272   2.7415    2.2245
500          46.8162   4.5680   6.8539    5.5575
1000         93.6299   9.1361   13.7078   11.1125

The same executions were performed for a different application setup, as in the GreenCloud experiments, where the application input and output sizes were set to 5 GB and 100 GB respectively, while the application length was set to 1,000,000 MIs. The results of this simulation are depicted in Table 10. It should be noted that the impact of the individual server components for the two application setups, although the application input and output requirements were significantly increased, remained at the same level; e.g. for 1000 hosts and VMs the hard disk impact increased from 9.69% to 9.76%.

5. Conclusions

In this paper, the most popular open source cloud simulation frameworks were examined, focusing on the models and metrics that are proposed in order to simulate the behavior of cloud infrastructures. The survey presents the energy models that were used for simulating the data center components, while experiments were conducted in order to compare the different simulation energy consumption models where possible.

In CloudSched, energy models were proposed only for the CPU physical servers, though these models were not implemented in the source code of the simulator. The iCanCloud simulator considers the energy impact of multiple components of the data center; more specifically, it provides energy models for the CPU servers, the memory, the storage, the network and the power supply unit (PSU). CloudSim uses several power models for CPU servers but does not consider the impact of other components on the energy consumption of the data center. GreenCloud considers the energy impact of CPU servers, network, memory and storage, while it also computes the energy consumption of the network switches. DCSim includes CPU power consumption models and does not consider any other component in the energy consumption of the data center. GDCSim considers the energy consumption of CPU physical servers and also provides a cooling model for the computation of the energy consumed by the cooling system. In Table 11, the energy consumption models supported by each cloud simulation framework are summarized.

From the experimentation with the CloudSim, DCSim, GreenCloud and iCanCloud frameworks, it can be observed that all simulators show the same tendency in the power consumption of data center servers as the numbers of hosts and VMs increase. The approximations of the total energy consumption were at the same level in all cases. The experimentation with CloudSim revealed that the estimations of the linear CPU power consumption model are very close to those provided by the linear interpolation model, which uses data from real measurements. The energy consumption of the memory, hard disks and network interface cards in the GreenCloud and iCanCloud simulators was examined for different application setups, and its share of the total was found to remain constant as the number of servers increases.

Results show the same tendency on all simulation platforms, but due to the poor scalability of the tools it is difficult to accurately predict the energy consumption of large scale cloud environments. Further work includes the consideration of parallel versions of the tools in order to scale to millions of cloud nodes, so that safe results can be obtained by testing with real case experiments.
Table 11
Energy consumption models of cloud simulation frameworks.
Simulator CPU Network Memory Storage PSU Cooling model
CloudSched ✓ ✗ ✗ ✗ ✗ ✗
CloudSim ✓ ✗ ✗ ✗ ✗ ✗
DCSim ✓ ✗ ✗ ✗ ✗ ✗
GDCSim ✓ ✗ ✗ ✗ ✗ ✓
GreenCloud ✓ ✓ ✓ ✓ ✗ ✗
iCanCloud ✓ ✓ ✓ ✓ ✓ ✗

Acknowledgment

This work is partially funded by the European Union's Horizon 2020 Research and Innovation Programme through CloudLightning project (http://www.cloudlightning.eu) under Grant Agreement No. 643946.

References

[1] David Guyon, Anne-Cécile Orgerie, Christine Morin, Deb Agarwal, How much energy can green HPC cloud users save?, in: 2017 25th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP), IEEE, 2017, pp. 416–420.
[2] James Byrne, Sergej Svorobej, Konstantinos M. Giannoutakis, Dimitrios Tzovaras, P.J. Byrne, Per-Olov Östberg, Anna Gourinovitch, Theo Lynn, A review of cloud computing simulation platforms and related environments, in: Proceedings of the 7th International Conference on Cloud Computing and Services Science (CLOSER 2017), SCITEPRESS - Science and Technology Publications, Lda, Portugal, 2017, pp. 679–691.
[3] Rahul Malhotra, Prince Jain, Study and comparison of various cloud simulators available in the cloud computing, Int. J. Adv. Res. Comput. Sci. Softw. Eng. 3 (9) (2013).
[4] Georgia Sakellari, George Loukas, A survey of mathematical models, simulation approaches and testbeds used for research in cloud computing, Simul. Model. Pract. Theory 39 (2013) 92–103.
[5] Utkal Sinha, Mayank Shekhar, Comparison of various cloud simulation tools available in cloud computing, Int. J. Adv. Res. Comput. Commun. Eng. 4 (3) (2015).
[6] Wei Zhao, Yong Peng, Feng Xie, Zhonghua Dai, Modeling and simulation of cloud computing: A review, in: 2012 IEEE Asia Pacific Cloud Computing Congress (APCloudCC), Nov 2012, pp. 20–24. http://dx.doi.org/10.1109/APCloudCC.2012.6486505.
[7] Wenhong Tian, Yong Zhao, Minxian Xu, Yuanliang Zhong, Xiashuang Sun, A toolkit for modeling and simulation of real-time virtual machine allocation in a cloud data center, IEEE Trans. Autom. Sci. Eng. 12 (1) (2015) 153–161.
[8] Rodrigo N. Calheiros, Rajiv Ranjan, Anton Beloglazov, César A.F. De Rose, Rajkumar Buyya, CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms, Softw. Pract. Exp. 41 (1) (2011) 23–50.
[9] Tom Guérout, Thierry Monteil, Georges Da Costa, Rodrigo Neves Calheiros, Rajkumar Buyya, Mihai Alexandru, Energy-aware simulation with DVFS, Simul. Model. Pract. Theory 39 (2013) 76–91.
[10] Saurabh Kumar Garg, Rajkumar Buyya, NetworkCloudSim: Modelling parallel applications in cloud simulations, in: Proceedings of the 2011 Fourth IEEE International Conference on Utility and Cloud Computing (UCC'11), IEEE Computer Society, Washington, DC, USA, 2011, pp. 105–113.
[11] Gaston Keller, Michael Tighe, Hanan Lutfiyya, Michael Bauer, DCSim: A data centre simulation tool, in: 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013), May 2013, pp. 1090–1091.
[12] Michael Tighe, Gaston Keller, Michael Bauer, Hanan Lutfiyya, DCSim: A data centre simulation tool for evaluating dynamic virtualized resource management, in: 2012 8th International Conference on Network and Service Management (CNSM) and 2012 Workshop on Systems Virtualization Management (SVM), Oct 2012, pp. 385–392.
[13] M. Tighe, G. Keller, J. Shamy, M. Bauer, H. Lutfiyya, Towards an improved data centre simulation with DCSim, in: Proceedings of the 9th International Conference on Network and Service Management (CNSM 2013), Oct 2013, pp. 364–372. http://dx.doi.org/10.1109/CNSM.2013.6727859.
[14] Sandeep K.S. Gupta, Rose Robin Gilbert, Ayan Banerjee, Zahra Abbasi, Tridib Mukherjee, Georgios Varsamopoulos, GDCSim: A tool for analyzing green data center design and resource management techniques, in: 2011 International Green Computing Conference and Workshops, July 2011, pp. 1–8. http://dx.doi.org/10.1109/IGCC.2011.6008612.
[15] Dejene Boru, Dzmitry Kliazovich, Fabrizio Granelli, Pascal Bouvry, Albert Y. Zomaya, Energy-efficient data replication in cloud computing datacenters, Cluster Comput. 18 (1) (2015) 385–402.
[16] Claudio Fiandrino, Dzmitry Kliazovich, Pascal Bouvry, Albert Y. Zomaya, Performance and energy efficiency metrics for communication systems of cloud computing data centers, IEEE Trans. Cloud Comput. (2015).
[17] Dzmitry Kliazovich, Pascal Bouvry, Samee Ullah Khan, GreenCloud: a packet-level simulator of energy-aware cloud computing data centers, J. Supercomput. 62 (3) (2012) 1263–1283.
[18] Alberto Núñez, Jose L. Vázquez-Poletti, Agustin C. Caminero, Gabriel G. Castañé, Jesus Carretero, Ignacio M. Llorente, iCanCloud: A flexible and scalable cloud infrastructure simulator, J. Grid Comput. 10 (1) (2012) 185–209.
[19] Gabriel G. Castañé, Alberto Núñez, Pablo Llopis, Jesús Carretero, E-mc2: A formal framework for energy modelling in cloud computing, Simul. Model. Pract. Theory 39 (2013) 56–75.
[20] Rajkumar Buyya, Manzur Murshed, GridSim: a toolkit for the modeling and simulation of distributed resource management and scheduling for Grid computing, Concurr. Comput.: Pract. Exper. 14 (13–15) (2002) 1175–1220.
[21] SPEC, Server Power and Performance Characteristics, 2008. http://www.spec.org/power_ssj2008/.
[22] Sandeep K.S. Gupta, Ayan Banerjee, Zahra Abbasi, Georgios Varsamopoulos, Michael Jonas, Joshua Ferguson, Rose Robin Gilbert, Tridib Mukherjee, GDCSim: A simulator for green data center design and analysis, ACM Trans. Model. Comput. Simul. (TOMACS) 24 (1) (2014) 3.
[23] Zahra Abbasi, Georgios Varsamopoulos, Sandeep K.S. Gupta, Thermal aware server provisioning and workload distribution for internet data centers, in: Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing (HPDC'10), ACM, New York, NY, USA, 2010, pp. 130–141.
[24] Faraz Ahmad, T.N. Vijaykumar, Joint optimization of idle and cooling power in data centers while maintaining response time, in: Proceedings of the Fifteenth Edition of ASPLOS on Architectural Support for Programming Languages and Operating Systems (ASPLOS XV), ACM, New York, NY, USA, 2010, pp. 243–256.
[25] Q. Tang, S.K.S. Gupta, G. Varsamopoulos, Energy-efficient thermal-aware task scheduling for homogeneous high-performance computing data centers: A cyber-physical approach, IEEE Trans. Parallel Distrib. Syst. 19 (11) (2008) 1458–1472.
[26] Li Shang, Li-Shiuan Peh, Niraj K. Jha, Dynamic voltage scaling with links for power optimization of interconnection networks, in: Proceedings of the 9th International Symposium on High-Performance Computer Architecture (HPCA'03), IEEE Computer Society, Washington, DC, USA, 2003, pp. 91–102.
[27] Priya Mahadevan, Puneet Sharma, Sujata Banerjee, Parthasarathy Ranganathan, A power benchmarking framework for network devices, in: Luigi Fratta, Henning Schulzrinne, Yutaka Takahashi, Otto Spaniol (Eds.), NETWORKING 2009: 8th International IFIP-TC 6 Networking Conference, Aachen, Germany, May 11–15, 2009, Springer, Berlin, Heidelberg, 2009, pp. 795–808.
[28] Pedro Reviriego, Vijay Sivaraman, Zhi Zhao, Juan Antonio Maestro, Arun Vishwanath, Alfonso Sanchez-Macian, Craig Russell, An energy consumption model for energy efficient ethernet switches, in: 2012 International Conference on High Performance Computing and Simulation (HPCS), July 2012, pp. 98–104. http://dx.doi.org/10.1109/HPCSim.2012.6266897.
[29] Lauri Minas, Brad Ellison, Energy Efficiency for Information Technology: How to Reduce Power Consumption in Servers and Data Centers, Intel Press, 2009.

Antonios T. Makaratzis is a Research Assistant in the Information Technologies Institute of CERTH, Greece. He received the diploma of Electrical and Computer Engineering from the Democritus University of Thrace in 2015. His research is focused on High Performance Scientific Computing, sparse matrix algorithms and parallel systems. His research work has been published at the Panhellenic Conference on Informatics and in the Journal of Supercomputing, in 2015 and 2016 respectively.
Konstantinos M. Giannoutakis is a Research Associate in the Information Technologies Institute of CERTH. He holds a B.Sc. in Mathematics from the University of Aegean, an M.Sc. in Computational Science from the National and Kapodistrian University of Athens, Department of Informatics and Telecommunications, and a Ph.D. in Computational Science from the Democritus University of Thrace, Department of Electrical and Computer Engineering. His research interests include high performance computing, scientific computing, parallel systems, grid/cloud computing, service oriented architectures and software engineering techniques. He has over 50 publications in journals, international conferences and books in the above research areas. He was also an adjunct professor at the Hellenic Open University (2010–2014).

Dimitrios Tzovaras is Director of the Information Technologies Institute of CERTH and a Visiting Professor at Imperial College, London. He holds a Diploma in Electrical Engineering and a Ph.D. in 2D and 3D Image Compression from the Aristotle University of Thessaloniki (AUTH). Prior to his current position, he was a Senior Researcher in the Information Processing Laboratory at the Electrical and Computer Engineering Department of the Aristotle University of Thessaloniki. His main research interests include knowledge management, multimodal interfaces, security, data fusion, artificial intelligence and visual analytics. His involvement with those research areas has led to the co-authoring of over 80 articles in refereed journals and more than 200 papers in international conferences. Since 1992, Dr. Tzovaras has been involved in more than 60 European and national projects, being the scientific and technical manager for more than 40 projects.
