
J Supercomput

https://doi.org/10.1007/s11227-018-2274-0

Design and energy-efficient resource management of virtualized networked Fog architectures for the real-time support of IoT applications

Paola G. Vinueza Naranjo1 · Enzo Baccarelli1 · Michele Scarpiniti1

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract With the incoming 5G access networks, it is forecasted that Fog comput-
ing (FC) and Internet of Things (IoT) will converge onto the Fog-of-IoT paradigm.
Since the FC paradigm spreads, by design, networking and computing resources over
the wireless access network, it would enable the support of computing-intensive and
delay-sensitive streaming applications under the energy-limited wireless IoT realm.
Motivated by this consideration, the goal of this paper is threefold. First, it provides
a motivating study of the main “killer” application areas envisioned for the considered
Fog-of-IoT paradigm. Second, it presents the design of a CoNtainer-based virtualized
networked computing architecture. The proposed architecture operates at the Middle-
ware layer and exploits the native capability of the Container Engines, so as to allow
the dynamic real-time scaling of the available computing-plus-networking virtualized
resources. Third, the paper presents a low-complexity penalty-aware bin packing-type
heuristic for the dynamic management of the resulting virtualized computing-plus-
networking resources. The proposed heuristic pursues the joint minimization of the
networking-plus-computing energy by adaptively scaling up/down the processing
speeds of the virtual processors and transport throughputs of the instantiated TCP/IP
virtual connections, while guaranteeing hard (i.e., deterministic) upper bounds on the
per-task computing-plus-networking delays. Finally, the actual energy performance-
versus-implementation complexity trade-off of the proposed resource manager is

B Paola G. Vinueza Naranjo
paola.vinueza@uniroma1.it
Enzo Baccarelli
enzo.baccarelli@uniroma1.it
Michele Scarpiniti
michele.scarpiniti@uniroma1.it

1 Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Rome, Italy


numerically tested under both wireless static and mobile Fog-of-IoT scenarios and
comparisons against the corresponding performances of some state-of-the-art bench-
mark resource managers and device-to-device edge computing platforms are also
carried out.

Keywords Fog computing · IoT · Real-time streaming applications · Energy efficiency · Design of virtualized networked computing architectures · Adaptive management of virtualized resources · Trustworthiness-enforcing mechanisms for container-based virtualization

1 Introduction

Internet of Things (IoT) aims at implementing the vision of a spatially distributed net-
work of a number of interconnected objects (i.e., things) that are uniquely addressable
and leverage the Internet as the universal communication protocol. When integrat-
ing such a number of objects with the communication capabilities envisioned by IoT,
the application scope automatically broadens to cover new services, such as automotive,
healthcare, retail, logistics, smart cities, manufacturing, advanced environment
monitoring, media and entertainment industry [1].
However, three main factors still hamper the full implementation of the pillar
paradigm of the IoT vision. First, IoT devices are typically resource poor, so that
the provisioning of suitable forms of device augmentation is a key ingredient in the
IoT ecosystem. Second, resorting to remote Cloud data centers for the provision of
device augmentation may introduce non-negligible latencies that degrade the perfor-
mance of real-time streaming applications, and cause traffic congestion phenomena in
the Internet backbone. Third, IoT devices are, by design, heterogeneous and this can
hamper the implementation of effective policies for the multiplexing of the available
computing-plus-networking resources [2].
In principle, a suitable integration of the IoT paradigm with three emerging tech-
nologies featuring the incoming 5G era (namely Fog computing (FC), resource
virtualization (RV) and broadband wireless Internet (BWI)) can speed up the full
migration of the IoT vision from the theory to the practice [3]. This is the focus of the
present paper. It deals with the design and energy-efficient dynamic resource manage-
ment of networked virtualized Fog computing architectures for the real-time support
of streaming applications.
The considered reference scenario is sketched in Fig. 1. It refers to a three-tier net-
worked computing Fog architecture for the distributed support of applications launched
by wireless IoT devices. Specifically, it is composed of:

1. A number of spatially distributed wireless (possibly, mobile) heterogeneous
devices (i.e., wireless things), such as sensors, tablets, smartphones, access points,
connected vehicles, smart meters for power grids, control systems for Industry 4.0
factories. Being battery-powered, these devices are energy-limited and equipped
with limited computing resources. These devices operate at the edge of the network
and constitute the IoT Layer of the overall three-tier architecture.


Fig. 1 General Fog-based networked computing platform. FN Fog node, IGWR internet gateway router,
WAN wide-area network

2. A set of spatially distributed (possibly, interconnected) Fog nodes (FNs). Each
FN serves a local cluster of proximate devices by providing them computing and
networking resources on an on-demand and per-device basis. For this purpose,
each FN acts as a (small-size) data center and it is equipped with a (limited) num-
ber of physical servers interconnected by Ethernet-type switches. Delay-sensitive
demands are locally processed by the serving FNs, while more delay-tolerant
requests may be forwarded by the serving FNs to remote (but more powerful)
Cloud nodes. The set of FNs constitutes the Fog Layer of the overall architecture.
3. A set of remote powerful Cloud data centers. Each Cloud node serves the delay-
tolerant computing requests of the underlying set of FNs and acts as a portal toward
remote users and service providers. The set of Cloud nodes constitutes the Cloud
Layer of the overall architecture.

In the considered scenario, each IoT device may decide to offload to the serving FN
its most computing-intensive tasks. Hence, in order to support device augmentation,
in Fig. 1 we have that:

1. Each device is associated with a virtual clone that runs on a physical server hosted by
the serving FN. The clone acts as a virtual processor and exploits the computing
and network physical resources of the hosting FN, in order to execute tasks on
behalf of the corresponding device;
2. Device–Fog communication is implemented through end-to-end UDP-TCP/IP
transport-layer two-way connections. They rely on single-hop short-range WiFi


or Bluetooth-based wireless (possibly, mobile) transmission links (see the green
rays of Fig. 1);
3. Inter-clone communication within the same FN is implemented through reliable end-
to-end TCP/IP transport-layer two-way connections that rely on wired Ethernet-
type switched transmission links;
4. Inter-Fog communication is guaranteed by a medium-range wireless backbone
that exploits broadband transmission technologies (such as IEEE 802.11a/g/n or
WiMax), in order to sustain reliable Fog-to-Fog high-throughput TCP/IP connec-
tions (see the orange ray of Fig. 1). For this purpose, multi-antenna technologies
could be also used [4,5];
5. Cloud–Fog communication is still implemented through TCP/IP connections that
are sustained by wide-area network (WAN) Internet backbones. These last may
be multi-hop and, depending on the actually considered application scenario, may
rely on 3G/4G long-range cellular transmission technologies [6].

1.1 Technical contribution of the paper

Being mainly powered by renewable energy sources (such as solar panels or wind
turbines [3]), the energy budget available at each FN is limited, so that an integrated
energy-efficient management of all computing-plus-networking resources available
at the devices and FNs is required. This is, indeed, the focus of this paper. Its main
contributions embrace:

1. A comparative study of the “killer” application areas envisioned for the considered
Fog-of-IoT paradigm. The goal is to characterize the main Quality of Service (QoS)
requirements of the supported streaming services;
2. The design and specification of the functions of the main blocks of the virtual-
ized architecture envisioned for the Middleware layer equipping the FNs. Main
features of the proposed architecture are that: (a) it exploits the emerging con-
tainer (CN)-based virtualization technology, in order to increase the per-server
virtualization density and allow the virtualization of a large number of (pos-
sibly, heterogeneous) IoT devices; and (b) it leverages the native capability of
the Container Engines, in order to allow dynamic real-time scaling of the per-
server virtualized resources. Viable security-enhancing mechanisms and related
container-over-virtual machine-based solutions are also envisioned and discussed;
3. The design of a penalty-aware bin packing (PABP)-type heuristic for the dynamic
resource management of the virtualized resources hosted by each FN. Specifically,
the proposed PABP heuristic exhibits the following main features:
(a) it pursues the joint minimization of the networking-plus-computing consumed
energy by adaptively scaling up/down the computing frequencies (i.e., the process-
ing speeds) of the virtual processors and the transport throughputs of the virtual
connections instantiated at the Middleware layer of each FN; (b) it guarantees hard
(i.e., deterministic) upper bounds on the computing-plus-networking delays of the
processed tasks; and (c) it allows admission control and resource consolidation on
a per-FN basis.


4. A discussion of the main implementation aspects of the proposed Middleware


platform and PABP-based resource management heuristic, in order to analyze
the resulting overall implementation complexity. Interestingly enough, the proposed
PABP resource management algorithm may be implemented in a distributed way,
so that the resulting asymptotic implementation complexity does not depend on
the size of the considered FNs.
5. The software implementation of the main blocks of the overall proposed Fog-based
computing architecture by exploiting the recently proposed iFogSim toolkit in [7].
The actual energy performance of the proposed PABP resource manager is numer-
ically tested by simulations and compared against the corresponding ones of some
state-of-the-art resource managers, namely the modified best-fit decreasing (MBFD) [8]
and the maximum density consolidation (MDC) [9] ones. This is done by considering both wireless
static and mobile application scenarios under real-world workload traces. Further,
performance comparisons of the overall proposed Fog-of-IoT computing architecture
against a Fog-free benchmark one based on device-to-device (D2D) communication
are also carried out.
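As a preview of the penalty-aware bin packing idea (the actual PABP heuristic is formalized in Sect. 7), the sketch below shows the general pattern on which such heuristics rest: the hard per-task deadline fixes the minimum feasible processing frequency of each task, and a first-fit-decreasing packing of these frequency demands onto capacity-limited servers minimizes the number of servers that must stay powered ON. All function names and numbers are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative penalty-aware first-fit-decreasing sketch (hypothetical,
# not the paper's PABP algorithm): tasks with tighter deadlines demand
# higher frequencies and are packed first, so that consolidation turns
# OFF as many servers as possible.

def min_frequency(task_bits, deadline_s):
    """Smallest processing speed (b/s) meeting the hard per-task deadline."""
    return task_bits / deadline_s

def pabp_sketch(tasks, server_capacity):
    """tasks: list of (size_bits, deadline_s) pairs.
    Returns the aggregate frequency load of each turned-ON server."""
    # Penalty-aware ordering: most demanding (tightest) tasks first.
    demands = sorted((min_frequency(b, d) for b, d in tasks), reverse=True)
    servers = []  # per-server allocated frequency (b/s)
    for f in demands:
        for i, load in enumerate(servers):
            if load + f <= server_capacity:   # first fit on an ON server
                servers[i] += f
                break
        else:
            servers.append(f)                 # no fit: power ON a new server
    return servers

loads = pabp_sketch([(4e6, 0.1), (1e6, 0.05), (2e6, 0.2)],
                    server_capacity=5e7)
print(len(loads))  # number of servers that must stay ON
```

With the numbers above, the three tasks require 4e7, 2e7 and 1e7 b/s, so two servers of capacity 5e7 b/s suffice; rejecting a task whose demand exceeds the residual capacity of every server would implement the admission-control feature mentioned in item 3(c).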
The rest of this paper is organized as follows. After reviewing the main related work
in Sect. 2, Sect. 3 briefly overviews four emerging broad QoS application areas that motivate
the development of the Fog-of-IoT framework of Fig. 1. Hence, after reviewing the
native features of the emerging CN-based virtualization technology in Sect. 4, Sect. 5
discusses some security and privacy-related aspects and presents a spectrum of cross-
layer security-enforcing mechanisms that could be effectively employed, in order to
enhance the trustworthiness of CN-based Fog computing platforms. Afterward, Sect. 6
details the proposed virtualized Fog architecture for the support of real-time IoT-based
streaming applications and, then, formalizes the afforded QoS resource management
problem. Section 7 presents the proposed PABP-type resource management heuris-
tic and discusses a number of related implementation aspects. Sections 8 and 9 are
devoted to numerically test and compare the energy and delay performances of the
overall proposed Fog-of-IoT networked computing architecture under both wireless
static and mobile application scenarios. Finally, the conclusive Sect. 10 recaps the
main contributions of the paper and gives some hints for future research.

2 Related work

The technical contribution of this paper focuses on the design of a Fog-based networked
computing architecture for the energy-efficient joint management of the networking
and computing resources under hard constraints on the overall tolerated computing-
plus-communication delay. In this regard, an examination of the current literature
supports the conclusion that, in principle, the body of the related work is scattered
over three main research directions.
A first research direction focuses on the resource allocation and task scheduling
aspects, while neglecting the energy consumption issues [10–13]. Specifically, the goal
of [10] is the minimization of the task execution times. For this purpose, an approach
based on the constrained linear programming is developed and its time performance is
tested. The work in [11] introduces a fully automated solution to the problem of dynam-


ically managing physical resources of a data center in IaaS platforms by minimizing
service-level objective (SLO) violations. The authors of [12] propose an earliest
deadline first (EDF) greedy heuristic, in order to properly perform the assignment of
the processed tasks to the available execution slots. The objective is the minimization
of the number of the utilized per-task execution slots under per-task deadline con-
straints. The authors of [13] formulate the resource allocation and scheduling problem
for data centers subject to continuous task arrivals as a constrained optimization prob-
lem, and their objective is the minimization of the number of tasks which miss their
deadlines. Overall, like our paper, all these contributions consider task deadline as the
principal constraint to be met. However, unlike our paper, these works do not consider
the related energy consumption issue.
A more recent second research direction stems from the consideration that the
energy consumption is becoming a main factor in supporting real-time services through
Fog computing architectures. Roughly speaking, the related technical contributions on
the energy-efficient resource management may be classified into two broad (partially
overlapped) areas, namely: (i) the contributions that attain energy saving by adaptively
scaling the CPU resources during task executions by resorting to dynamic frequency
and voltage scaling (DVFS) techniques [14,15]; and (ii) the contributions that save
energy by adaptively turning OFF the underutilized servers and migrating the hosted
pending tasks [16–19].
Among the contributions that deal with the adaptive scaling of the server resources,
the work in [14] focuses on the impact of the dynamic scaling of the CPU computing
frequencies on the energy consumption experienced by the execution of MapReduce-
type jobs. The goal of [15] is the attainment of an optimized delay-versus-energy trade-off
under bag-of-task-type applications that are executed on DVFS-enabled data centers.
Turning to the contributions that focus on the consolidation of underutilized
server resources as the main means for attaining energy reduction, the authors of [16]
rely on virtual machine (VM) migration as a primitive function, in order to imple-
ment server consolidation. Interestingly, they present two heuristic algorithms that
exploit server heterogeneity for the reduction of the energy consumption of deadline-
constrained batch workloads. The topic of the contribution in [17] is the optimization
of the VM placement in heterogeneous data centers. For this purpose, the approach
in [17] deals with the trade-off between two contrasting goals. The first goal is the
efficient spatial placement of VMs on servers, in order to minimize the number of
turned ON servers. The second goal is the balanced time placement of VMs that
exhibit similar resource demands, in order to avoid server overload. The authors of
[18] develop a resource management approach, in order to attain energy saving in
heterogeneous data centers subject to burst input workload. For this purpose, short-
time forecasting of the future task arrivals is periodically performed [20] and resource
provisioning is carried out on the basis of the forecast peak workload. In order to
attain a similar goal, the authors of [19] resort to a threshold-based approach, whose tar-
get is the reduction of the energy consumed by underutilized servers. According to this
approach, underutilized servers are put into a sleep mode and, then, they stay sleep-
ing until the number of queued pending tasks exceeds a given (possibly, adaptive)
threshold.


Overall, like our work, all the contributions in [14–19] consider the energy consumption
as a main issue. However, unlike our work: (i) all these contributions neglect
the energy consumed by the networking infrastructures needed to support the inter-
and intra-data center traffic flows; (ii) the contributions in [14,16]
do not consider the streaming nature of the input workload; and (iii) the papers in
[18,19] do not consider hard constraints on the allowed task execution times.
Motivated by these considerations, a third and last research direction focuses on
the resource management of Cloud/Fog-based distributed computing architectures for
the energy-efficient support of real-time Big Data Streaming (BDS) applications by
resource-limited wireless devices [21–24]. Specifically, the S4 and DS streams man-
agement frameworks in [21,22] perform dynamic resource scaling of the available
virtualized resources by explicitly accounting for the delay-sensitive nature of the
streaming workload offloaded by proximate wireless devices. The time stream and
PLAstiCC resource orchestrators in [23,24] integrate dynamic server consolidation
and inter-server live VM migration. Overall, like our contribution, the shared goals of
[21–24] are: (i) the provisioning of real-time computing support to BDS applications
run by resource-limited wireless devices through the exploitation of the virtualized
resources made available by proximate FNs; and (ii) the minimization of the overall
inter/intra-data center computing-plus-networking energy consumption under BDS
applications. However, unlike our contribution: (i) the resource management
techniques developed in [21,22] are not capable of self-tuning to the (possibly,
unpredictable) time fluctuations of the workload to be processed; and (ii) the man-
agement approaches pursued in [23,24] do not guarantee hard limits on the resulting
per-task execution times.

3 Motivations for the Fog paradigm: emerging QoS applications and related challenges

FC stems as an innovative model that can complement the Cloud model through the
distribution of the computing-plus-networking resources from remote data centers
toward edge devices. The final goal of the FC paradigm is to save energy and band-
width, while simultaneously increasing the QoS level provided to the users.
As a consequence, FNs are virtualized networked data centers, which run atop
(typically, wireless) access points (APs) at the edge of the access network, in order to
give rise to a three-tier IoT–Fog–Cloud hierarchical architecture (see Fig. 1).
Main innovative attributes of the Fog paradigm are the following ones [3,25–27]:

– Edge location and location awareness Being deployed in proximity of the served
IoT devices, FNs may efficiently leverage the awareness of the states of the com-
munication links (i.e., WiFi-based single-hop TCP/IP transport-layer connections)
for the support of delay and delay–jitter-sensitive applications, like video stream-
ing;
– Pervasive spatial deployment FNs support distributed applications, which demand
for wide spatial deployments, like wireless sensor network (WSN)-based applica-
tions;


– Support for the mobility of the served devices FNs may exploit Fog-to-Thing (F2T)
and Thing-to-Fog (T2F) single-hop WiFi links for data dissemination/aggregation;
– Low energy consumption through adaptive resource scaling A main target of
the Fog paradigm is the reduction of both the computing and networking energy
consumptions through the adaptive horizontal (i.e., intra-FNs) and vertical (i.e.,
inter-FNs) scaling of the overall available resource pool;
– Heterogeneity of the served devices According to the IoT paradigm, FNs must
be capable of serving a large spectrum of heterogeneous devices that ranges from
simple RFID tags to complex user smartphones, tablets and multimedia mobile
sensors;
– Dense virtualization IoT devices are resource-limited and densely deployed over
the spatial domain. Hence, in order to provide device augmentation at minimum
resource costs, FNs must be capable of multiplexing a large number of virtual clones
with different resource demands onto few physical servers;
– Device isolation In order to guarantee trustworthiness to the served devices, the
corresponding clones must run atop Fog servers as isolated virtual clones.

3.1 Emerging application scenarios, QoS requirements and motivating open issues

The quantitative analysis of the (typically heterogeneous) QoS demands of Fog-supported
IoT services is a topic that, to date, seems to be largely unexplored.
In principle, computing and networking-intensive applications that require real-time
processing of spatially distributed environmental data may gain benefit from the
integration of the pillar Fog and IoT paradigms. As deeply detailed in [28], these
characteristics are indeed exhibited by four broad application areas of growing prac-
tical interest, namely Industry 4.0 [29], Internet of Energy, Big Data Streaming and
Smart City.
By design, all the envisioned application scenarios require a vertical integration
of four main subsystems, namely the IoT-based physical resource layer, the network
layer, the proximate Fog layer and the remote control layer. Specifically:
1. The physical resource layer is IoT-based and comprises smart things, like smart
products, smart machines, smart buildings, vehicular mobility, smart conveyors,
gateways, actuators.
2. The network layer provides, in turn, the communication services that are required
by the underlying physical smart things, in order to implement the needed inter-
thing negotiation mechanisms and communicate with the Fog layer.
3. Thanks to the virtualization of the underlying IoT things, it is expected that the Fog
layer is capable of providing the scalable processing environment required by big
data applications, in terms of computing, storage and networking resources.
4. Finally, the control layer allows remote users to access the services offered
by the smart environments through user-friendly application interfaces, Web-based
portals and Internet gateways.
Let us observe that all the above described application scenarios share three common
features. First, they rely on three-tier device–proximate Fog–remote Cloud architec-


tures. Second, they exploit single-hop WLANs and multi-hop WANs, in order to attain
device–Fog and Fog–Cloud connectivity, respectively. Third, in the case in which there
are multiple proximate FNs, these platforms are typically equipped with (possibly,
wireless) backbones, in order to provide inter-Fog communication. By accounting for
these common features, we can place the described application scenarios (Industry 4.0,
Internet of Energy, Big Data Streaming and Smart City) into the general technological
Fog platform of Fig. 1.
Due to their delay-sensitive nature, the energy-saving support of the aforementioned
applications requires a preliminary characterization of the corresponding per-service
resource usages. Motivated by this consideration, in Table 1 we report a synoptic
(gross) indication of the networking QoS requirements that are forecast to arise from
the considered application fields [1–3].
Meeting these requirements opens the doors, in turn, to a set of challenging issues
that should be suitably addressed, in order to allow the actual implementation of the
technological platform of Fig. 1. Specifically, we note that:

– The convergence of the IoT, Fog and Cloud models fostered by the Industry 4.0
paradigm requires [30]: (i) the design of energy-efficient distributed decision-making
mechanisms, in order to sustain inter-thing real-time cooperation; and (ii) the
design of an ecosystem of adaptive controllers, in order to enforce self-organization
of the underlying factory things;
– In the realm of the Internet of Energy, main open issues concern [31]: (i) the
design of distributed resource management techniques for the dynamic allocation
of the available bandwidth, computing and storage, so as to cope with the spatial
fluctuations of the energy demands; and (ii) the design of efficient and user-friendly
mechanisms that allow the users to control the services provided by
the Internet of Energy technological platform;
– The support of Big Data Streaming services requires the real-time offloading of data to
proximate and/or remote data centers over the available access-plus-Internet net-
works, together with the corresponding real-time reconfiguration of the intra-data
center computing-plus-networking resources [32]. The final target is the mini-
mization of the overall inter-/intra-data center computing-plus-networking energy
consumptions under the QoS requirements of Table 1;
– Since Smart cities aim at offering a number of innovative services (i.e., real-time
traffic information, intelligent transportation, multimodal ticketing services and
dynamic interface with the public administration), a main open issue regards the

Table 1 Forecast QoS requirements under the application environments of Sect. 3.1

Fields               Latency (ms)   Latency jitter (ms)   Packet loss rate   Bandwidth (Mb/s)
Industry 4.0         ≤ 5            ≤ 0.5                 ≤ 10^−4            ≥ 0.2
Internet of Energy   ≤ 200          ≤ 15                  ≤ 10^−2            ≥ 0.05
Big Data Streaming   ≤ 100          ≤ 10                  ≤ 10^−2            ≥ 10
Smart City           ≤ 10           ≤ 3                   ≤ 10^−3            ≥ 2

design of energy-efficient scalable optimization tools for the real-time distributed
analysis of environmental data streams and users’ queries [33].
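The per-field thresholds of Table 1 lend themselves to a simple admissibility check: a transport connection can sustain a given application field only if it simultaneously meets all four bounds. The sketch below hard-codes the table values for illustration; the field keys and function name are our own naming.

```python
# QoS thresholds of Table 1, keyed by application field:
# (max latency ms, max jitter ms, max loss rate, min bandwidth Mb/s).
QOS = {
    "industry_4.0":       (5,   0.5, 1e-4, 0.2),
    "internet_of_energy": (200, 15,  1e-2, 0.05),
    "big_data_streaming": (100, 10,  1e-2, 10),
    "smart_city":         (10,  3,   1e-3, 2),
}

def admissible(field, latency_ms, jitter_ms, loss_rate, bandwidth_mbs):
    """True iff the measured connection profile meets every Table 1 bound."""
    max_lat, max_jit, max_loss, min_bw = QOS[field]
    return (latency_ms <= max_lat and jitter_ms <= max_jit
            and loss_rate <= max_loss and bandwidth_mbs >= min_bw)

print(admissible("smart_city", 8, 2, 5e-4, 4))      # all four bounds met
print(admissible("industry_4.0", 8, 0.2, 1e-5, 1))  # latency bound violated
```

Note that Industry 4.0 is by far the most demanding field in terms of latency and loss, which is one reason why processing must stay at proximate FNs rather than remote Clouds.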
Overall, motivated by these considerations, in the following sections we develop
the architecture of a virtualized networked computing platform for the Fog-aided
support of real-time streaming services and, then, we present a suitable low-complexity
heuristic for the corresponding energy-efficient management of the hosted virtualized
computing-plus-networking resources.

4 Service models and adopted virtualization of the Fog-of-IoT realm

In computing systems that utilize only remote Cloud data centers, the IoT devices at
the edge of the network may communicate with the Cloud servers by exploiting only
Internet-based multi-hop WANs. In fact, under this scenario, all the computing and
storage resources needed by device augmentation are in the remote Clouds and IoT
devices may access these remote resources by exploiting only the server–client model.
The picture changes radically under the IoT-Fog ecosystem of Fig. 1: the physical
resources needed by device augmentation are no longer concentrated into the remote
Cloud. This makes available, in turn, three basic service models for the workload
execution, namely the offloading, aggregation and peer-to-peer models.
Under the offloading model, FNs act as switches, in order to offload the traffic from
the devices to the remote Cloud (i.e., up-offloading) and from the remote Cloud to
the devices (i.e., down-offloading). In both cases, FNs perform a (possibly, partial)
processing of the switched workload, in order to reduce the communication latency
and forward to the remote Cloud only the most computing-intensive and delay-tolerant
tasks.
Under the aggregation model, data streams generated by multiple devices are gath-
ered (and, possibly, suitably fused) by a multiplexing node at the edge of the network
and, then, are routed to the remote Cloud through the Internet WAN for further pro-
cessing (see Fig. 1). The multiplexing node is a gateway router. It is connected to
a 3G/4G or a (future) 5G base station, and it may be also equipped with (limited)
processing capability.
Finally, under the peer-to-peer (P2P) model, proximate devices make available
their computing and storage capabilities, in order to share tasks and cooperate for
workload execution. For this purpose, the cooperating devices build up D2D links that
typically rely on short-range wireless communication technology, like UWB, WiFi or
Bluetooth [34].
The choice of the right service model depends on the considered application and the
level of context awareness of the involved IoT devices. In this regard, we anticipate
that the simulated scenarios of Sects. 8 and 9 mainly refer to the up/down-offloading
service model that is used for providing device augmentation [3].
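The up/down-offloading decision described above can be caricatured as a tiny dispatcher: the serving FN locally executes delay-sensitive tasks it can complete within their deadlines, forwards delay-tolerant ones to the remote Cloud, and rejects infeasible requests (admission control). The threshold and all names below are illustrative assumptions, not prescriptions of the paper.

```python
# Hypothetical sketch of the offloading service model: route each task
# based on its deadline and the FN's local computing capacity.

def dispatch(task_deadline_s, task_cycles, fn_capacity_cycles_per_s,
             delay_tolerant_threshold_s=1.0):
    local_exec_time = task_cycles / fn_capacity_cycles_per_s
    if task_deadline_s >= delay_tolerant_threshold_s:
        return "cloud"    # delay-tolerant: up-offload over the Internet WAN
    if local_exec_time <= task_deadline_s:
        return "fog"      # delay-sensitive and feasible at the serving FN
    return "reject"       # admission control: hard deadline not attainable

print(dispatch(0.05, 1e6, 1e9))    # tight deadline, feasible locally -> "fog"
print(dispatch(5.0, 1e9, 1e9))     # delay-tolerant -> "cloud"
print(dispatch(0.001, 1e8, 1e9))   # infeasible deadline -> "reject"
```

A real dispatcher would also fold in the device-to-FN transport delay, which Sect. 6 accounts for through the TCP/IP connection throughputs.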

4.1 The proposed container-based virtualization

Virtualization is employed in Cloud- and Fog-based data centers, in order to [35]:
(i) dynamically multiplex the available physical computing, storage and networking


Fig. 2 Proposed container-based virtualization of a physical server equipping a Fog node. a Proposed
virtualized server architecture; b architecture of a virtual processor. HW CPU hardware, NIC network
interface card, HOS host operating system, VC virtual core, VP virtual processor, f processing frequency

resources over the spectrum of the served devices; (ii) provide a homogeneous user
interface atop (possibly) heterogeneous served devices; and (iii) isolate the applications
running atop the same physical server, in order to provide trustworthiness. Roughly
speaking, in virtualized data centers, each served physical device is mapped into a
virtual clone that acts as a virtual processor and executes the programs on behalf of
the cloned device. In principle, two main virtualization technologies could be used to
attain device virtualization, namely the (more traditional) virtual machine (VM)-based
technology [35] and the (emerging) container (CN)-based technology [36].
In a nutshell, their main architectural differences are that [36]: (i) the VM technology
relies on a Middleware software layer (i.e., the so-called Hypervisor) that statically
performs hardware virtualization, while the CN technology uses an execution engine,
in order to dynamically carry out resource scaling and multiplexing; and (ii) a VM is
equipped with its own (typically, heavyweight) guest operating system (GOS), while
a CN comprises only application-related (typically, lightweight) libraries and shares
with the other containers the host operating system (HOS) of the physical server.
As a consequence, the main pros of the CN-based virtualization technology are
that: (i) CNs are lightweight and can be deployed significantly quicker than VMs; and
(ii) the physical resources required by a running container can be scaled up/down in
real-time by the corresponding execution engine, while, in general, physical resources
are statically assigned to a VM during its bootstrapping.
Overall, due to the expected large number of devices to be virtualized in IoT appli-
cation environments, resorting to the CN-based virtualization makes it possible to increase
the number of virtual clones per physical server (i.e., the so-called virtualization den-
sity).
Motivated by this consideration, in Fig. 2a, we report the main functional blocks
of the proposed virtualized architecture of the physical servers at the Fog node [36].
In this regard, three main explanatory remarks are in order.
First, each server hosts a number M_CN ≥ 1 of containers. All these containers share
the pool of computing (i.e., CPU cycles) and networking (i.e., I/O bandwidth) physical
resources made available by the CPU and network interface card (NIC) that equip the


host server. The task of the Container Engine of Fig. 2a is to dynamically allocate to the
requiring containers the bandwidth and computing resources made available by the
host server.
Second, each container plays the role of virtual clone for the associated physical
thing. For this purpose, each container is equipped with a virtual processor (VP) that
comprises (see Fig. 2b): (i) a buffer that stores the currently processed application
tasks; and (ii) a virtual core (VC) that runs at the processing frequency f dictated by
the Container Engine. Therefore, the goal of the task manager of Fig. 2a is to allocate the
pending application tasks over the virtual core of Fig. 2b.
Finally, without loss of generality, in the sequel, we assume that the processing
frequency f of Fig. 2b is measured in bit-per-second (b/s), while the task sizes are
measured in bit. However, according to [37], the corresponding number s of CPU
cycles per second may be directly computed as:

s = δ × f, (1)

where δ [measured in CPU cycles per bit, i.e., (CPU cycles/b)] is the so-called process-
ing density of the running application. It fixes the (average) number of CPU cycles per
processed bit, and its value increases with the computing intensity of the considered
application (see [37] and references therein for more details on this aspect).
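As a quick numerical illustration of Eq. (1), the conversion from processing frequency to demanded CPU speed may be sketched as follows (the values chosen for δ and f are purely hypothetical):

```python
# Worked example of Eq. (1): s = delta * f.
# Hypothetical values: an application with processing density
# delta = 24 (CPU cycles/bit), processed at f = 10 Mb/s.
delta = 24.0   # processing density (CPU cycles/bit) -- assumed value
f = 10e6       # processing frequency (bit/s) -- assumed value

s = delta * f  # demanded CPU speed, in CPU cycles per second
print(s)       # 240000000.0, i.e., 2.4e8 cycles/s
```

As expected, a higher processing density (i.e., a more computing-intensive application) proportionally raises the demanded CPU speed for the same bit rate.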

5 Enforcing security in container-based virtualized platforms

Table 2 summarizes the native features of the VM-based and CN-based virtualization
technologies, in order to give insight into the corresponding pros and cons. In this regard,
we point out that, since all containers hosted by a physical server exploit the same HOS
(see Fig. 2a), in principle, a possible drawback of the CN-based virtualization tech-
nology is that its level of inter-application trustworthiness may be
lower than that guaranteed by VM-based virtualized architectures.
Motivated by this consideration, the following three subsections are devoted to: (i)
discuss the main security threats that could affect CN-based virtualized networked
computing Fog platforms; (ii) envision possible security-enforcing cross-layer mech-
anisms; and (iii) present some emerging security-enhancing solutions that aim at

Table 2 Container (CN)-versus-virtual machine (VM) comparison

CN native features                               VM native features
-----------------------------------------------  ---------------------------------------------------
Low per-CN bootstrapping delays                  Large per-VM bootstrapping delays
Equipped with only the host operating system     Equipped with both host and guest operating systems
High number of CNs per physical server           Low number of VMs per physical server
Dynamic resource allocation on a per-CN basis    Static resource allocation on a per-VM basis
Low level of CN isolation                        High level of VM isolation


suitably combining the best of both CN-based and VM-based virtualization tech-
nologies.

5.1 Main expected security threats in CN-based virtualized Fog platforms

In principle, sharing the same HOS and Container Engine (see Fig. 2a) may make CN-
based virtualized networked platforms prone to the following spectrum of security
threats:
– Information sniffing and privacy leakage Being located at the edge of the wireless
access networks (see Fig. 1), CN-based virtualized Fog nodes implement various
software (SW) interfaces (e.g., APIs) that could make available to a malicious
container sensitive information about the surrounding cyber-physical environment.
This is, for example, the case of information about the data traffic conveyed by the
utilized access networks and/or the IP addresses of the served users.
– Privilege misuse Malicious containers can attempt to take advantage of the
imperfect inter-CN isolation through an escalation of their access privileges, in order
to acquire the control of strategic infrastructures of the hosting Fog nodes. This
threat is exacerbated by the fact that virtual containers may perform, by design,
inter-Fog migrations.
– Misuse of virtualized resources By leveraging the shared nature of the Container
Engine (see Fig. 2a), a malicious container can execute malicious codes that do not
target the hosting Fog node, but affect (possibly, remote) vulnerable IoT devices
that are served by other Fog nodes. For this purpose, the malicious container may
attempt to crack device passwords, or host botnet servers.
– Denial of Service (DoS) A malicious container can try to saturate (e.g., fully
consume) the communication and/or computing resources of the hosting Fog
node. This threat is quite serious in the considered Fog environment, because
the resources available at each Fog node are not so large as those equipping Cloud
data centers and, then, they may be quickly saturated.
– Container manipulation A malicious container equipped with sufficient privileges
can launch a spectrum of attacks to other CNs running atop the hosting Fog node.
These attacks can range from the sniffing of sensitive stored data to the manip-
ulation of the codes that are executed by the attacked CNs. Further, a malicious
container can also infect the attacked CNs with viruses that, in turn, may com-
promise the security of other Fog nodes through the migration of the infected
CNs.
– Attacking the reliability of the virtualized network infrastructures The communi-
cation technologies supporting the access of IoT devices (e.g., 3G/4G/5G cellular
and WiFi ones) are typically equipped with their own security protocols. How-
ever, different security protocols generate different trust domains. Hence, a first
challenge of the networking security in Fog environments concerns the secure dis-
tribution of credentials to geographically scattered IoT devices, in order to make
possible the negotiation of session keys among different trust domains. Besides,
emerging network virtualization techniques [such as, for example, software-
defined networking (SDN) and network function virtualization (NFV)] are, by


design, software-based and, then, vulnerable. This increases the chance of successfully
attacking single IoT devices, which, in turn, provide Trojan horses for launching
attacks on the overall Fog ecosystem of Fig. 1.

5.2 Envisioned security-enforcing cross-layer mechanisms for CN-based virtualized Fog platforms

The above considerations call for robust cross-layer security mechanisms that are
capable of covering the overall computing-plus-networking virtualized protocol stack of
Fig. 2a. Roughly speaking, a viable approach may be to build containers over virtual
machines (i.e., the CN-over-VM approach). The final goal of this approach would be
to retain the best of the CN-based and VM-based virtualization technologies, namely low
latencies, support of device mobility, attainment of interoperability and service
support under intermittent network connectivity. Hence, according to the CN-over-
VM approach, we envision the following main security-enhancing mechanisms to be
integrated into the virtualized platform of Fig. 2a:
– Enforcing access control by user's trust and authentication Trust mechanisms
allow the Fog system to know the true identity of the virtualized devices it is
interacting with, in order to selectively allow them to access only specified Fog
resources. Hence, the presence of an authentication/authorization infrastructure is
mandatory, in order to: (i) check the credentials of the querying IoT devices; and
then, (ii) selectively authorize their requests on the basis of their own credentials.
Hence, motivated by the distributed nature of the Fog system of Fig. 1, we envision
the deployment of an ecosystem of trust agents that: (i) operate on a per-Fog-
node basis; (ii) allow the distributed processing of the credentials of geographically
scattered IoT devices; and then, (iii) enable the inter-Fog migration of only trusted
containers.
– Enforcing data/code privacy In the ecosystem of Fig. 1, personal data may be
offloaded to serving Fog nodes and, then, stored and processed by agents that are
outside the control of their owners. Therefore, it is mandatory to provide users
with suitable mechanisms that not only protect the offloaded data but also allow
the users to selectively query and/or process them. These contrasting requirements
demand, in turn, a balanced mix of anonymity and responsibility. This means
that, in principle, any container hosted by a Fog node could be authorized (if
necessary) to access any data stored by any other container, but, if the querying
container misbehaves, it should be possible to use some mechanisms to identify
and ban it.
– Enforcing fault resilience Due to the shared nature of the CN-based virtualized
platforms, misconfigurations, vulnerabilities and outdated software may allow
malicious containers to disable or take the control of core elements of the whole
hosting Fog node. Hence, it is valuable to integrate into the virtualized platform
of Fig. 2a suitable mechanisms (e.g., fail-over procedures, redundant operations,
disaster recovery strategies) that allow the hosted containers to continue their tar-
get operations even in the presence of fault events. In this regard, we envision
recovery mechanisms that are capable of exploiting the interconnected nature of the


Fog nodes of Fig. 1, in order to cope with fault events that locally affect single
data centers through suitable migrations of the compromised containers.

5.3 Promising security-enhancing solutions for CN-based virtualized Fog platforms

In this subsection, we briefly present some recent security-enhancing solutions that
pursue the (aforementioned) CN-over-VM approach and, then, could be directly inte-
grated into the considered virtualized platform of Fig. 2a.
– Promising solutions for secure access control and distributed authentica-
tion/authorization The OPENi framework [38] develops fine-grained access
control mechanisms that may be suitable for Fog-of-IoT application scenarios.
In fact, in the framework of [38], the administrator of a Fog node autonomously
defines the access rights on a per-resource and per-device basis by dynamically
building up and updating a local access control list. The main merit of this approach
is to allow each Fog owner (e.g., Fog administrator) to define what operations can
be performed on a specific resource by a specific device.
Other approaches [39,40] resort to cryptographic primitives to enforce attribute-
based access control policies. For this purpose, each user device is equipped with a
specific list of attributes, and suitably defined access control rules link the attributes
to the allowed operations and resource usages. The key feature of these crypto-
graphic mechanisms is to allow the service providers to directly use the device
attributes as credentials, in order to get permissions to dynamically instantiate
containers onto Fog nodes.
According to the policy management framework for Fog computing proposed in
[41], the Container Engine of Fig. 2a is supported by a policy management module,
which implements various components, such as a repository of access rules and an
attribute database. These components may be employed for collectively enforcing
access control policies at multiple levels, e.g., at the level of Fog nodes, containers and
also IoT devices. As a consequence, the approach in [41] looks (very) suitable for
the support of inter-domain trust migration of containers.
– Promising solutions for private computation Offloading computation-intensive
tasks to serving Fog nodes is the basic service offered by the Fog ecosystem of
Fig. 1. Since the offloaded data may contain sensitive information, information
leakage may happen when the serving Fog nodes are not trusted. In addition to
information leakage, due to (possible) code misbehaving (e.g., SW bugs), Fog
nodes may also return to the IoT devices wrong results. Hence, to attain secure
and private computation, it is required that the containers hosted by the serving
Fog nodes run the submitted tasks and check the correctness of the performed
computations without any knowledge of the original data offloaded by the users
[42]. This is the goal of the solution presented in [43], where the offloaded tasks are
split into two components, namely the public-owned solvers and the private-owned
data. After performing a (secret) privacy-preserving transformation of the private
data, the device offloads the encrypted data for container execution, and then, the
container will return back the results for the transformed data. At this point, a


set of conditions is applied by the device, in order to check the correctness
of the returned data. If the check is positive, the device proceeds to decrypt the
returned solution to the original problem by using its own secret key. Overall,
the appealing feature of this approach to private computation is that it incurs
minimal additional time and computing overhead on both the device and the associated
container.
– Promising solutions for improving trustworthiness and reliability Due to its dis-
tributed nature, a viable solution for improving the trustworthiness of CN-based
virtualized Fog data centers may resort to the notion of virtual trusted platform
modules (vTPMs) recently presented in [44]. By enclosing each container into a
suitable vTPM, this approach is capable of providing trusted service primitives (e.g.,
cryptographic functions and secure data storage) to any container that runs atop
a (shared) HOS (see Fig. 2a). These trusted service primitives may be, in turn,
composed, in order to implement various other security services that are relevant
to the Fog environment, like, for example, CN bootstrapping, CN migration and
CN duplication. In this regard, we observe that, since the Fog ecosystem is com-
posed of a number of spatially distributed devices and networking connections,
the (aforementioned) CN duplication (also referred to as CN cloning) constitutes
a basic means for improving the Fog reliability. This topic is discussed in [45],
where it is pointed out that it is essential to plan for failure of IoT devices and
network links, in order to improve the reliability of the overall Fog ecosystem.
Roughly speaking, the solution presented in [45] leverages the native redundancy
of the data acquired by spatially colocated IoT devices, in order to effectively cope
with device and/or link failure events.

6 The proposed virtualized Fog-of-IoT architecture for the support of streaming applications

By leveraging the CN-based virtualization of Fig. 2, Fig. 3 reports the main blocks of
the proposed virtualized networked architecture. It operates at the Middleware layer
of each FN. In Fig. 3, time is slotted, t is the discrete-valued slot index, and the t-th slot
spans the semi-open time interval [t·T_S, (t + 1)·T_S), where T_S (measured in seconds)
is the time duration of each slot. According to Fig. 3, the proposed architecture is
composed of:
– The input buffer that stores the workload received during each time slot. At the
beginning of each slot, the input buffer is emptied and the input workload is
processed by the computing platform of Fig. 3. In order to perform access control
and guarantee hard bounds on the overall resulting processing delay, the storage
capacity S_buf^{max} (measured in bit) of the input buffer is limited up to the maximum
workload that the computing platform of Fig. 3 may process during a single time
slot, that is, S_buf^{max} ≤ f^{max} · T_S, where f^{max} (bit/s) is the summation of the maximum
processing frequencies of all available containers of Fig. 3;
– The output buffer that stores the workload processed during each time slot. At the
beginning of each time slot, the output buffer is emptied and the output workload
is rendered to the IoT devices;


Fig. 3 The proposed Fog virtualized architecture. It operates at the middleware layer of the corresponding
protocol stack. CN container, MCN number of available containers

– The dynamic workload dispatcher that allocates the input workload and dynam-
ically reconfigures and consolidates the available computing-plus-networking
physical resources, in order to minimize the consumed energy;
– The virtual switch that manages the TCP/IP transport connections on an end-to-end
basis and performs network flow control;
– The bank of containers (see Fig. 2);
– The Container Engine that dynamically allocates to the requiring containers the
bandwidth and computing resources made available by the host server (see the
bottom layers of Fig. 3).
Further, in the proposed architecture, we also have that: (i) {SE(s), s = 1, …, S_SE}
is the set of the available physical servers; (ii) CN(s; v) is the v-th CN hosted by the
s-th server; (iii) f_{s,v}(t) is the processing rate of CN(s; v) at slot t; (iv) L_{s,v}(t) is the
workload processed by CN(s; v) at slot t; and (v) f_{s,v}^{max} is the maximum processing rate
of CN(s; v).

6.1 The considered resource management problem

By referring to the proposed container-based virtualized architecture of Fig. 3, let
E_v^{idle}(s) and E_v^{max}(s) be the idle and maximum energies (measured in J) wasted by the
v-th container hosted by the s-th physical server. Hence, the corresponding computing
energy E_{s,v}^{com}(t) (J) consumed by the container during the t-th slot may be modeled
as (see, for example, [35] and references therein):


$$E_{s,v}^{\mathrm{com}}(t) = E_{v}^{\mathrm{idle}}(s)\, u_{-1}\!\left(f_{s,v}(t) - f_{s,v}^{\mathrm{ON}}\right) + \left(\frac{f_{s,v}(t)}{f_{s,v}^{\max}}\right)^{\!w} \left(E_{v}^{\max}(s) - E_{v}^{\mathrm{idle}}(s)\right). \qquad (2)$$

In Eq. (2), we have that: (i) u_{-1}(·) is the unit-step Heaviside function (i.e., u_{-1}(x) = 1
for x ≥ 0, and u_{-1}(x) = 0 otherwise); (ii) f_{s,v}^{ON} (bit/s) is the minimum processing
frequency needed to turn ON the virtual processor in Fig. 2b of the v-th container
hosted by the s-th physical server; typically, f_{s,v}^{ON} ≅ 10^{−10} × f_{s,v}^{max} [46]; and (iii) the
(dimensionless) power exponent w fixes the rate of increase of the energy consumed
by the container for increasing values of the corresponding processing frequency
f_{s,v}(t). Typically, w ≥ 2 [46].
Furthermore, the first (resp., second) term on the right-hand side of Eq. (2) accounts
for the static (resp., dynamic) energy consumed by the considered container; it
may be reduced by performing container consolidation (resp., dynamic scaling of the
processing frequency f_{s,v}(t)).
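A minimal sketch of the computing-energy model of Eq. (2) could read as follows; the function name and interface are our own illustration, with all parameter values left as caller-supplied assumptions:

```python
def computing_energy(f, f_on, f_max, e_idle, e_max, w=2.0):
    """Per-slot computing energy of one container, following Eq. (2).

    f      : current processing frequency f_{s,v}(t) (bit/s)
    f_on   : minimum frequency f^ON needed to keep the container ON (bit/s)
    f_max  : maximum processing frequency f^max (bit/s)
    e_idle : idle energy E^idle (J); e_max : maximum energy E^max (J)
    w      : dimensionless power exponent (typically w >= 2)
    """
    # Heaviside unit step u_{-1}(.): the static energy is paid only when ON
    static = e_idle if f >= f_on else 0.0
    # the dynamic term grows as the w-th power of the normalized frequency
    dynamic = (f / f_max) ** w * (e_max - e_idle)
    return static + dynamic
```

Consistently with Eq. (2), the model returns e_max when the container runs at full speed (f = f_max), and zero energy for a turned-OFF container (f = 0).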
Passing to consider the energy E_{s,v}^{net}(t) (J) consumed by the (s, v)-th TCP/IP con-
nection working at the Middleware layer of the proposed architecture of Fig. 3, several
formal analyses and numerical measurements support the conclusion that, when the
connection operates in the congestion-avoidance state, its energy-versus-TCP-through-
put relationship may be well described by the following power-like formula (see, for
example, [27] and references therein):

$$E_{s,v}^{\mathrm{net}}(t) = E_{s,v}^{\mathrm{setup}}\, u_{-1}\!\left(\mathcal{L}_{s,v}(t) - \mathcal{L}_{s,v}^{\mathrm{ON}}\right) + \Omega_{s,v}^{\mathrm{net}} \left(\frac{\mathcal{L}_{s,v}(t)}{T_S}\right)^{\!\gamma}. \qquad (3)$$

In Eq. (3), we have that: (i) E_{s,v}^{setup} (J) is the energy consumed for the setup of the
(s, v)-th connection; it mainly depends on the actually adopted Fast/Giga/Ten-Giga
Ethernet switching technology; (ii) since L_{s,v}(t) (measured in bit) is the volume of the
traffic conveyed by the (s, v)-th connection at slot t, the ratio L_{s,v}(t)/T_S (measured in
bit/s) is the corresponding conveyed throughput; (iii) L_{s,v}^{ON} ≜ f_{s,v}^{ON} × T_S (measured
in bit) is the minimum traffic that the (s, v)-th connection must convey, in order to be
turned ON; (iv) the (dimensionless) exponent γ fixes the rate of increase of the
energy consumed by the (s, v)-th connection for increasing values of the sustained
throughput; typically, γ ≅ 1.2–1.3 [35]; and (v) Ω_{s,v}^{net} ((J)/(bit/s)^γ) is the energy
profile of the considered connection that, in turn, depends on the actually adopted
Ethernet-type switching technology [46].
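The networking-energy model of Eq. (3) admits an analogous sketch; again, the function interface is our own illustration and all parameter values are caller-supplied assumptions:

```python
def network_energy(load_bits, load_on, e_setup, omega_net, t_slot, gamma=1.25):
    """Per-slot energy of one TCP/IP connection, following Eq. (3).

    load_bits : traffic L_{s,v}(t) conveyed during the slot (bit)
    load_on   : minimum traffic L^ON that turns the connection ON (bit)
    e_setup   : setup energy E^setup (J)
    omega_net : energy profile Omega^net ((J)/(bit/s)^gamma)
    t_slot    : slot duration T_S (s)
    gamma     : dimensionless exponent (typically about 1.2-1.3)
    """
    # Heaviside unit step: the setup energy is paid only when the
    # connection conveys at least the turn-ON traffic
    setup = e_setup if load_bits >= load_on else 0.0
    # the dynamic term grows with the gamma-th power of the throughput
    return setup + omega_net * (load_bits / t_slot) ** gamma
```

As in Eq. (3), an idle connection (zero conveyed traffic) consumes no energy, while the dynamic term scales superlinearly with the sustained throughput L_{s,v}(t)/T_S.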
Let us note that the introduced models of the computing and networking energies
in Eqs. (2) and (3) are fully parametric in terms of E_v^{idle}, E_v^{max}, f_{s,v}^{max},
f_{s,v}^{ON} and E_{s,v}^{setup}, Ω_{s,v}^{net}, respectively. In this way, the energy consumption of the proposed
virtualized Fog-of-IoT architecture is independent of the particular hardware used and
can be scaled accordingly.
Before proceeding, we also observe that switching from the processing frequency
f_{s,v}(t − 1) (i.e., the processing frequency of the (s, v)-th container at slot (t − 1)) to
the processing frequency f_{s,v}(t) (i.e., the corresponding processing frequency at slot
t) induces an energy overhead E_{s,v}^{sw}(t) (J). Although its actual value depends on the
adopted DVFS technique, it may be typically modeled as in [27]:


$$E_{s,v}^{\mathrm{sw}}(t) = k_e\, \left| f_{s,v}(t) - f_{s,v}(t-1) \right|^{\beta}, \qquad (4)$$

where: (i) k_e ((J)/(Hz)^β) is the energy cost induced by a unit-size frequency switching;
typically, it is limited up to a few hundreds of μJ per MHz [27]; and (ii) β is a
dimensionless power exponent, with β ≅ 2.
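The reconfiguration cost of Eq. (4) is a one-liner; the interface below is our own illustrative sketch, with k_e deliberately left as a hardware-dependent input rather than a fixed default:

```python
def switching_energy(f_now, f_prev, k_e, beta=2.0):
    """Frequency-switching overhead of Eq. (4): k_e * |f(t) - f(t-1)|^beta.

    f_now, f_prev : processing frequencies at slots t and t-1 (Hz)
    k_e           : per-unit switching cost ((J)/(Hz)^beta), DVFS dependent
    beta          : dimensionless exponent, typically about 2
    """
    return k_e * abs(f_now - f_prev) ** beta
```

Note that keeping the frequency unchanged across consecutive slots incurs zero switching energy, which is why the resource manager has an incentive to avoid unnecessary reconfigurations.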
Overall, by summing the energy contributions in Eqs. (2), (3) and (4) over the full
set of the available containers and physical servers, we obtain the total energy E_T(t)
(J) consumed by the proposed architecture of Fig. 3 at slot t, which is formally defined
as:

$$E_T(t) \triangleq \sum_{s} \sum_{v} \left( E_{s,v}^{\mathrm{com}}(t) + E_{s,v}^{\mathrm{net}}(t) + E_{s,v}^{\mathrm{sw}}(t) \right). \qquad (5)$$

In order to formally introduce the afforded optimization problem, let L_T(t) be the
overall workload (measured in bit) that is drained from the input queue of Fig. 3 at
the beginning of slot t (i.e., the total workload to be processed by the set of avail-
able containers of Fig. 3 during the t-th time slot). Hence, the constrained resource
management problem to be afforded at slot t is stated as follows:

$$\min_{\{\mathcal{L}_{s,v}(t),\, f_{s,v}(t)\}} E_T(t), \qquad (6.1)$$

subject to:

$$\mathcal{L}_T(t) - \sum_{s} \sum_{v} \mathcal{L}_{s,v}(t) = 0, \qquad (6.2)$$
$$f_{s,v}(t) - f_{s,v}^{\max} \le 0, \quad \forall s, v, \qquad (6.3)$$
$$\mathcal{L}_{s,v}(t) - T_S\, f_{s,v}(t) \le 0, \quad \forall s, v, \qquad (6.4)$$
$$f_{s,v}(t) \ge 0, \quad \mathcal{L}_{s,v}(t) \ge 0, \quad \forall s, v. \qquad (6.5)$$

Before proceeding, three main remarks are in order about the formulation of the
considered optimization problem. First, the constraint in Eq. (6.2) guarantees that all
the input workload L_T(t) is actually processed by the containers of Fig. 3. Second,
Eq. (6.3) limits the current processing frequency f_{s,v}(t) of the (s, v)-th container up
to f_{s,v}^{max}. Third, Eq. (6.4) forces the processing frequency f_{s,v}(t) to be high enough
to allow the container to process the assigned workload L_{s,v}(t) within a (single) time
slot T_S. Hence, since the delays induced by the input and output buffers of Fig. 3 are
limited up to two slot times by design, we conclude that the total per-task queue-plus-
networking-plus-computing delay introduced by the proposed virtualized platform
of Fig. 3 is limited in a hard (i.e., deterministic) way up to 3 × T_S seconds. This
confirms, in turn, that the proposed platform is capable of supporting real-time streaming
applications.
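For illustration, the feasibility side of problem (6) may be sketched as a simple checker over a candidate allocation; the dictionary-based data layout, keyed by the container index (s, v), is our own assumption:

```python
def is_feasible(loads, freqs, total_load, f_max, t_slot, tol=1e-9):
    """Check constraints (6.2)-(6.5) for candidate per-container
    workloads {L_{s,v}(t)} and processing frequencies {f_{s,v}(t)}.

    loads, freqs, f_max : dicts keyed by the container index (s, v)
    total_load          : overall input workload L_T(t) (bit)
    t_slot              : slot duration T_S (s)
    """
    # (6.2): all of the input workload must be allocated
    if abs(total_load - sum(loads.values())) > tol:
        return False
    for sv in loads:
        if freqs[sv] < 0.0 or loads[sv] < 0.0:        # (6.5) non-negativity
            return False
        if freqs[sv] > f_max[sv]:                      # (6.3) speed cap
            return False
        if loads[sv] > t_slot * freqs[sv] + tol:       # (6.4) per-slot deadline
            return False
    return True
```

Constraint (6.4) is the one that enforces the hard per-slot deadline: any workload assignment that a container cannot finish within T_S is rejected outright.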

7 The proposed PABP-type resource management heuristic

An examination of the expressions in Eqs. (2), (3) and (4) points out that: (i) due to
the presence of the power terms, the afforded optimization problem in (6) is nonlinear;


and, most importantly, (ii) due to the presence of the step function, the problem is
nonconvex and its objective function in (6.1) does not admit formal derivatives. As
a consequence, in principle, we cannot resort to gradient-like Karush–Kuhn–Tucker
(KKT)-based approaches in order to solve the considered problem. Hence, motivated
by these considerations, in the sequel, we propose a low-complexity solving heuristic
that relies on the key observation that the problem in Eq. (6) may be re-cast in the
form of a penalty-aware bin packing optimization problem.
According to this observation, at the beginning of each time slot and on the basis of
the overall workload LT (t) to be actually processed, we compute how many containers
must be turned ON at the t-th slot. After that, we consider servers as bins and contain-
ers as packs that must be served based on their energy consumptions and frequency
limitations. This process continues in an iterative way. Specifically, at each iteration, we
give a penalty or a reward to each server, based on its energy characteristics (e.g., idle
energy, maximum energy and maximum frequency). The penalty-and-reward policy
is applied in order to minimize the overall energy consumption. A penalty means
that the considered server is banned from being used again during some of the next
iterations. When the freezing period of the penalized server expires, it is added back to the
list of available servers for the remaining iterations. Note that the server s can host
CNs only when it does not already host the maximum number CN^max(s) of containers.
Otherwise, the server s is deleted from the list of the available processing servers. The
maximum number of iterations is limited up to M_CN (i.e., the number of available
containers).
In a nutshell, at each iteration, we are looking for the best servers to be turned ON.
At the end, we calculate the overall energy consumption for the current slot time. A
detailed pseudo-code of the proposed resource management algorithm is reported in
Algorithm 1, with Ts being the set of the considered time slots.
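For illustration only, the penalty-and-reward selection loop described above may be sketched as follows; the even per-container workload split, the data layout and the freezing period κ are simplifying assumptions of ours and do not reproduce Algorithm 1 verbatim:

```python
def pabp_assign(total_load, servers, t_slot, m_cn, kappa=3):
    """Sketch of the penalty-aware bin-packing (PABP) selection loop.

    servers : list of dicts with keys 'e_idle', 'f_max', 'cn_max'
              (idle energy, per-CN maximum frequency, per-server CN cap)
    m_cn    : number of available containers (maximum iterations)
    kappa   : freezing period (in iterations) given to a chosen server
    """
    penalty = [0] * len(servers)       # per-server penalty counters
    hosted = [0] * len(servers)        # CNs instantiated on each server
    remaining = total_load
    placements = []                    # (server index, load, frequency)
    per_cn = total_load / m_cn         # even per-container share (assumption)
    for _ in range(m_cn):
        if remaining <= 0.0:
            break
        # candidates: zero penalty, spare CN slots, enough per-slot speed
        cand = [s for s in range(len(servers))
                if penalty[s] == 0
                and hosted[s] < servers[s]['cn_max']
                and per_cn <= servers[s]['f_max'] * t_slot]
        if not cand:
            # freezing periods expire as iterations elapse
            penalty = [max(p - 1, 0) for p in penalty]
            continue
        # reward rule: pick the server with the minimum idle energy
        best = min(cand, key=lambda s: servers[s]['e_idle'])
        placements.append((best, per_cn, per_cn / t_slot))
        hosted[best] += 1
        remaining -= per_cn
        # penalty rule: ban the chosen server for the next kappa iterations
        penalty = [max(p - 1, 0) for p in penalty]
        penalty[best] = kappa
    return placements
```

With the penalty constant set to one, the sketch alternates between the two cheapest admissible servers, which illustrates how the freezing mechanism spreads containers across bins instead of greedily saturating the single lowest-idle-energy server.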

7.1 Implementation aspects and implementation complexity

In this subsection, we address some aspects related to the adaptive and distributed
implementation of the proposed Fog architecture of Fig. 3 and the related PABP
resource management heuristic of Algorithm 1. In so doing, we also point out some
possible generalizations of the considered Fog application scenario.

7.1.1 Managing multiple applications and multi-tier Fog data centers

The proposed resource manager of Algorithm 1 can also be applied to the more general
cases of multi-application and multi-tier Fog data centers.
Specifically, in multi-application data centers, NAP ≥ 2 applications generate
parallel input workloads that are processed by a bank of networked computing plat-
forms [47]. Since each per-application computing platform retains the architecture
of Fig. 3 and manages own dedicated networking-plus-computing resources, the
overall resource manager reduces to NAP parallel sub-managers that operate on a
per-application basis and still apply the proposed heuristic of Algorithm 1.


Algorithm 1 Pseudo-code of the proposed PABP-type heuristic.

INPUT: S_SE, S_pun, M_CN, L_T(t), f_{s,v}^{max}, E_v^{idle}(s), E_v^{max}(s), T_s, T_S, penalty_s, flag, κ.
OUTPUT: E_T(t), {L_{s,v}(t)}, {f_{s,v}(t)}

1: while t ∈ T_s do
2:   S_pun = {}
3:   Check feasibility of the input times
4:   Check feasibility of the input
5:   penalty_s = 0, s ∈ S_SE
6:   flag(t) = 0;
7:   for itr = 1 : M_CN do
8:     if L_T(t) ≤ f_{s,v}^{max} · T_S then
9:       f_{s,v}(t) = L_T(t)/T_S;
         % Find the set of servers with zero penalty
10:      if (Ŝ(t) ← find(penalty_s = 0), ∀ s ∈ S_SE) then
11:        S_min(t) ← the server with the minimum E_v^{idle}(s) in Ŝ(t);
12:        if (flag(t) = 1) then
13:          penalty_s = penalty_s − 1, ∀ s ∈ S_pun
14:        end if
15:        CN(S_min(t); v) = CN(S_min(t); v + 1);
           % Add a new CN to S_min(t) and add S_min(t) to S_pun
16:        penalty_{S_min(t)} = κ;
17:        S_pun ← S_min(t)
18:        flag(t) = 1;
19:      end if
20:    end if
21:  end for
22:  Calculate E_T(t) as in (5);
23:  return E_T(t), {L_{s,v}(t)}, {f_{s,v}(t)};
24: end while

A similar conclusion also holds for multi-tier data centers for emerging Web 2.0
applications [47]. They are composed of a cascade of N_AP ≥ 2 networked computing
platforms that sequentially process the input workload in a pipelined way. Hence, after
indicating by L_T^{(i+1)}(t) the workload to be processed by the (i + 1)-th tier at slot t,
each tier may be controlled by a local resource manager that still applies the proposed
PABP heuristic.

7.1.2 The container engine

The main task of the Container Engine of Fig. 2 is to map the demands for the
per-connection throughputs and per-CN processing frequencies formulated at the Mid-
dleware layer by the proposed resource manager into adequate channel bandwidths
and CPU cycles at the underlying network and server layers. In principle, these map-
pings may be performed in real-time by equipping the Container Engine of Fig. 2 with
the so-called mClock and SecondNet mappers of [48,49], respectively. Specifically,
according to Table 1 of [48], the mClock mapper is capable of guaranteeing CPU cycles
on a per-CN basis by adaptively managing the computing power of the underlying
DVFS-enabled physical servers (see the bottom part of Fig. 3). Likewise, the Sec-
ondNet network mapper provides Ethernet-type contention-free links atop any set of


TCP-based (possibly, multi-hop) end-to-end connections by resorting to a suitable
port-switching-based source routing [49]. In so doing, it supports the actual implemen-
tation of the virtual switch of the proposed architecture of Fig. 3.

7.1.3 Dynamic profiling of the intra-Fog computing–networking energy consumptions

The per-container maximum and idle energies in Eq. (3) may be dynamically profiled
on a per-slot basis by equipping the Container Engine of Fig. 2 with the Joulemeter
tool in [46]. It is a software tool that provides the energy-metering functionalities that
currently exist in hardware for physical servers. For this purpose, Joulemeter uses
Middleware layer-observable hardware power states, in order to measure the per-
container energy consumption of the used hardware components (see Section 5 of
[46] for a detailed description). Interestingly, the field trials reported in [46] confirm
that, at least when the CNs hosted by each physical server are homogeneous, the per-
slot maximum and idle energies wasted by a running container are proportional to the
maximum and idle powers consumed by the hosting physical server, that is [see Eq.
(3)],

$$E_{s,v}^{\mathrm{idle}} = T_S\, P_{SE}^{\mathrm{idle}} / M_{CN(s,v)}, \qquad (7)$$
$$E_{s,v}^{\max} = T_S\, P_{SE}^{\max} / M_{CN(s,v)}, \qquad (8)$$

where: (i) M_{CN(s,v)} is the (possibly, time-varying) number of CNs running on the same
physical server that hosts the considered (s, v)-th container (see Fig. 3); and (ii) P_{SE}^{idle}
(W) (resp., P_{SE}^{max} (W)) is the idle (resp., maximum) power consumed by the physical
server. When M_{CN(s,v)} vanishes, the host physical server is turned OFF and the
corresponding energies E_{s,v}^{idle} and E_{s,v}^{max} in Eqs. (7) and (8) also vanish.
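A direct transcription of Eqs. (7) and (8), including the turn-OFF convention just stated, may look like the following sketch (interface of our own choosing):

```python
def per_container_energies(p_idle, p_max, t_slot, m_cn):
    """Eqs. (7)-(8): split the host server's idle/maximum power evenly
    across the m_cn homogeneous containers it currently runs.

    p_idle, p_max : idle/maximum server powers P^idle, P^max (W)
    t_slot        : slot duration T_S (s)
    m_cn          : number of CNs hosted by the server
    """
    if m_cn == 0:
        return 0.0, 0.0   # server turned OFF: both energies vanish
    return t_slot * p_idle / m_cn, t_slot * p_max / m_cn
```

Doubling the number of hosted containers halves each per-container energy share, which reflects the homogeneity assumption underlying the Joulemeter-based profiling of [46].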
Passing to consider the online profiling of the setup and dynamic energies in Eq. (4)
of the (s, v)-th TCP/IP intra-Fog connection, we observe that, in emerging broadband
data centers, each physical network interface card (NIC) typically supports a single
TCP/IP connection, in order to reduce the resulting average round-trip time and, then,
save energy. Hence, after indicating by P_{SW}^{setup} (W) (resp., P_{SW}^{dyn} (W)) the setup (resp.,
dynamic) power consumed by each NIC hosted by a physical switch, the resulting
setup and dynamic energies of the (s, v)-th TCP/IP connection may be profiled online
by summing the setup and dynamic energies consumed by the NICs along the end-
to-end route (i.e., path) from the source end to the destination one (see [46] for the
implementation details).

7.1.4 Managing discrete processing frequencies

Dynamic scaling of the per-container processing frequencies relies on DVFS-enabled
physical servers that, in general, are equipped with a finite set S ≜
{f̂(0), f̂(1), …, f̂(Q−1)} of Q ≥ 2 discrete CPU processing speeds. Hence, in order
to deal with continuous and discrete frequency settings under a unified framework, the
proposed heuristic of Algorithm 1 may be generalized by including a post-processing


time-sharing stage. For this purpose, let $\hat{f}^{(s)}$ and $\hat{f}^{(s+1)}$ be the discrete allowed frequencies that surround the continuous processing frequency $f^*$ generated by the proposed heuristic, that is, $\hat{f}^{(s)} \leq f^* \leq \hat{f}^{(s+1)}$. Hence, the considered container runs at $\hat{f}^{(s)}$ (resp., $\hat{f}^{(s+1)}$) during a fraction $\zeta$ (resp., $(1 - \zeta)$) of the slot time. In order to leave unchanged the resulting per-slot energy consumption, $\zeta$ is set as in: $\zeta = \left[ (\hat{f}^{(s+1)})^{w} - (f^*)^{w} \right] / \left[ (\hat{f}^{(s+1)})^{w} - (\hat{f}^{(s)})^{w} \right]$, where $w$ is the exponent of the adopted frequency-versus-dynamic energy profile. In practice, the frequency hopping mechanism required by the described time-sharing operation may be implemented by equipping the physical servers with commodity delta modulators [35].
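The time-sharing split above can be checked numerically: mixing the two discrete speeds with the fraction computed below reproduces the dynamic energy of the continuous frequency exactly. The exponent value w = 3 and the sample frequencies are illustrative assumptions:

```python
# Time-sharing stage for discrete DVFS frequencies: run at f_lo for a
# fraction zeta of the slot and at f_hi for (1 - zeta), preserving the
# per-slot dynamic energy of the continuous frequency f_star. The exponent
# w of the dynamic-energy profile is set to 3 here only for illustration.

def time_sharing_fraction(f_star: float, f_lo: float, f_hi: float, w: float = 3.0) -> float:
    assert f_lo <= f_star <= f_hi
    return ((f_hi ** w) - (f_star ** w)) / ((f_hi ** w) - (f_lo ** w))

# Example: f_star = 2.0 GHz between the discrete speeds 1.867 and 2.133 GHz
zeta = time_sharing_fraction(2.0, 1.867, 2.133)

# Energy check: the time-shared mix reproduces (f_star)^w exactly
mix = zeta * 1.867 ** 3 + (1 - zeta) * 2.133 ** 3
print(zeta, mix, 2.0 ** 3)
```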

7.1.5 Implementation complexity of the proposed heuristic

Regarding the implementation complexity of the resource management heuristic of Algorithm 1, two main remarks are in order. First, in order to track the (possibly, unpredictable) time fluctuations of the input stream to be processed, the heuristic should run at the beginning of each time slot. Hence, its per-slot execution time $T_I$ (s) must be lower than the corresponding slot time $T_S$. In the carried out tests, we have numerically verified that $T_I \leq (T_S/300)$ suffices. Second, since each container updates the size of the task to be processed and the corresponding processing frequency, the total number of variables to be updated scales up linearly with the number $M_{CN}$ of the running containers. Therefore, the total per-slot implementation complexity of the proposed resource management algorithm scales as $\mathcal{O}(M_{CN})$. Furthermore, since the proposed algorithm allows each container to update its current task size and processing frequency through local measurements (see Algorithm 1), it may be implemented in a distributed way. As a consequence, the resulting per-container asymptotic implementation complexity of the proposed algorithm is independent of the number $M_{CN}$ of the running containers and is of the order of $\mathcal{O}(1)$.
Overall, from the above remarks, we conclude that the implementation of the proposed resource manager at the Middleware layer of each FN is adaptive and distributed, while the resulting per-slot implementation complexity does not depend on the size of the considered Fog data center. These considerations make appealing, indeed, the actual implementation of the proposed heuristic as the resource manager of the available virtualized resources.
In the following Sects. 8 and 9, we numerically test and compare the performance of the considered CN-based virtualized Fog-of-IoT architecture of Fig. 2, equipped with the proposed resource manager of Algorithm 1. In this regard, in order to put the reported numerical results under the right perspective, we shortly anticipate the following three points. First, Sect. 8 focuses on a static wireless scenario, in which the IoT devices are fixed and the wireless access links are impaired by static fading. Under this scenario, the implementation complexity-versus-energy performance trade-off of the proposed resource manager of Algorithm 1 is compared with the corresponding ones of the resource management algorithms recently reported in [50,51]. Second, Sect. 9 considers a mobile application scenario, in which the IoT devices are mobile and the corresponding virtualized containers follow the spatial trajectories of the associated devices by performing suitable inter-Fog migrations. The energy-versus-delay performances of the Fog platform of Fig. 1 equipped with


the resource manager of Algorithm 1 are compared with the corresponding ones of a
benchmark edge computing platform that utilizes only D2D links for building up inter-
device mobile communication paths. Third, in all tested cases, intra-Fog, inter-Fog,
Thing-to-Fog and D2D network links are built up through suitable TCP/IP end-to-end
connections.

8 A first case study: a static application scenario

The performances of the proposed architecture of Fig. 3 and the resource manage-
ment heuristic of Algorithm 1 have been numerically tested by utilizing the recently
proposed iFogSim toolkit as SW simulation platform [7]. Under the so-called edge-ward placement mode, the iFogSim toolkit allows the simulation of FNs and IoT devices, by tuning their computing, communication and storage capabilities; setting the number of computing cores and their CPU speed-versus-computing power profiles; the bandwidths of the NICs and their corresponding transmission rate-versus-communication power profiles; as well as the available RAM for task storage.

8.1 The simulated static application scenario

In the carried out tests, the IoT devices are assumed placed over a spatially limited
intranet. We consider a physical topology with four heterogeneous FNs (see Fig. 4).

Fig. 4 An illustrative screenshot of the run iFogSim simulator under the static application scenario of Sect. 8
Table 3 Main parameters of the simulated Fog nodes under the static application scenario of Sect. 8

Device type CPU (GHz) RAM (GB) Latency (ms)

Cloud VM 2.67 4 100


FN 2.67 4 50
WiFi access point 2.67 4 30

Table 4 Energy (J) profile of the simulated FNs at various computing frequencies (GHz) and workloads (MIPS) under the static application scenario of Sect. 8

Frequency (GHz)        1.60     1.867    2.133    2.40     2.67
% f_{s,v}^{max}        59.93    69.93    79.89    89.89    100
iFogSim (MIPS)         1797     2097     2396     2696     3000
E_{s,v}^{idle} (J)     82.70    82.85    82.95    83.10    83.25
E_{s,v}^{max} (J)      88.77    92.00    95.50    99.45    103.00

Table 3 details the main simulated parameters of the devices used in the iFogSim framework of Fig. 4. In order to model the CPU consumption of the FNs, we refer to the Intel® Core™ 2 CPU Q6700 with 2.67 GHz frequency rate and 4 GB of RAM memory. Under the iFogSim simulator, the CPU frequency is expressed in MIPS and it is evaluated proportionally to the FN frequency values. The corresponding simulated values of $E_{s,v}^{idle}$ and $E_{s,v}^{max}$ are reported in Table 4. Further, the iFogSim MIPS is calculated according to the following formula [7]:

$\text{iFogSim (MIPS)} \triangleq \dfrac{\max(\text{MIPS}) \times \left( \% f_{s,v}^{max} \right)}{100}.$   (9)
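Eq. (9) reproduces the MIPS row of Table 4 from its percentage row. Truncation of the result to an integer MIPS value is an assumption on our part, but it is the rounding that matches all five tabulated entries:

```python
# Eq. (9): iFogSim MIPS at each DVFS frequency, reproducing Table 4.
# max(MIPS) = 3000 is reached at the top 2.67 GHz speed; truncation to
# integer MIPS is an assumption that matches the tabulated values.

MAX_MIPS = 3000
pct_f_max = [59.93, 69.93, 79.89, 89.89, 100.0]   # % f_{s,v}^max, Table 4

mips = [int(MAX_MIPS * p / 100) for p in pct_f_max]
print(mips)   # [1797, 2097, 2396, 2696, 3000], as in Table 4
```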
In the carried out simulations, we have considered three main settings of containers and Fog nodes, namely:
– $CN_1$: 300 MIPS, $FN_1$: 450 Millions of Instructions;
– $CN_2$: 150 MIPS, $FN_2$: 320 Millions of Instructions;
– $CN_3$: 90 MIPS, $FN_3$: 270 Millions of Instructions.
In order to properly characterize each simulated case, we need to specify the CPU load-versus-execution time relationship that, in turn, may be expressed as in:

$\text{CPU load } (\%) \triangleq \left( \dfrac{CN^{max}}{\text{iFogSim}^{max}} \right) \times 100,$   (10)

where $\text{iFogSim}^{max}$ = 3000 (MIPS) is the maximum rate of Table 4, and

$\text{Duration time (s)} \triangleq \dfrac{FN^{max}}{CN^{max}}.$   (11)

According to the considered test cases, we define 3 CNs and 3 FNs, so that a total of 9 cases is to be simulated, i.e., $Case_1 \triangleq [CN_1, FN_1]$, $Case_2 \triangleq [CN_1, FN_2]$, ..., $Case_9 \triangleq [CN_3, FN_3]$. CPU loads and duration times are reported in Table 5 under each simulated setup.
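The nine cases can be generated directly from Eqs. (10) and (11); the loop below reproduces the CPU loads and execution times of Table 5 (the 3000-MIPS denominator is the maximum iFogSim rate of Table 4):

```python
# Eqs. (10)-(11): CPU loads and execution times for the nine [CN_i, FN_j]
# cases of Table 5.

cn_mips = {1: 300, 2: 150, 3: 90}   # container rates (MIPS)
fn_mi = {1: 450, 2: 320, 3: 270}    # per-FN task sizes (Millions of Instructions)
MAX_MIPS = 3000                     # maximum iFogSim rate (Table 4)

for i in (1, 2, 3):
    load = cn_mips[i] / MAX_MIPS * 100            # Eq. (10)
    for j in (1, 2, 3):
        duration = fn_mi[j] / cn_mips[i]          # Eq. (11)
        print(f"[CN{i}, FN{j}]: load = {load:.0f}%, time = {duration:.2f} s")
```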


Table 5 Numerically evaluated CPU workloads-versus-execution times under the static application sce-
nario of Sect. 8

Case 1 2 3 4 5 6 7 8 9

CPU load (%) 10 10 10 5 5 5 3 3 3


Execution time (s) 1.5 1.07 0.9 3 2.13 1.8 5 3.56 3

Table 6 Asymptotic implementation complexities of the benchmark resource managers (RMs) considered
for the numerical tests of Sect. 8

Proposed PABP RM RM in MBFD [8] RM in MDC [9]

O(1) O(nm) O(nm)

8.2 Competing resource management algorithms

Numerical performance comparisons have been carried out against the benchmark resource managers recently proposed in MBFD [8] and MDC [9]. The goal is to appreciate the actual effectiveness of the proposed resource management heuristic. In this regard, we shortly point out that:
– The resource manager in MBFD [8] exhibits a runtime cost of $\mathcal{O}(n \log n) + \mathcal{O}(m \log m)$ for sorting the vmList and serverList, and $\mathcal{O}(nm)$ for the VM allocation, where n and m are the numbers of VMs and servers, respectively. Hence, the overall asymptotic complexity scales as $\mathcal{O}(nm)$;
– The resource manager in MDC [9] sorts the VMs according to their sizes. The corresponding sorting complexity is $\mathcal{O}(n \log n) + \mathcal{O}(m \log m)$, while the VM allocation costs $\mathcal{O}(nm)$. Hence, the overall asymptotic complexity also scales as $\mathcal{O}(nm)$.
In order to put the carried out performance comparisons under the right perspective, Table 6 reports the per-container asymptotic implementation complexities of the considered benchmark resource managers and the proposed one.

8.3 Performance results and comparisons under the static application scenario

This subsection presents the tested energy performance of the proposed resource manager under two sets of real-world workload traces, namely the workloads extracted from World Cup 98 [47] and Microsoft RAID [52], respectively. In this regard, we point out that a main reason for considering these two traces is that they exhibit quite complementary features in terms of peak-to-mean ratio (PMR) and cross-covariance coefficient (CCC). In fact, the (numerically evaluated) PMR of the World Cup 98 trace (resp., Microsoft RAID trace) is 1.5 (resp., 2.5), while the corresponding CCC value is 0.97 (resp., 0.25). Therefore, we may conclude that:
(i) the RAID workload trace presents a number of large spikes and, since it exhibits a white-noise-like time behavior, it well characterizes workloads generated by Big Data-like applications; and (ii) the World Cup 98 trace exhibits a smoother time behavior and, then, it is better suited to emulate the self-similar (i.e., highly time-correlated) behavior of workload flows generated by Web-like applications.

Fig. 5 Per-Fog average total energy consumption under the Microsoft RAID workload trace in [52]. The static application scenario of Sect. 8 is considered
In the first test scenario, we run the proposed resource manager and evaluate the resulting average total consumed energy $E_T$ under the Microsoft input arrivals for various intra-Fog Ethernet-type switching technologies (see Fig. 5). An examination of Fig. 5 points out that: (i) $E_T$ increases for increasing values of $\Omega_{s,v}^{net}$ (i.e., the network energy cost); and (ii) $E_T$ increases for increasing numbers of available containers, with an increment rate that depends on the adopted intra-Fog Ethernet technology.
In the second set of tests, we compare the energy performance of the proposed
heuristic against the corresponding ones of the considered benchmark resource man-
agers. The main goal is to numerically evaluate and compare the reductions in the
overall average energy consumptions under various network setups. The obtained
results are drawn in Fig. 6. It reports the instantaneous energy consumption of the pro-
posed resource manager and the benchmark ones under the workload traces reported
in [47,52]. It is interesting to note that the numerically evaluated average energy consumption of the proposed heuristic is about 20.36% and 33.84% lower than the corresponding ones of the benchmark algorithms in MBFD [8] and MDC [9], respectively.
A last set of simulations aims at comparing the average energy consumption of our solution with respect to MBFD [8] and MDC [9] under different configurations of containers and Fog nodes. Figure 7 reports the obtained energy consumptions averaged over the corresponding average numbers of actually turned ON containers. Interestingly, since $E_T(t)$ in (6) is the minimum requested energy when up to $M_{CN}$ containers may be instantiated, at fixed $E_{s,v}^{com}$, $E_T$ in Fig. 7 decreases for increasing $M_{CN}$ and quickly approaches a minimum value that does not change when $M_{CN}$ is further increased (see the semi-flat segments of the two lowermost plots of Fig. 7b).


Fig. 6 Instantaneous computing-plus-communication energy consumption of the proposed resource manager and the benchmark ones in MBFD [8] and MDC [9] under the static application scenario of Sect. 8. a Instantaneous energy behaviors at (5 CNs, 10 FNs) under the World Cup 98 workload in [47]. b Instantaneous energy behaviors at (5 CNs, 10 FNs) under the Microsoft RAID workload in [52]

Fig. 7 Average computing-plus-communication energy consumption under Fast Ethernet (−F: $\Omega_{s,v}^{net} = 1.8 \times 10^{-3}$) and Giga Ethernet (−G: $\Omega_{s,v}^{net} = 2.5 \times 10^{-2}$). The static application scenario of Sect. 8 is considered

9 A second case study: a mobile application scenario

A sketch of the simulated mobile application scenario is reported in Fig. 8. According to Fig. 8, it is composed of four main building blocks, namely:
i. the IoT physical layer, where a number of IoT devices move over multiple spatial clusters;
ii. the wireless mobile access network that sustains Fog-to-Thing (F2T) and Thing-to-Fog (T2F) communications;
iii. a set of interconnected static Fog nodes that act as virtualized cluster heads and provide access points for the currently served IoT devices; and
iv. a wireless static broadband backbone that interconnects the FNs.
Shortly, each FN is equipped with a (limited) number of physical servers that are interconnected by wired (typically, Ethernet-type) intra-Fog switches.

Fig. 8 The simulated mobile application scenario

Each FN covers a spatial area of radius $R_d$ (m) and acts as a wireless access point for the currently
served IoT devices. Each device is associated with a container (e.g., a virtual clone) that is hosted by the serving FN. The container synchronizes with the corresponding device and works on its behalf, in order to provide device augmentation. T2F and F2T communications are guaranteed by TCP/IP connections that run atop IEEE 802.11b up/down single-hop links (see the green rays of Fig. 8). Further, a wireless broadband backbone interconnects all FNs (see the brown rays of Fig. 8), in order to: (i) assure inter-container communication among different FNs; and (ii) allow container migration, e.g., when a device moves from a source cluster to a different destination cluster, the associated container follows it by performing a migration over the inter-Fog backbone. Finally, in Fig. 8, all containers are logically connected by end-to-end TCP/IP connections. For this purpose, intra-Fog wired Ethernet links (resp., inter-Fog backbone-supported wireless links) are used to sustain transport-layer connections among containers hosted by the same FN (resp., by different FNs).
From the above, it follows that the orchestration of the resulting Fog-of-IoT technological platform of Fig. 8 requires the implementation of the following main service primitives: (i) Thing-to-Fog intra-cluster access; (ii) Fog-to-Thing data broadcast; (iii) intra-Fog networked computing through virtualized containers; and (iv) mobility-induced inter-Fog container migration. For this purpose, under the umbrella of the GAUChO project,1 we have designed, checked and implemented in software a suitable Fog-of-IoT protocol stack that provides the aforementioned service primitives. This stack integrates the functionalities natively provided by the (aforementioned) iFogSim tool with some specific services tailored “ad hoc” for supporting the instantiation, synchronization and migration of the virtualized containers hosted by the FNs of Fig. 8.

1 https://www.gaucho.unifi.it/.


Table 7 Main simulated parameters under the application scenario of Sect. 9

Default setting of the main simulated parameters

T_S = 1.2 (s)                        P_SE^max = 250 (W)                 P_SE^idle = 130 (W)
SZ_CN = 280 (Mb)                     R_WDD^max = 300 (Mb/s)             R_WSS^max = 8.5 (Mb/s)
O_r = 0.25                           RTT_WDD = 0.6 (ms)                 RTT_WSS = 12 (ms)
f_CN^max = 12 (Mb/s)                 Λ_WDD^net = 5 (mW/Mb)              Λ_WSS^net = 15 (mW/Mb)
f = 3 (Mb/s) (average value)         P_net,WDD^setup = 2 (mW)           P_net,WSS^setup = 560 (mW)
v^max = 50 (km/h)                    α_WDD = 1.3                        α_WSS = 2.4
v = {15, 25, 35} (km/h)              N_CLU = 8                          N_CAR = 160
β = {0.4, 0.5, 0.6}                  R_d = 350 (m)

The subscripts WDD and WSS denote wired and wireless TCP/IP connections, respectively

9.1 The simulated mobile Fog-of-IoT scenario

In this subsection, we detail the simulated scenario of Fig. 8 and the main functional
features implemented by the developed GAUChO simulation tool. Table 7 presents
the default values of the main simulated parameters.

Simulated Fog nodes and container power profile In Fig. 8, a Fog node is placed at the center of each cluster. It comprises $N_{SER}$ = 5 homogeneous quad-core Dell PowerEdge-type physical servers, which are equipped with a 3.06 GHz Intel Xeon CPU and 4 GB of RAM. The per-server maximum and idle power consumptions are $P_{SE}^{max}$ = 250 (W) and $P_{SE}^{idle}$ = 130 (W), respectively. A commodity wired Giga Ethernet switch connects the servers, and each server hosts up to $M_{CN}$ containers. Each container is implemented as a Docker container of size $SZ_{CN}$ = 280 (Mb). It acts as a virtual computing server and, according to Eqs. (3), (7) and (8), its (instantaneous) computing power $P_{CN}$ (W) is formally given by:

$P_{CN} = \dfrac{P_{SE}^{idle}}{M_{CN}} + \dfrac{P_{SE}^{max} - P_{SE}^{idle}}{M_{CN}} \left( \dfrac{f}{f_{CN}^{max}} \right)^{3},$   (12)

where $f$ (bit/s) (resp., $f_{CN}^{max}$ (bit/s)) is the instantaneous (resp., maximum) processing frequency of the container. In practical implementations of virtualized containers, both the processing frequencies $f$ and $f_{CN}^{max}$ in Eq. (12) are dictated at runtime by the Container Engine of Fig. 2a on the basis of the actually implemented resource management policy.

Power profiles of the simulated TCP/IP connections The simulated Thing-to-Fog,


Fog-to-Thing and Fog-to-Fog wireless links of Fig. 8 are affected by frequency-flat
block-type Rice fading. The Ricean factors of the Thing-to-Fog and Fog-to-Thing
links (resp., Fog-to-Fog links) are set to 15 (dB) (resp., 50 (dB)), while the corre-
sponding average signal-to-noise ratios (SNRs) are set to 18 (dB) (resp., 35 (dB)).
Furthermore, all the resulting wireless end-to-end connections of Fig. 1 implement
the TCP NewReno protocol and are sustained by commodity IEEE 802.11b network interface cards. According to Eq. (4), we note that the power $P_{net}$ (W) consumed by a TCP connection is related to the corresponding average throughput $R$ (bit/s) by the following relationship (see also [6]):

$P_{net} = \Lambda^{net} \left( RTT \times R \right)^{\alpha} + P_{net}^{setup}.$   (13)

In Eq. (13), we have that: (i) $\alpha$ is a dimensionless positive exponent; (ii) $P_{net}^{setup}$ (W) is the setup power of the considered TCP connection; (iii) $RTT$ (s) is the (average) connection round-trip-time; and (iv) $\Lambda^{net}$ (W/bit) is the dynamic power consumed by the connection when the product (round-trip-time) × (throughput) has unit value. As pointed out in Table 7, the actual values of $\alpha$, $P_{net}^{setup}$, $\Lambda^{net}$ and $RTT$ depend on the wired/wireless nature of the adopted transmission technology.
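Eq. (13) can be instantiated with the wired (WDD) and wireless (WSS) parameter profiles of Table 7; the unit bookkeeping below (mW, Mb, s) follows the table's conventions and is an assumption on our part, as is the choice of evaluating each profile at its maximum throughput:

```python
# Eq. (13): power of a TCP NewReno connection as a function of its average
# throughput R, under the wired (WDD) and wireless (WSS) profiles of Table 7.

def tcp_power_mw(r_mbps: float, rtt_s: float, lam_mw_per_mb: float,
                 alpha: float, p_setup_mw: float) -> float:
    """P_net = Lambda * (RTT x R)^alpha + P_setup, in mW."""
    return lam_mw_per_mb * (rtt_s * r_mbps) ** alpha + p_setup_mw

# Wired intra-Fog connection at R = 300 (Mb/s):
p_wdd = tcp_power_mw(300.0, 0.6e-3, 5.0, 1.3, 2.0)

# Wireless connection at R = 8.5 (Mb/s):
p_wss = tcp_power_mw(8.5, 12e-3, 15.0, 2.4, 560.0)

print(p_wdd, p_wss)
```

As expected from Table 7, the wireless setup power (560 mW) dominates the wireless budget, while the wired connection stays in the few-mW range.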

Simulated application service and device mobility In the carried out simulations, time is slotted and $T_S$ (s) is the slot duration. As in [53], we consider a vehicular video streaming application, in which a number of video cameras move on board cars. The cameras are equipped with vTube-type interfaces [53], so that they may launch peer-to-peer video streaming sessions when they come in contact. After its launch, a session continues to run while the involved cars move away. The session duration $T_{SS}$ (resp., the intersession gap $T_{TG}$) is randomly distributed over the interval $50\,T_S$–$130\,T_S$ (resp., $90\,T_S$–$150\,T_S$).
Passing to consider the simulated mobility model, we point out that the maximum car speed is $v^{max}$ = 50 (km/h), while the considered average car speeds are $v$ = 15, 25 and 35 (km/h). A number $N_{CAR}$ = 160 of cars is simulated. These cars are evenly distributed over $N_{CLU}$ = 8 disk-shaped clusters of radius $R_d$ = 350 (m), which, in turn, are arranged along a ring. According to the so-called Markovian random walk with random placement [54], at the beginning of each slot period, each car randomly moves clockwise to the next cluster with probability $\beta$, or stays in the current cluster with probability $(1 - \beta)$. After selecting the cluster, the car picks a location over it at random. In the carried out simulations, we set the probability $\beta$ to 0.4, 0.5 and 0.6 at $v$ = 15, 25 and 35 (km/h), respectively.
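The per-slot mobility step described above can be sketched as follows; the square-root radius sampling (needed for a uniform point in a disk) and the fixed seed are implementation details of this sketch, not of [54]:

```python
# Sketch of the simulated mobility: a Markovian random walk with random
# placement over N_CLU = 8 ring-arranged clusters. Each slot, a car hops
# clockwise with probability beta, then picks a uniform location in the
# selected disk-shaped cluster of radius Rd = 350 (m).

import math
import random

N_CLU, RD = 8, 350.0

def step(cluster: int, beta: float, rng: random.Random):
    if rng.random() < beta:                   # clockwise hop to the next cluster
        cluster = (cluster + 1) % N_CLU
    r = RD * math.sqrt(rng.random())          # uniform point in a disk
    theta = 2 * math.pi * rng.random()
    return cluster, (r * math.cos(theta), r * math.sin(theta))

rng = random.Random(0)
cluster, pos = 0, (0.0, 0.0)
for _ in range(100):                          # 100 slots at beta = 0.5
    cluster, pos = step(cluster, 0.5, rng)
print(cluster, pos)
```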

Inter-Fog migration of containers and related energy consumptions In order to sim-


ulate the inter-Fog migration of containers over the wireless backbone of Fig. 8, the
(aforementioned) GAUChO simulation tool implements in SW the main primitives
of the Follow-Me-Cloud paradigm in [55]. In a nutshell, the main relevant features of
this paradigm are that [55]: (i) it develops a signaling protocol and the associated logic
for enabling the inter-Fog migration of containers in response to the mobility (e.g.,
cluster change) of the associated IoT devices; and (ii) it designs a suitable dynamic
bandwidth manager that guarantees live container migration (e.g., container migration without service interruption) at the minimum energy consumption. Hence, since the per-container migration energy $E_{mig}$ (J) equals the product (network power) × (migration time), from Eq. (13) we have that:


 
$E_{mig} = P_{net} \times T_{mig} \equiv P_{net} \times \dfrac{SZ_{CN}\,(1 + O_r)}{R}.$   (14)

In Eq. (14), the (dimensionless, nonnegative) coefficient $O_r$ accounts for the traffic overhead induced by a container migration. In this regard, we shortly point out that, according to [55], the actual value of $O_r$ depends on: (i) the average rate of the fading- and mobility-induced connection failures that force multiple re-transmissions of already transmitted data; and (ii) the rate of change of the data stored by the migrating container, which may require the re-transmission of already migrated data (see [55] and the references therein for additional details on this specific topic).
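Eq. (14) can be evaluated directly with the Table 7 settings ($SZ_{CN}$ = 280 Mb, $O_r$ = 0.25); the network power value of 0.56 W used in the example is only a placeholder, since the actual $P_{net}$ follows from Eq. (13):

```python
# Eq. (14): per-container migration energy over the wireless inter-Fog
# backbone. SZ_CN and Or follow Table 7; the network power value is an
# assumed placeholder, since its actual value is given by Eq. (13).

SZ_CN = 280.0    # container size (Mb)
OR = 0.25        # migration traffic-overhead coefficient

def migration_energy(p_net_w: float, r_mbps: float) -> float:
    """E_mig = P_net x T_mig, with T_mig = SZ_CN * (1 + Or) / R."""
    t_mig = SZ_CN * (1.0 + OR) / r_mbps      # migration time (s)
    return p_net_w * t_mig                    # energy (J)

# Example at the maximum wireless backbone rate R_WSS^max = 8.5 (Mb/s)
# and an assumed network power of 0.56 W:
print(migration_energy(p_net_w=0.56, r_mbps=8.5))   # ~23.06 J
```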

9.2 The simulated D2D benchmark platform

The D2D edge computing paradigm is considered as a benchmark, in order to carry


out performance comparisons. According to this paradigm [56], proximate physical
devices build up D2D links that typically rely on short-range wireless communica-
tion technologies (see the bottom part of Fig. 8). In the carried out simulations, D2D
links are assumed to be sustained by IEEE 802.11b network interface cards. Hence,
under the D2D paradigm, two devices (e.g., Car A and Car B in Fig. 8) may commu-
nicate when their distance is within their communication range and the experienced
signal-to-noise-plus-interference ratio is larger than a QoS-dictated threshold [56].
The main pro of the D2D edge computing paradigm is that, in principle, it may be
implemented in an infrastructure-free way by exploiting direct physical links. How-
ever, as also recently pointed out in [56], this pro is counterbalanced by a number
of cons that impair the resource usage of the communicating physical devices. First, due to fading and path loss, the per-device power consumption of current D2D-based mobile networks scales up in a power-law way for increasing values of the involved inter-device distances, with typical power exponents larger than three. Second,
D2D mobile connections are intermittent, and their average duration (resp., intercon-
nection interval) typically scales down (resp., up) for increasing device mobility. This
induces energy and time-wasting re-connection operations and data re-transmissions
by communicating devices. Third, under the D2D operating scenario, the involved
devices need to autonomously perform device synchronization and connection setup.
These operations are power and delay-consuming, especially when, due to the inter-
mittent D2D connectivity, they abort several times before completing. We anticipate
that the numerical results and performance comparisons of the following subsection
confirm, indeed, these considerations.
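The first of the above drawbacks can be illustrated numerically; the exponent value gamma = 3.5 and the reference power below are illustrative assumptions consistent with the "larger than three" range discussed above, not values taken from [56]:

```python
# Illustration of the D2D drawback discussed above: with a path-loss
# exponent gamma > 3, the transmit power needed to sustain a target SNR
# over a D2D link grows as d**gamma with the inter-device distance d.

GAMMA = 3.5          # assumed path-loss exponent (> 3, per the discussion above)
P_REF_MW = 1.0       # assumed power needed at the reference distance d0 = 10 m

def d2d_power_mw(d_m: float, d0_m: float = 10.0) -> float:
    return P_REF_MW * (d_m / d0_m) ** GAMMA

# Doubling the inter-device distance multiplies the power by 2**3.5 ~ 11.3
print(d2d_power_mw(20.0) / d2d_power_mw(10.0))
```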

9.3 Numerical results, performance comparisons and open challenges

Fig. 9 a Normalized trace of a Big Data-like traffic flow [57]. In the carried out simulations, actual peak traffic values are set to 75% of the maximum throughputs $R^{max}$ of the corresponding TCP/IP connections (see Table 7); b per-connection and per-slot average energy consumption of the Fog-of-IoT platform of Fig. 8 at v = 15, 25, 35 (km/h); c per-connection and per-slot average energy consumption of the Fog-of-IoT platform and the benchmark D2D one at MCN = 12; d per-connection average round-trip-times of the Fog-of-IoT platform and the benchmark D2D one at MCN = 12; e per-device and per-slot energy consumption under the simulated Fog-of-IoT and D2D platforms at MCN = 12

The numerical results in Fig. 9 report the per-connection average energy consumptions and round-trip-times experienced under the simulated mobile Fog platform of Fig. 8, in order to: (i) sustain all the underlying wireless/wired links; (ii) carry out the required computing activity; and (iii) perform inter-Fog container migrations. For comparison purposes, we have also simulated the (previously described) benchmark D2D platform. In order to carry out fair performance compar-
isons, the traffic flows transported by all simulated TCP/IP connections are randomly
scaled and cyclically delayed versions of the normalized traffic trace of Fig. 9a that
reports the (normalized) traffic flow generated by a Big Data-like streaming applica-
tion [57]. Furthermore, in all the carried out simulations, the resource management is
carried out according to the proposed heuristic of Algorithm 1.
An examination of the Fog-of-IoT performance curves of Fig. 9b leads to two main insights and points out two first possible open challenges. First, since the per-Fog number of turned ON physical servers decreases for increasing values of the server capacity $M_{CN}$, all the Fog-of-IoT curves of Fig. 9b decrease at fixed $v$. Hence, since current virtualization technology for commodity servers limits $M_{CN}$ to about 14–15 [50,58], Fig. 9b suggests that higher virtualization levels are welcome in Fog-assisted virtualized infrastructures, and this is, indeed, a first open challenge. Second, since, in the simulated scenario, the average number of performed container migrations increases by about 3.5 times when passing from $v$ = 15 (km/h) to $v$ = 35 (km/h), the Fog-of-IoT curves of Fig. 9b scale up by about 22% at fixed $M_{CN}$. Hence, a second challenge concerns the self-adaptive energy optimization of state-of-the-art protocols for the live migration of containers [55], especially under high-mobility outdoor scenarios.
We pass now to compare the performance of the tested Fog-of-IoT and D2D plat-
forms. For this purpose, Fig. 9c, d reports the average energy and delay performances
of the Fog-of-IoT and D2D platforms on a per-connection basis, while Fig. 9e com-
pares the average energies consumed by each device under the considered platforms.
In this regard, we note that an examination of the bar plots of Fig. 9c leads to two
main insights. First, the proposed Fog-of-IoT platform is more energy efficient than
the corresponding D2D one, with the corresponding average per-connection energy
gaps that, in our tests, are around 23, 28 and 34% at v = 15, 25 and 35 (km/h),
respectively. Second, differently from the Fog-of-IoT platform, the mobility-induced
increments in the energy consumption of the D2D connections are due to: (i) the
increment of the average spatial range of the (fading and path loss affected) D2D
connections; and (ii) the increment of the average number of TCP time-out events and
packet re-transmissions. This is, indeed, confirmed by the bar plots of Fig. 9d. Specif-
ically, since live container migrations exhibit, by design, nearly negligible service
interruption times [55], the corresponding average round-trip-times of the Fog-of-IoT
connections remain almost constant around 24–25 (ms). On the contrary, due to the
increasing propagation delays and TCP-induced re-transmission delays, the average
round-trip times of the D2D connections pass from: 28–29 (ms) at v = 15 (km/h)
up to 112–113 (ms) at v = 35 (km/h). Finally, under the Fog-of-IoT platform, the
computing and networking powers PCN and Pnet in Eqs. (12) and (13) consumed
for sustaining data processing and data communication are fully provided by the Fog
nodes that host the containers (see Fig. 8). As a consequence, we expect that the aver-
age energy consumed by each device under the Fog-of-IoT platform is lower than the
corresponding one experienced under the D2D platform. The bar plots of Fig. 9e con-
firm, indeed, this expectation. They point out that, in our tests, the per-device average
energy reductions provided by the Fog-of-IoT platform with respect to the D2D one
are noticeable and around 76, 77.5 and 78% at v = 15, 25 and 35 (km/h), respectively.


10 Conclusions and hints for future research

In this paper, we presented a PABP-based heuristic for the combined dynamic management of the: (i) per-flow transport throughputs; and (ii) per-container processing frequencies, in TCP/IP-based Fog data centers that exploit the emerging container technology for the real-time virtualization of the networking-plus-computing resources available at the Middleware layer. The overall goal is the reduction of the total per-slot consumed energy. Remarkable features of the proposed PABP-based resource management heuristic are that: (i) it is capable of enforcing hard QoS bounds on the allowed per-task overall computing-plus-networking delay, without requiring any a priori information and/or forecasting of the statistics of the input workload; and (ii) its implementation is distributed and scalable. The carried out performance tests and comparisons corroborate the actual effectiveness of the proposed resource manager under both static and mobile Fog-of-IoT application scenarios.
Regarding the possible extensions of the presented results, we point out that emerging Big Data stream applications may present such rapid time fluctuations of the input workload that the assumption, considered here, of hard bounds on the per-task computing delays could be replaced by a soft constraint on the maximum fraction of the input tasks that experience delays larger than an assigned QoS-dictated time threshold. The design of a resource manager that is capable of operating without any a priori assumption about the statistics of the input workload is a challenging goal currently investigated by the authors. Moreover, as detailed in Sect. 9, the research proposed in this paper has been developed under the umbrella of the GAUChO project, which aims at designing a novel distributed and heterogeneous architecture that is capable of functionally integrating and jointly optimizing FC and network functions onto a same platform. The final work package of this research project is devoted to a test bed implementation, with the goal of testing and validating the proposed ideas, including those described in this paper. In addition, several hardware implementations will be compared in the test bed, in order to identify the best one in terms of performance and energy consumption.

Acknowledgements This work has been developed under the umbrella of the PRIN2015 project with Grant No. 2015YPXH4W_004: “A green adaptive FC and networking architecture (GAUChO),” funded by the Italian MIUR. It has also been partially supported by the projects “Vehicular Fog energy-efficient QoS mining and dissemination of multimedia Big Data streams (V-Fog and V-Fog2),” funded by Sapienza University of Rome, Italy. The authors would like to thank all the anonymous reviewers for their precious and helpful comments and suggestions.
