
DYNAMIC POWER MANAGEMENT FOR NON-STATIONARY SERVICE REQUESTS

ABSTRACT:
Dynamic Power Management (DPM) is a design methodology that aims at reducing the
power consumption of electronic systems by selectively shutting down idle system
resources. An online adaptive DPM scheme for systems that can be modeled as
finite-state Markov chains is presented. Online adaptation is required to deal with the
non-stationary workloads that are very common in real-life systems. A power manager
with workload-learning techniques based on a sliding window (single window) is
introduced. The adaptive policies are obtained from a precomputed look-up table of
optimum stationary policies. The percentage idle time of the hard disk is taken as input;
it is a non-stationary service request because it depends on the workload, which changes
with time. The effectiveness of the approach is demonstrated by a DPM implementation
in Visual C++ for Windows 2000/XP on a laptop computer with a power-manageable
hard disk, which compares very favorably with existing DPM schemes. The energy
saving of our policy over the existing time-out policy is about 41%.

1. INTRODUCTION
Reducing power consumption is a challenge to system designers. Portable
systems, such as laptop computers and personal digital assistants (PDAs), draw power
from batteries; so reducing power consumption extends their operating times. For desktop
computers or servers, high power consumption raises temperature and deteriorates
performance and reliability. Soaring energy prices early last year and rising concern
about the environmental impact of electronics systems further highlight the importance of
low power consumption.
Power reduction techniques can be classified as static and dynamic [1]. Static techniques,
such as synthesis and compilation for low power, are applied at design time. In contrast,
dynamic techniques use runtime behavior to reduce power when systems are serving light
workloads or are idle. These techniques are known as dynamic power management
(DPM). DPM can be achieved in different ways; for example, dynamic voltage scaling
(DVS) changes the supply voltage at runtime as a method of power management. Here, we
use DPM specifically to shut down unused devices such as the hard disk.
System-level power management saves power in subsystems (also called
devices). For example, the power breakdown for a well-known laptop computer shows
that the power consumed by VLSI digital components is only 21 percent, while the
display, hard disk drive, and wireless LAN card consume 36 percent, 18 percent, and 18
percent, respectively.
Fig. 1 shows 1) the system power consumption level over time without DPM, 2)
the case when the ideal DPM is applied, and 3) the case when nonideal DPM is applied.
Nonideal DPM wastes the idle interval at the second idle period and pays a
performance penalty at the third idle period. These inefficiencies come from the
inaccurate prediction of the duration of the idle period or, equivalently, the arrival time of
the next request for an idle component.

Fig. 1. System power consumption level variation with DPM.


(a) Without DPM. (b) With ideal DPM. (c) With nonideal DPM.

Thus, the goal in DPM is to minimize the performance penalty and wasted idle
intervals while, at the same time, minimizing power and energy consumption. This
objective can be achieved by an ideal PM with complete knowledge of present, past, and
future workloads [2].
In general, we can model the unavoidable uncertainty of future requests by
describing the workload as a stochastic process. Even if the realization of a stochastic
process is not known in advance, its properties can be studied and characterized. This
assumption is at the basis of many stochastic optimal control approaches that are applied
to real-life systems.
Unfortunately, in many instances of real-life DPM problems, it is hard, if not
impossible, to precharacterize the workload of a power-manageable system. Consider, for
instance, a disk drive in a personal computer (PC). The workload for the drive strongly
depends on the application mix that is run on the PC. This, in turn, depends on the user,
on the location, and similar “environmental conditions” which are not known at design or
fabrication time. In these cases, the stochastic process describing the workload is initially
unknown. Even worse, the workload is subject to large variations over time. For instance,
hard disk workloads for a workstation drastically change with the time of the day or the
day of the week. This characteristic is called nonstationarity. It is generally hard to
achieve robust and high-quality DPM without considering this effect.
Several adaptive control approaches are based on an estimation procedure that
“learns” online the unknown or time-varying parameters of the system, coupled with a
flexible control scheme that selects the optimal control actions based on the estimated
parameters. This approach is also known as the principle of estimation and control (PEC).
The contributions of this work can be summarized as follows: We propose
parameter-learning schemes for the workload source (e.g., the user, also called the
service requestor) that capture the nonstationarity of its behavior. We present a technique
based on a sliding window to capture the time-varying parameters of the stochastic model
of the workload source. Next, we show how policies for dynamic power management
under non-stationary service request models can be obtained from policies computed
under the assumption that user requests are stationary. Finally, we implement the
proposed algorithm on a desktop computer with a hard disk and show its feasibility in a
real system environment.
The paper is organized as follows: In Section 2, we introduce the general system
model of DPM for stationary service requests. In Section 3, we describe the overall
dynamic power management scheme for non-stationary service requests, based on a
sliding window and a precomputed look-up table for making decisions. Section 4 is
dedicated to the issues raised by the implementation of adaptive DPM schemes on
general-purpose computers. The experimental results, obtained from system
measurements, are shown in Section 5. Finally, we conclude our work and propose
future work in Section 6.

2. DPM FOR STATIONARY SERVICE REQUESTS


In this section, we briefly review the system model. The overall system model for
DPM is shown in Fig. 2.

Fig. 2. Overall system model for DPM.


An electronic system is modeled as a unit providing a service, called service
provider (SP), while receiving requests from another entity, called service requestor (SR).
A queue (SQ) buffers incoming unserviced requests. The service provider can be in one
of several states (e.g., active, sleep, idle, etc.). Each state is characterized by the ability
(or the inability) to provide a service and by a power consumption level. Transitions
among states may have a performance penalty (e.g., latency in reactivating a unit) and a
power penalty (e.g., power loss in spinning up a hard disk). The power manager (PM) is a
control unit that controls the transitions among states. We assume that the power
consumption of the PM is negligible with respect to the overall power dissipation. At
equally spaced instants in time, the power manager evaluates the overall state of the
system (provider, queue, and requestor) and decides to issue a command to stimulate a
state transition. We define the following notation to represent the states of the units and
PM commands:
• SP : s_p ∈ {0, 1, ..., S_p - 1},
• SR : s_r ∈ {0, 1, ..., S_r - 1},
• SQ : s_q ∈ {0, 1, ..., S_q - 1},
• A : a ∈ {1, 2, ..., N},

where A is the set of commands issued by the PM to control the power state of the SP.
We model the system components as discrete-time Markov chains; in particular, we use a
controlled Markov chain model for the service provider, so that its transition probabilities
depend on the command issued by the power manager. In [3], the SR as well as the SP is
modeled as a stationary process. A generic requestor can have S_r states. We will discuss
the case of S_r = 2, as shown in Fig. 3.
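As an illustration of what "controlled" means here, the SP can be described by one transition matrix per PM command, so that the matrix applied at each time slice is selected by the issued command. The sketch below is our own illustration with placeholder probabilities, not the model of [3]:

```cpp
#include <array>

// Controlled Markov chain for the service provider: one stochastic matrix per command.
// Illustrative dimensions: 2 SP power states (0 = sleep, 1 = active) and
// 2 commands (0 = Wakeup, 1 = Sleep); all probabilities are placeholders.
constexpr int NUM_SP_STATES = 2;
constexpr int NUM_COMMANDS  = 2;

using Row    = std::array<double, NUM_SP_STATES>;
using Matrix = std::array<Row, NUM_SP_STATES>;

// P_SP[a][i][j] = Prob(SP moves from state i to state j | command a is issued).
const std::array<Matrix, NUM_COMMANDS> P_SP = {{
    // command 0 = Wakeup: the SP tends to reach or stay in the active state
    Matrix{{ Row{0.1, 0.9}, Row{0.0, 1.0} }},
    // command 1 = Sleep: the SP tends to reach or stay in the sleep state
    Matrix{{ Row{1.0, 0.0}, Row{0.8, 0.2} }},
}};
```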

Fig. 3. Markov chain for stationary SR.


SR stays in state 0 when no request is issued in the given time slice; otherwise,
SR is in state 1. The corresponding transition matrix is denoted by P_SR and is shown in
Fig. 4. We call the diagonal elements of P_SR the user request probabilities and denote
them by R_i, where i = 0, 1. The probabilities R_i = Prob(s_r(t+1) = i | s_r(t) = i),
i = 0, 1, are time invariant for a stationary workload.

Fig. 4. State transition matrix.


P_ij is the probability of a transition from state i to state j, with i, j = 0 or 1 when the
number of service requestor states is S_r = 2. If i and j are equal, then P_ij equals R_i;
that is, R_i is the probability of a transition from a state back to itself.
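Since each row of P_SR must sum to one, the two-state transition matrix implied by the prose above (a reconstruction of the content of Fig. 4, not a copy of it) can be written as

\[
P_{SR} =
\begin{pmatrix}
P_{00} & P_{01} \\
P_{10} & P_{11}
\end{pmatrix}
=
\begin{pmatrix}
R_0 & 1 - R_0 \\
1 - R_1 & R_1
\end{pmatrix}.
\]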
A control policy is a sequence of decisions. At each time slice, the PM observes
the current system state and issues a command according to the probability assigned to
each command, for that system state, in the decision table.
While this approach provides a way to obtain an exact solution and to control the
trade-off between performance and power, the following two assumptions limit its
practical application:

• The user request probabilities (R_i) for the given workload are known through
offline analysis.
• The R_i for the given workload are constant over time.

3. DPM FOR NON-STATIONARY SERVICE REQUESTS

In many practical applications, the assumption of a stationary SR does not hold [2].
The workload is subject to changes over time, as suggested by observation of typical
computer systems. In this section, we describe DPM policies tailored to non-stationary
SR models.

Fig. 4. Markov chain for non-stationary Service Request.

Our first step is to model a non-stationary SR as shown in Fig. 4. A non-stationary
workload, denoted by U_l, is modeled by a series of stationary workloads, which have
different user request probabilities. Each stationary workload is denoted by u_s,
s = 0, 1, ..., N_u - 1, where N_u is the total number of stationary workloads forming the
non-stationary workload U_l. Thus, a non-stationary workload can be represented as
U_{N_u} = (u_0, u_1, ..., u_{N_u - 1}). In this model, R_i becomes a function of the given
sequence and can be distinguished from the R_i of a stationary SR, as shown in Fig. 5.

Fig. 5. R_i(u_s) of non-stationary SR vs. stationary SR.

P_SR becomes a function of the sequence and is denoted by P_SR(u_s). We write the
actual values as functions of u_s, because they are constant over time for a given u_s,
while the estimated values are written as functions of time. For example, R_i(u_s) is the
actual user request probability of a sequence u_s, and R_i(t) is the estimated user request
probability at time t.
Our approach follows classic techniques of adaptive control theory based on the
principle of estimation and control.
For the estimation step, we introduce a single-window approach to estimate the
user request probabilities. For the control step, we propose a look-up table method to
compute the control law to be applied at each discrete time.
3.1. Single Window Approach
A sliding window stores the recent user-request history to predict future user
requests. A sliding window, denoted by W, consists of l_w slots; each slot W(i),
i = 0, 1, ..., l_w - 1, stores one previous user request, i.e., a value s_r = 0, 1, ..., S_r - 1.
The basic window operation is to shift by one slot at every time slice.
An example of a window operation for two-state user requests is shown in Fig.
6. At each time point, W(i+1) ← W(i), i = 0, 1, ..., l_w - 2, and W(0) stores the new
user request from the SR.

Fig. 6. Single window operation for two-state user requests.


At a given time point t, P_ij in P_SR can be estimated simply as the ratio between the
total number of transitions from state i to state j and the total number of occurrences of
state i observed within the window.

It may be impossible to define P_ij when the sliding window contains no occurrence of
state i at a certain time point; in this case, we define P_ij as 0 when i = j. Let A_i denote
the total number of occurrences of state i observed in the sliding window,

A_i = Σ_{k=0}^{l_w - 1} (W(k) = i),

and recall that W(k+1) holds an older request than W(k). Then P_ij at a given time t can
be expressed as

P_ij(t) = [ Σ_{k=0}^{l_w - 2} (W(k+1) = i) ∧ (W(k) = j) ] / A_i,   if A_i ≠ 0,
P_ij(t) = 0,                                                        if A_i = 0 and i = j,

where "=" is the equivalence operation with a Boolean output (i.e., it yields "1" when the
two arguments are the same, otherwise it returns "0") and "∧" is the conjunction
operation.
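A minimal C++ sketch of this single-window estimator, for the two-state case (S_r = 2), is given below. The class name, the use of std::deque, and the handling of partially filled windows are our own choices, not the original implementation; occurrences of state i are counted only over slots that have a successor inside the window, so that the ratio is a well-defined conditional probability.

```cpp
#include <cstddef>
#include <deque>

// Single sliding-window estimator for a two-state SR (0 = idle, 1 = active).
// W(0) holds the newest observation; older observations sit toward W(l_w - 1).
class SlidingWindowEstimator {
public:
    explicit SlidingWindowEstimator(std::size_t lw) : lw_(lw) {}

    // Shift the window by one slot and store the new user request in W(0).
    void push(int state) {
        window_.push_front(state);
        if (window_.size() > lw_) window_.pop_back();
    }

    // Estimate P_ij(t): transitions from state i to state j divided by the
    // number of occurrences of state i, as described in Section 3.1.
    double estimateP(int i, int j) const {
        int transitions = 0;
        int occurrences = 0;
        // W(k+1) is the older slot (source state), W(k) the newer one (destination).
        for (std::size_t k = 0; k + 1 < window_.size(); ++k) {
            if (window_[k + 1] == i) {
                ++occurrences;
                if (window_[k] == j) ++transitions;
            }
        }
        // Degenerate case: no occurrence of state i in the window. The text
        // defines P_ii = 0 here; we also return 0 for i != j.
        if (occurrences == 0) return 0.0;
        return static_cast<double>(transitions) / occurrences;
    }

private:
    std::size_t lw_;         // window length l_w
    std::deque<int> window_; // W(0) ... W(l_w - 1)
};
```

With this estimator, the values R_0(t) and R_1(t) used by the policy look-up of Section 3.2 correspond to estimateP(0, 0) and estimateP(1, 1).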

3.2. Look-Up Table Construction

The transition probability matrix of a stationary SR can be characterized by the user
request probabilities R_i ∈ [0, 1], i = 0, 1. If each dimension R_i is sampled with a finite
number of samples, each sampling point in dimension i is denoted by R_ij,
j = 0, 1, ..., NS_i - 1, where NS_i is the number of sampling points for dimension i. Based
on these sampling points, a look-up table, called the policy table, is constructed as shown
in Fig. 7.

Fig. 7. An example of a 2D policy table (when NS0 = NS1 = 5)


Each cell of the policy table corresponds to an SR whose user request
probabilities are (R_0j, R_1k), with j = 0, 1, ..., NS_0 - 1 and k = 0, 1, ..., NS_1 - 1. Each
cell of the policy table is itself a two-dimensional table, which we call a decision table. A
decision table is a matrix with as many rows as there are system states and as many
columns as there are commands issued by the PM to the SP. Each cell of the policy table
can be indexed by a pair (j, k), and its corresponding request probability pair is
(R_0j, R_1k). For each pair (j, k), a policy optimization is performed to obtain the
corresponding decision table, and the resulting decision table is stored in the cell of the
policy table with that index. The overall table is constructed once and for all, and its size
is NS_0 × NS_1 times the size of the table used in [3].
We make a decision from the policy table by using the estimated user request
probabilities of the transition from state 0 to state 0, i.e., R_0(t) = P_00(t), and of the
transition from state 1 to state 1, i.e., R_1(t) = P_11(t). These two values select a decision
table from the policy table. The current state of the system then selects the row of the
decision table, which gives the probability of each command. For example, if the user
request probability R_0 is greater than 0.5, the sleep command is selected so that the
system reaches the low-power state.
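The following C++ sketch shows how the precomputed policy table could be consulted at each time slice. The type names, the uniform quantization of R_0(t) and R_1(t) into table indices, and the thresholding of the sleep probability are our illustrations, drawn for the 20 x 20 / 2 x 2 dimensions used later in Section 4; the actual table entries would be filled offline by the policy optimization of [3].

```cpp
#include <algorithm>
#include <array>
#include <cstddef>

enum class Command { Wakeup, Sleep };
enum class State   { Idle = 0, Active = 1 };

// Policy table: NS0 x NS1 cells, each cell a decision table indexed by system
// state. Here each decision-table entry stores the probability of issuing the
// Sleep command in that state (the Wakeup probability is its complement).
constexpr std::size_t NS0 = 20;
constexpr std::size_t NS1 = 20;

struct DecisionTable {
    std::array<double, 2> sleepProbability; // indexed by system state (Idle, Active)
};

using PolicyTable = std::array<std::array<DecisionTable, NS1>, NS0>;

// Map an estimated probability in [0, 1] to a sampling index 0 .. ns - 1.
std::size_t quantize(double r, std::size_t ns) {
    std::size_t j = static_cast<std::size_t>(r * static_cast<double>(ns));
    return std::min(j, ns - 1);
}

// One PM evaluation: pick the decision table from (R0(t), R1(t)), then choose
// a command for the current system state.
Command decide(const PolicyTable& table, double r0, double r1, State current) {
    const DecisionTable& dt = table[quantize(r0, NS0)][quantize(r1, NS1)];
    double pSleep = dt.sleepProbability[static_cast<std::size_t>(current)];
    // Deterministic variant of the randomized policy: issue Sleep when its
    // probability exceeds 0.5 (cf. the R_0 > 0.5 rule described above).
    return (pSleep > 0.5) ? Command::Sleep : Command::Wakeup;
}
```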

4. POLICY IMPLEMENTATION

We use Microsoft Windows 2000/XP in our implementation. The service
requestor has two states (Active and Idle); the active state is represented as 1 and the idle
state as 0. Two commands (Wakeup and Sleep) are issued to the service provider. The
input of our implementation is the percentage idle time of the hard disk, which is a
non-stationary service request because it changes dynamically with time according to the
user requests.
The user request is monitored at discrete time intervals; the average service time,
i.e., the length of one time slice, is one second. According to the estimated user request
probabilities, commands are selected from the policy table, which is precomputed over
the probabilities of the past (predicted) values of the percentage idle time of the hard
disk. If the percentage idle time of the hard disk is 100%, the time slice is considered idle
(state 0); otherwise it is taken as active (state 1). The single sliding window size is 40
slots and the number of sampling points per dimension is 20, so the dimension of the
policy table is 20 × 20 and the dimension of each decision table is 2 × 2, because the
number of system states is 2 (Active and Idle) and the number of commands issued by
the power manager is 2 (Wakeup and Sleep).
This work is implemented in Visual C++ [4]. The system resources are monitored
through the PDH interfaces, which are available in the Pdh.h and Pdhmsg.h header files
[5]. Using PdhAddCounter, the percentage idle time of the hard disk is captured.
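The sketch below shows how such a counter query might look with the PDH interfaces; the counter path, the _Total instance, and the one-second polling loop are our assumptions about the setup rather than the original source code.

```cpp
#include <windows.h>
#include <pdh.h>
#include <pdhmsg.h>
#include <cstdio>

#pragma comment(lib, "pdh.lib")

int main() {
    PDH_HQUERY   query   = nullptr;
    PDH_HCOUNTER counter = nullptr;

    // Open a query and add the "% Idle Time" counter of the physical disk
    // (assumed counter path; the original code may monitor a specific drive).
    if (PdhOpenQuery(nullptr, 0, &query) != ERROR_SUCCESS) return 1;
    if (PdhAddCounter(query, TEXT("\\PhysicalDisk(_Total)\\% Idle Time"),
                      0, &counter) != ERROR_SUCCESS) return 1;

    PdhCollectQueryData(query);            // baseline sample
    for (int t = 0; t < 60; ++t) {         // one time slice per second
        Sleep(1000);
        PdhCollectQueryData(query);
        PDH_FMT_COUNTERVALUE value;
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                        nullptr, &value) == ERROR_SUCCESS) {
            // 100% idle time maps to SR state 0 (idle), anything else to state 1.
            int srState = (value.doubleValue >= 100.0) ? 0 : 1;
            std::printf("idle time = %6.2f %%  SR state = %d\n",
                        value.doubleValue, srState);
        }
    }
    PdhCloseQuery(query);
    return 0;
}
```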
The Power Management API functions are used to put the hard disk into its low-power
state. The hard disk supports only a single sleep state [6].
The implementation was tested using the power measurement setup [7] shown in
Fig. 8.

Fig. 8. Power measurement setup.
In this setup, the +5V and -5V power supplies of the hard disk drive are connected to an
external voltage source. An ammeter (range 0 mA to 1000 mA) is connected in series
with the +5V supply from the external voltage source to the hard disk drive, and a
voltmeter (range 0 V to 10 V) is connected in parallel with the +5V and -5V supplies.
The average power consumed by the drive over the sampling interval is calculated
as P_Drive = V_Drive × I_Drive, where V_Drive is the voltage of the power supply to the
disk drive and I_Drive is the current drawn by the hard disk drive from the external
voltage source. The total energy consumed by the drive over the sampling interval is the
product of P_Drive and the time interval:

E.C. = P_Drive × T,

where E.C. is the energy consumption of the hard disk drive and T is the length of the
sampling interval.
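As a purely illustrative check of the formula (the numbers below are not measurements from the setup above): with V_Drive = 5 V and I_Drive = 0.3 A held over a T = 120 s interval, P_Drive = 5 × 0.3 = 1.5 W and E.C. = 1.5 × 120 = 180 J.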


5. EXPERIMENTAL RESULTS
The experiments were performed on an IBM desktop computer. The service
provider for our experiments was a commercial power-manageable hard disk drive by IBM.
A snapshot of the output of our software implementation is shown in Fig. 9. The
output is a graph of the percentage idle time of the hard disk versus time, with time on
the X-axis and percentage idle time on the Y-axis. From these values, the user request
probabilities are estimated and, according to them, commands are selected from the
policy table to reach the low-power states.

Fig. 9. Output of our implementation.


Here, we compare our policy with the time-out policy that is already available in
the Windows 2000/XP operating system (OS) as the default power management for
laptop and desktop computers.

Policy        E_busy (1)   E_idle (2)   E_active (1)+(2)   E_sleep (2 min)   Idle Period
              [Joules]     [Joules]     [Joules]           [Joules]          [min]
Our Policy       420          300            720                180               2
Time-out         630          594           1224                180               3

Table 1. Energy consumption comparison of our policy with the existing DPM method.

6. CONCLUSION
Here, we conclude that the wasted idle time of the hard disk was minimized and the
power and energy consumption of the hard disk was reduced; the performance penalty
was also minimized. The energy saving compared with the time-out policy (3 min) is
41.17%, i.e., (1224 - 720) / 1224 from Table 1.
Our future work consists of concurrent power management for multiple devices
such as the processor, hard disk, and network interface card.

BIBLIOGRAPHY
[1] Y. -H. Lu and G. De Micheli, “Comparing System-Level Power Management
Policies,” IEEE Design and Test, pp. 10-19, Mar. 2001.
[2] E.-Y. Chung, L. Benini, A. Bogliolo, Y.-H. Lu, and G. De Micheli, "Dynamic Power
Management for Non-Stationary Service Requests," IEEE Trans. Computers, vol. 51,
no. 11, Nov. 2002.
[3] L. Benini, A. Bogliolo, G. Paleologo, and G. De Micheli, "Policy Optimization for
Dynamic Power Management," IEEE Trans. Computer-Aided Design of Integrated
Circuits and Systems, vol. 18, no. 6, pp. 813-833, June 1999.
[4] D. J. Kruglinski, S. Wingo, and G. Shepherd, Programming Microsoft Visual C++,
fifth ed., Microsoft Press.

[5] Microsoft Developer Network (MSDN), http://www.msdn.com.
[6] "ACPI Component Architecture Programmer Reference: Core Subsystem, Debugger,
and Utilities," Revision 1.16, Intel Corporation.
[7] J. Zedlewski, S. Sobti, N. Garg, F. Zheng, A. Krishnamurthy, and R. Wang,
"Modeling Hard-Disk Power Consumption," Proc. Second USENIX Conf. File and
Storage Technologies (FAST), 2003.
