
A Modified Least-Laxity-First Scheduling Algorithm for Real-Time Tasks ✱

Sung-Heun Oh Seung-Min Yang


School of Computing, Soongsil University
1-1 Sangdo-dong, Dongjak-ku, Seoul 156-743, KOREA
honestly@realtime.soongsil.ac.kr yang@computing.soongsil.ac.kr

Abstract

The Least-Laxity-First (LLF) scheduling algorithm assigns the highest priority to the task with the least laxity and has been proved to be optimal for a uniprocessor system. The algorithm, however, is impractical to implement because laxity ties result in frequent context switches among the tasks. The Modified Least-Laxity-First (MLLF) scheduling algorithm proposed in this paper solves this problem of the LLF scheduling algorithm by reducing the number of context switches significantly. By reducing the system overhead due to unnecessary context switches, the MLLF scheduling algorithm avoids the degradation of system performance and conserves more system resources for unanticipated aperiodic tasks.
In this paper, we propose the MLLF scheduling algorithm and prove its optimality. We show the performance enhancement of the proposed MLLF scheduling algorithm by using simulation results.

1. Introduction

The laxity of a real-time task T_i at time t, L_i(t), is defined as follows:

    L_i(t) = D_i(t) - E_i(t)

where D_i(t) is the deadline by which the task must be done and E_i(t) is the amount of computation remaining to be completed. In other words, the laxity is the slack remaining before the deadline if the task were executed immediately and without preemption. A task with zero laxity must be scheduled right away and executed with no preemption to meet its deadline. A task with negative laxity will miss its deadline. Therefore, the laxity of a task is a measure of its urgency in deadline-driven scheduling[1].

The preemptive Least-Laxity-First (LLF) scheduling algorithm always executes the task whose laxity is the least[1]. This algorithm has been proved to be optimal(1) for a uniprocessor system[2]. With the LLF scheduling algorithm, however, if two or more tasks have the same laxity, a laxity tie occurs. Once a laxity tie occurs, a context switch takes place at every scheduling point until the tie breaks. The laxity tie in the LLF scheduling algorithm therefore results in poor system performance due to frequent context switches[3]. Thus the LLF scheduling algorithm, though theoretically interesting, is not practical to implement.

In this paper, we propose a Modified Least-Laxity-First (MLLF) scheduling algorithm to solve the frequent-context-switch problem of the LLF scheduling algorithm. We prove the optimality of the MLLF scheduling algorithm and show its performance by using simulation results.

The MLLF scheduling algorithm defers the context switch until necessary even if a laxity tie occurs. That is, the MLLF scheduling algorithm allows laxity inversion, where a task with the least laxity may not be scheduled immediately. The MLLF scheduling algorithm is optimal in the sense that if a set of tasks can be scheduled by the LLF scheduling algorithm, those tasks are also schedulable(2) by the MLLF scheduling algorithm. The complexity of the MLLF scheduling algorithm, however, is higher than that of the LLF scheduling algorithm. In the simulation experiments, we consider both the number of context switches and the cost of the scheduling algorithm. We analyze the global performance of the two algorithms and show that the MLLF scheduling algorithm performs better than the LLF scheduling algorithm.

The rest of this paper is organized as follows. Section 2 presents the Modified Least-Laxity-First scheduling algorithm. Section 3 describes our simulation experiments, and Section 4 concludes.

Footnotes:
(✱) This work is supported in part by KOSEF (Korea Science & Engineering Foundation) under contract #96-0101-07-01-3 (Integrated Object-Oriented Development Environment for Distributed Real-Time Systems).
(1) An algorithm is said to be optimal if it may fail to meet a deadline only if no other algorithm of the same class can meet it.
(2) A schedule is said to be feasible if all tasks can be completed according to a set of specified constraints. A set of tasks is said to be schedulable if there exists at least one algorithm that can produce a feasible schedule[4].
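
Before turning to the algorithm, a minimal C sketch of the laxity computation from Section 1; the numeric values are our own illustration, not taken from the paper:

#include <stdio.h>

/* Laxity of a task at time t: L(t) = D(t) - E(t), where D(t) is the time
   remaining until the deadline and E(t) is the remaining execution time. */
static int laxity(int deadline_remaining, int exec_remaining)
{
    return deadline_remaining - exec_remaining;
}

int main(void)
{
    /* Example values (ours): 8 time units to the deadline, 5 units of work left. */
    int L = laxity(8, 5);

    if (L < 0)
        printf("laxity = %d: the task will miss its deadline\n", L);
    else if (L == 0)
        printf("laxity = %d: the task must run immediately without preemption\n", L);
    else
        printf("laxity = %d: the task can still be delayed for %d time units\n", L, L);
    return 0;
}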
2. The Modified Least-Laxity-First (MLLF) Scheduling Algorithm

As long as no laxity tie exists, the MLLF scheduling algorithm proposed in this paper produces the same schedule as the LLF scheduling algorithm. If a laxity tie occurs, the MLLF scheduling algorithm allows the running task to continue with no preemption as long as the deadlines of the other tasks are not missed. (If there is a laxity tie and no running task, the task with the earliest deadline is selected.) That is, the MLLF scheduling algorithm allows laxity inversion, where a task with the least laxity may not be scheduled. The laxity inversion duration is defined as follows:

Definition 1. The Laxity Inversion Duration at time t is the duration for which the currently running task can continue running with no loss in schedulability, even if there exists a task (or tasks) whose laxity is smaller than that of the currently running task.

Lemma 1. In LLF scheduling, consider T_i and T_j satisfying L_i(t) ≤ L_j(t). After T_i has been executed for L_j(t) - L_i(t), the laxities of T_i and T_j will become the same. If L_j(t) - L_i(t) ≥ E_i(t), T_i will be completed before the laxities of T_i and T_j become the same. And if L_j(t) - L_i(t) < E_i(t), the laxities of T_i and T_j will tie before T_i is completed.

Proof:
Let t' (t ≤ t') be the time at which the laxities of T_i and T_j become the same, and let ω_other be the time spent executing tasks other than T_i and T_j over [t, t']. Since T_i always has higher priority than T_j over [t, t'], T_j will never be executed in the interval [t, t']. Let ω_i be the execution time of T_i in the interval [t, t']. At time t', T_i and T_j satisfy the following:

    L_i(t') = L_j(t')
    L_i(t) - ω_other = L_j(t) - ω_other - ω_i

Notice that the laxities of T_i and T_j are the same after T_i has been executed for ω_i = L_j(t) - L_i(t). If ω_i ≥ E_i(t), then L_i(t) - ω_other ≤ L_j(t) - ω_other - E_i(t) is satisfied at time t', so that T_i will be completed before or at time t'. And if ω_i < E_i(t), then E_i(t') = E_i(t) - ω_i > 0 holds at time t', so that the laxities of T_i and T_j will become the same before T_i is completed. ■

From Lemma 1, the following cases need to be considered when scheduling T_i and T_j with LLF:

Case 1) If L_j(t) - L_i(t) ≥ E_i(t), T_i will be completed before the laxities of T_i and T_j become the same. This means D_j(t) > D_i(t).
Case 2) If L_j(t) - L_i(t) < E_i(t), the laxities of T_i and T_j will become the same before T_i is completed. This case can be divided into two subcases:
  Case 2.1) If E_i(t) - (L_j(t) - L_i(t)) > E_j(t) (or, equivalently, if D_i(t) > D_j(t)), T_j is completed before the completion of T_i. T_i will be executed for L_j(t) - L_i(t) + E_j(t) (or D_j(t) - L_i(t)) before T_j is completed.
  Case 2.2) If E_i(t) - (L_j(t) - L_i(t)) ≤ E_j(t) (or, equivalently, if D_i(t) ≤ D_j(t)), T_i will be completed before T_j is completed.

Observe that if D_i(t) ≤ D_j(t), T_i will be completed before T_j under LLF scheduling. Therefore, in this case, we can execute T_i with no preemption until its completion and T_j will not miss its deadline. In the case of D_i(t) > D_j(t), we can execute T_i with no preemption for D_j(t) - L_i(t), and by that time T_j must be executed in order not to miss its deadline.

Lemma 2. At time t, suppose there is a schedulable task set containing T_i and T_j satisfying L_i(t) < L_j(t) and D_i(t) > D_j(t). Then T_i and T_j are still schedulable at time t + α when T_i has been executed with no preemption for α, where 0 < α ≤ D_j(t) - L_i(t).

Proof:
At time t, the following conditions are true for T_i and T_j:

    L_i(t) ≥ E_j(t)    (1)
    L_j(t) ≥ 0         (2)

Now, for T_i and T_j to be schedulable at time t + α, the following conditions must be satisfied:

    L_i(t + α) ≥ E_j(t + α)    (3)
    L_j(t + α) ≥ 0             (4)

Since L_i(t + α) = L_i(t) and E_j(t + α) = E_j(t), condition (3) is true. And since L_j(t + α) = L_j(t) - α and L_j(t) - α ≥ L_i(t) - E_j(t) ≥ 0, condition (4) is also true. ■
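
As a concrete check of Lemma 2, the following C sketch instantiates conditions (1)-(4) for one hand-picked task pair satisfying L_i(t) < L_j(t) and D_i(t) > D_j(t); the numbers are ours, chosen only for illustration:

#include <assert.h>
#include <stdio.h>

int main(void)
{
    /* Hand-picked example (not from the paper):
       Ti: Li = 2, Ei = 4, Di = Li + Ei = 6
       Tj: Lj = 3, Ej = 1, Dj = Lj + Ej = 4
       so Li < Lj and Di > Dj, as required by Lemma 2. */
    int Li = 2, Ei = 4, Di = Li + Ei;
    int Lj = 3, Ej = 1, Dj = Lj + Ej;

    assert(Li < Lj && Di > Dj);
    assert(Li >= Ej);            /* condition (1) */
    assert(Lj >= 0);             /* condition (2) */

    /* Run Ti with no preemption for any alpha with 0 < alpha <= Dj - Li. */
    for (int alpha = 1; alpha <= Dj - Li; alpha++) {
        int Li_a = Li;           /* the running task keeps its laxity   */
        int Ej_a = Ej;           /* Tj's remaining work is unchanged    */
        int Lj_a = Lj - alpha;   /* the waiting task loses laxity       */

        assert(Li_a >= Ej_a);    /* condition (3) still holds */
        assert(Lj_a >= 0);       /* condition (4) still holds */
        printf("alpha=%d: Li=%d Ej=%d Lj=%d -> still schedulable\n",
               alpha, Li_a, Ej_a, Lj_a);
    }
    return 0;
}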
Now let T_a be the task with the smallest remaining execution time among the set of tasks with the least laxity. This means T_a's deadline is the earliest among those tasks. And let T_min be the task which has the earliest deadline among all tasks whose laxity is larger than that of T_a. With no loss in schedulability (i.e., without missing the deadlines of the other tasks), T_a may be executed with no preemption for D_min(t) - L_a(t), where D_min(t) is the deadline of task T_min and L_a(t) is the laxity of task T_a. If D_a(t) ≤ D_min(t) is satisfied, T_a will be completed with no preemption.

Therefore, the MLLF scheduling algorithm allows laxity inversion for E_min(t) - 1, where E_min(t) is the remaining execution time of task T_min at time t. The MLLF scheduling algorithm reevaluates the priorities of tasks when any of the following conditions is met:
1) when the currently running task T_a terminates,
2) when the currently running task T_a has executed for D_min(t) - L_a(t),
3) when a new task arrives.

Figure 1 presents the MLLF scheduling algorithm. If T_min does not exist, T_a is executed until its completion.

/* Rescheduling point conditions                               */
/* 1. T_a terminates                                           */
/* 2. T_a uses up the time quantum E_a(t) or D_min(t) - L_a(t) */
/* 3. a new task arrives                                       */

Algorithm MLLF
begin
  find T_a satisfying V1 = { T_i | L_i(t) ≤ L_j(t), T_i, T_j ∈ T } and T_a = { T_i | E_i(t) ≤ E_j(t), T_i, T_j ∈ V1 };
  find T_min satisfying T_min = { T_i | D_i(t) ≤ D_j(t) and L_i(t), L_j(t) > L_a(t), T_i, T_j ∈ T };
  execute T_a until any rescheduling point condition is satisfied;
end

Figure 1. MLLF scheduling algorithm

Consider the two tasks T1 and T2 shown in Table 1, whose laxities are the same. Figure 2 and Figure 3 show the schedules produced by the LLF and MLLF scheduling algorithms, respectively, for the task set in Table 1. Context-switch points are indicated by down-arrows. Five context switches occur with LLF, whereas only one context switch occurs with MLLF. In this case the Laxity Inversion Duration is 2, from time 1 to time 3.

Table 1. An example task set

        Remaining Execution Time    Deadline    Laxity
  T1    3                           6           3
  T2    4                           7           3

[Figure 2 omitted: timeline of T1 and T2 over time units 0-7]
Figure 2. Schedule generated by the LLF scheduling algorithm

[Figure 3 omitted: timeline of T1 and T2 over time units 0-7]
Figure 3. Schedule generated by the MLLF scheduling algorithm
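
For readers who prefer code to pseudocode, the following C sketch gives one possible reading of the selection rule in Figure 1 and applies it at time t = 0 to the task set of Table 1; it is an illustrative sketch under our own assumptions, not the authors' kernel implementation:

#include <stdio.h>

struct task {
    const char *name;
    int exec;      /* remaining execution time E(t) */
    int deadline;  /* relative deadline D(t)        */
};

static int laxity(const struct task *t) { return t->deadline - t->exec; }

/* Select the running task T_a and its no-preemption budget, following Figure 1:
   T_a has the smallest remaining execution time among the least-laxity tasks;
   T_min has the earliest deadline among the tasks whose laxity exceeds L_a(t). */
static int mllf_select(const struct task *tasks, int n, int *budget)
{
    int a = 0;
    for (int i = 1; i < n; i++) {
        int li = laxity(&tasks[i]), la = laxity(&tasks[a]);
        if (li < la || (li == la && tasks[i].exec < tasks[a].exec))
            a = i;
    }
    int min = -1;
    for (int i = 0; i < n; i++) {
        if (laxity(&tasks[i]) > laxity(&tasks[a]) &&
            (min < 0 || tasks[i].deadline < tasks[min].deadline))
            min = i;
    }
    /* If T_min exists, T_a may run for D_min(t) - L_a(t) before rescheduling
       (its own termination also triggers rescheduling); otherwise it runs to
       completion. */
    *budget = (min >= 0) ? tasks[min].deadline - laxity(&tasks[a]) : tasks[a].exec;
    return a;
}

int main(void)
{
    /* Task set of Table 1 at time t = 0. */
    struct task set[] = { { "T1", 3, 6 }, { "T2", 4, 7 } };
    int budget;
    int a = mllf_select(set, 2, &budget);

    /* Prints: run T1 for 3 time units without preemption */
    printf("run %s for %d time units without preemption\n", set[a].name, budget);
    return 0;
}

With the Table 1 values, no T_min exists (no task has laxity larger than L_a(0) = 3), so T1 runs to completion and only one context switch occurs, consistent with Figure 3.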


Here, we prove the optimality of the proposed MLLF scheduling algorithm.

Theorem 1. MLLF is optimal.

Proof:
The proof is by induction. Let S_n be the task set at the n-th rescheduling point (time t_n) and S_{n+1} be the task set at the (n+1)-th rescheduling point (time t_{n+1}). Suppose S_n is schedulable; if MLLF always produces a schedulable set S_{n+1}, then MLLF is optimal.
In MLLF, rescheduling occurs ① when the currently running task T_a terminates, ② when the currently running task T_a has executed for D_min(t) - L_a(t), and ③ when a new task arrives.
At time t_n, the task sets V and V̄ are defined as follows:

    V = { T_i | L_i(t_n) ≤ L_j(t_n), T_i, T_j ∈ S_n }
    V̄ = S_n - V

Since T_a has the earliest deadline among the tasks in V, executing T_a with no preemption cannot cause any other task in V to miss its deadline. However, there may be a task (or tasks) in V̄ with an earlier deadline than T_a. A task T_x in V̄ falls into one of two cases:

    A) D_a(t_n) ≤ D_x(t_n)
    B) D_a(t_n) > D_x(t_n)

where D_x(t_n) is the deadline of T_x at time t_n. In case A, executing T_a with no preemption cannot cause T_x to miss its deadline. In case B, by Lemma 2, executing T_a with no preemption for D_x(t) - L_a(t) cannot cause T_x to miss its deadline. Therefore, executing T_a with no preemption for D_min(t) - L_a(t), where T_min has the earliest deadline among the tasks in V̄, cannot cause any task in V̄ to miss its deadline. ■

3. Performance Evaluation

In this section, the LLF and MLLF scheduling algorithms are tested by simulation in order to compare their global performance. When there are n tasks at time t, the LLF scheduling algorithm performs at most n - 1 operations to find the task with the least laxity. The MLLF scheduling algorithm, however, performs at most 3n - 3 operations to find T_a and T_min.

Definition 2. The notations used to evaluate performance are as follows:
- C_CS: the cost of a context switch
- C_LLF, C_MLLF: the cost of the LLF and MLLF scheduling algorithms, respectively
- N_LLF, N_MLLF: the number of context switches produced by the LLF and MLLF scheduling algorithms, respectively
- T_LLF, T_MLLF: the total scheduling costs:

    T_LLF = N_LLF × (C_CS + C_LLF)
    T_MLLF = N_MLLF × (C_CS + C_MLLF)

- global performance ratio of the MLLF and LLF scheduling algorithms:

    global performance ratio = T_LLF / T_MLLF

We analyze the schedulability, the number of context switches, and the global performance ratio using the simulation results. In the simulation experiments, we assume that:
- All tasks are periodic and the relative deadline of each task is equal to its period.
- The period of each task is chosen as a random variable with uniform distribution between 10 and 100 time units.
- The worst-case execution time is chosen as a random variable with uniform distribution between 1 and 70 time units.
- C_LLF and C_MLLF are n - 1 and 3n - 3, respectively.
- C_CS is 500. We assume that the cost of the scheduling algorithm depends on the cost of memory accesses rather than processor operations. Since the time overhead of one context switch between processes is about 500 times larger than that of one memory access on a general computer system, C_CS is assumed to be 500.
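
As a quick sanity check of the cost model in Definition 2 under the assumptions above, the following C fragment evaluates T_LLF, T_MLLF, and the global performance ratio for one hypothetical run; the context-switch counts are made-up placeholders, not measured simulation results:

#include <stdio.h>

int main(void)
{
    /* Cost model from Definition 2 with the stated assumptions:
       C_CS = 500, C_LLF = n - 1, C_MLLF = 3n - 3. */
    int n = 10;              /* number of tasks            */
    int c_cs = 500;          /* cost of one context switch */
    int c_llf = n - 1;       /* per-invocation cost of LLF */
    int c_mllf = 3 * n - 3;  /* per-invocation cost of MLLF */

    /* Hypothetical context-switch counts for one run (placeholders). */
    long n_llf = 1000, n_mllf = 500;

    double t_llf = n_llf * (double)(c_cs + c_llf);
    double t_mllf = n_mllf * (double)(c_cs + c_mllf);

    /* A ratio above 1.0 means MLLF spends less total scheduling overhead. */
    printf("T_LLF = %.0f, T_MLLF = %.0f, global performance ratio = %.2f\n",
           t_llf, t_mllf, t_llf / t_mllf);
    return 0;
}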
Both the LLF and the MLLF scheduling algorithms have the same schedulability for task sets with processor utilization below 1.0.

In Figure 4, N_MLLF/N_LLF (i.e., the context-switch ratio) is shown as a function of processor utilization under the assumption that the number of tasks is random. Figure 5 shows N_MLLF/N_LLF with the number of tasks fixed at 10 and 20. As shown in Figures 4 and 5, as the processor utilization increases, the MLLF scheduling algorithm performs much better than the LLF scheduling algorithm. This is due to the higher likelihood of laxity ties as the processor utilization increases.

[Figure 4 omitted: N_MLLF/N_LLF versus processor utilization (0.1-1.0)]
Figure 4. Comparison of the number of context switches

[Figure 5 omitted: N_MLLF/N_LLF versus processor utilization (0.1-1.0), for 10 and 20 tasks]
Figure 5. Comparison of the number of context switches with the number of tasks = 10 and 20

In Figure 6, N_MLLF/N_LLF is shown when the processor utilization is 0.9. Although the ratio increases slightly as the number of tasks increases, the number of context switches with MLLF is half that of LLF on average.

[Figure 6 omitted: N_MLLF/N_LLF versus the number of periodic tasks (2-20), processor utilization = 0.9]
Figure 6. Comparison of the number of context switches (processor utilization = 0.9)

Figure 7 shows the global performance ratio with the number of tasks fixed at 10 and 20. As expected, as the processor utilization increases, MLLF performs much better than the LLF algorithm.

[Figure 7 omitted: global performance ratio versus processor utilization (0.1-1.0), for 10 and 20 tasks]
Figure 7. Global performance ratio with the number of tasks = 10 and 20

In Figure 8, the global performance ratio is shown when the processor utilization is 0.9. As the number of tasks increases, the performance of MLLF goes down due to the cost of the algorithm itself. Therefore, in order to maximize the performance of the MLLF algorithm, an efficient implementation approach and data structures are needed.

[Figure 8 omitted: global performance ratio versus the number of periodic tasks (2-20), processor utilization = 0.9]
Figure 8. Global performance ratio

4. Conclusion

In this paper, we proposed the Modified Least-Laxity-First (MLLF) scheduling algorithm, which solves the main disadvantage of the LLF scheduling algorithm. The MLLF scheduling algorithm defers preemption by allowing laxity inversion as long as the deadlines of tasks are not missed. Hence, the MLLF scheduling algorithm avoids the degradation of system performance. We proved the optimality of the MLLF scheduling algorithm. The simulation results showed that the MLLF scheduling algorithm performs better than the LLF scheduling algorithm, especially when the number of processes is small and the processor utilization is high.
We are now investigating an efficient implementation approach and data structures to incorporate the MLLF scheduling algorithm into the real-time kernel we are developing.

References

[1] M. L. Dertouzos, "Control Robotics: The Procedural Control of Physical Processes," Information Processing 74, North-Holland Publishing Company, 1974.
[2] A. K. Mok, "Fundamental Design Problems of Distributed Systems for the Hard-Real-Time Environment," Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, May 1983.
[3] M. B. Jones, J. S. Barrera III, A. Forin, P. J. Leach, D. Rosu, and M.-C. Rosu, "An Overview of the Rialto Real-Time Architecture," in Proceedings of the Seventh ACM SIGOPS European Workshop, Connemara, Ireland, pp. 249-256, September 1996.
[4] G. C. Buttazzo, "Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications," Kluwer Academic Publishers, 1997.
