
Scheduling

Processes
Threads
Interprocess Communication (IPC)
CPU Scheduling
Deadlocks

Computer Architecture

WS 06/07

Dr.-Ing. Stefan Freinatis

Scheduling
• Better CPU utilization through multiprogramming
• Scheduling: switching the CPU among processes
• Productivity depends on CPU bursts

Figure from [Ta01 p.134]

Short-Term Scheduler
Also called CPU scheduler. Selects one process from among the ready processes in memory and dispatches it. The dispatcher is a module that finally gives CPU control to the selected process (switching context, switching from kernel mode to user mode, loading the PC).
Short-term scheduler

[Diagram: jobs enter the job queue; ready processes wait in the ready queue, from which the short-term scheduler dispatches one process to the CPU]


Scheduling decisions
CPU scheduling decisions may take place when a process
1. switches from running to waiting,
2. switches from running to ready,
3. switches from waiting to ready, or
4. terminates.
Figure from [Sil00 p.89]


Preemptive(ness)
Preemptiveness determines the style of multitasking.

With non-preemptive scheduling (cooperative scheduling), the CPU is taken away from a running process only because the process became blocked, it completed, or it voluntarily gave up the CPU. With preemptive scheduling the operating system can additionally force a context switch at any time to satisfy the priority policies. This allows the system to guarantee each process a regular "slice" of operating time more reliably.

Preemptive(ness)
Preemptive scheduling:
• Scheduler can interrupt a running process.
• Special timer hardware is required for the timer-controlled interrupts of the scheduler.
• Synchronization of shared resources is needed: an interrupted process may leave shared data inconsistent.

Cooperative (non-preemptive) scheduling:

• CPU occupation depends on the process, in particular on its CPU burst distribution.
• Applicable on any hardware platform.
• Fewer problems with shared resources: at least the elementary parts of shared data structures are not left inconsistent.

Scheduling Criteria
The scheduling policy depends on what criteria are emphasized [Sil00 p.140]

CPU Utilization
Keeping the CPU as busy as possible. The utilization usually ranges from 40% (lightly loaded system) to 90% (heavily loaded system).

Throughput
The number of processes that are completed per time unit. For long processes the throughput rate may be one process per hour, for short ones it may be 10 per second.

Turnaround time
The interval from the time of submission to the time of completion of a process. Includes the time to get into memory, times spent in the ready queue, execution time on CPU and I/O time.
[ With real-time scheduling this time-period is called reaction time ]

Scheduling Criteria
Waiting time

The scheduling algorithm does not affect the time a process executes or spends doing I/O. It only affects the amount of time a process spends waiting in the ready queue. The waiting time is the sum of time spent waiting in the ready queue.

Response time
Irrespective of the turnaround time, some processes produce an output fairly early and continue computing new results while previous results are output to the user. The response time is the time from the submission of a request until the first response is produced.
[ Remark: In the exercises the response time is defined as the time from submission until the process starts (that is, until the first machine instruction is executing). ]

Different systems (batch systems, interactive computers, control systems) may put focus on different scheduling criteria. See next slide.


Criteria importance by system [Ta01 p.137]



Optimization
Common criteria:
• Maximize(average(CPU utilization))
• Maximize(average(throughput))
• Minimize(average(turnaround time))
• Minimize(average(waiting time))
• Minimize(average(response time))


Sometimes it is desirable to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users receive a good service in terms of responsiveness, we may want to minimize the maximum response time. [Note: we do not delve into optimization any further].

Static / Dynamic Scheduling



With static scheduling all decisions are made before the system starts running. This only works when there is perfect information available in advance about the work that needs to be done and the deadlines that have to be met. Static scheduling - if applied - is used in real-time systems that operate in a deterministic environment. With dynamic scheduling all decisions are made at run time. Little needs to be known in advance. Dynamic scheduling is required when the number and type of requests is not known beforehand (non-deterministic environment). Interactive computer systems like personal computers use dynamic scheduling. The scheduling algorithm is carried out as a (hopefully short) system process in between the other processes.


Scheduling Algorithms
• First Come First Served
• Shortest Job First
• Priority Scheduling
• Round Robin
• Multilevel Queueing

These algorithms typically are dynamic scheduling algorithms.


First Come - First Served


The process that entered the ready queue first will be the first one scheduled. The ready queue is a FIFO queue. Cooperative scheduling (no preemption).
Process   Burst time
P1        24 ms
P2         3 ms
P3         3 ms

Let the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:

P1: 0-24 ms | P2: 24-27 ms | P3: 27-30 ms

Waiting time for P1 = 0 ms, for P2 = 24 ms, for P3 = 27 ms. Average waiting time: (0 ms + 24 ms + 27 ms) / 3 = 17 ms.

First Come - First Served


Let the processes now arrive in the order P2, P3, P1. The Gantt chart for the schedule is:

P2: 0-3 ms | P3: 3-6 ms | P1: 6-30 ms

Waiting time for P1 = 6 ms, for P2 = 0 ms, for P3 = 3 ms. Average waiting time: (6 ms + 0 ms + 3 ms) / 3 = 3 ms.

Much better average waiting time than in the previous case. With FCFS, the waiting time is generally not minimal. No preemption.
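The arithmetic is easy to check in code. A minimal Python sketch (burst times taken from the example above, all processes arriving at t = 0; the function name is illustrative):

```python
def fcfs_waiting_times(bursts):
    """Per-process waiting times under FCFS, given (name, burst) pairs in arrival order."""
    waiting, clock = {}, 0
    for name, burst in bursts:
        waiting[name] = clock      # time spent in the ready queue before dispatch
        clock += burst             # the process then runs to completion (no preemption)
    return waiting

for order in ([("P1", 24), ("P2", 3), ("P3", 3)],
              [("P2", 3), ("P3", 3), ("P1", 24)]):
    w = fcfs_waiting_times(order)
    print(w, "average:", sum(w.values()) / len(w), "ms")
# First order: average 17 ms; second order: average 3 ms, as computed above.
```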

Shortest Job First (SJF)


Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time. Two schemes:

Non-preemptive SJF
Once the CPU is given to the process, it cannot be preempted until the CPU burst is completed.

Preemptive SJF
When a new process arrives with a CPU burst length less than the remaining burst time of the current process, the CPU is given to the new process. This scheme is known as Shortest Remaining Time First (SRTF).

With respect to the waiting time, SJF is provably optimal. It gives the minimum average waiting time for a given set of processes. Processes with long bursts may suffer from starvation.

Shortest Job First


Process   Arrival time   Burst time
P1        0 ms           7 ms
P2        2 ms           4 ms
P3        4 ms           1 ms
P4        5 ms           4 ms

For non-preemptive scheduling the Gantt chart is:

P1: 0-7 ms | P3: 7-8 ms | P2: 8-12 ms | P4: 12-16 ms

Waiting time for P1 = 0 ms, for P2 = 6 ms, for P3 = 3 ms, for P4 = 7 ms. Average waiting time: (0 ms + 6 ms + 3 ms + 7 ms) / 4 = 4 ms.

Shortest Job First


Process   Arrival time   Burst time
P1        0 ms           7 ms
P2        2 ms           4 ms
P3        4 ms           1 ms
P4        5 ms           4 ms

For preemptive scheduling (SRTF) the Gantt chart is:


P1: 0-2 ms | P2: 2-4 ms | P3: 4-5 ms | P2: 5-7 ms | P4: 7-11 ms | P1: 11-16 ms

Waiting time for P1 = 9 ms, for P2 = 1 ms, for P3 = 0 ms, for P4 = 2 ms. Average waiting time: (9 ms + 1 ms + 0 ms + 2 ms) / 4 = 3 ms.
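The preemptive schedule can be reproduced with a small millisecond-step simulation. A Python sketch (data from the example above; ties on remaining time are broken arbitrarily by min()):

```python
def srtf(processes):
    """Preemptive SJF (SRTF) in 1 ms steps. processes: (name, arrival, burst) in ms."""
    arrival = {n: a for n, a, b in processes}
    burst = {n: b for n, a, b in processes}
    remaining = dict(burst)
    waiting = {}
    t = 0
    while len(waiting) < len(processes):
        ready = [n for n in remaining if arrival[n] <= t and remaining[n] > 0]
        if not ready:                        # CPU idle until the next arrival
            t += 1
            continue
        current = min(ready, key=remaining.get)   # shortest remaining time first
        remaining[current] -= 1
        t += 1
        if remaining[current] == 0:          # waiting = completion - arrival - burst
            waiting[current] = t - arrival[current] - burst[current]
    return waiting

print(srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# -> {'P3': 0, 'P2': 1, 'P4': 2, 'P1': 9}, average 3 ms as above
```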

Shortest Job First


Predicting the CPU burst time

The next CPU burst is predicted as the exponential average of the measured lengths of previous bursts:

τ(n+1) = α · t(n) + (1 - α) · τ(n)

τ(n+1) = predicted length of the next CPU burst
t(n) = actual length of the n-th burst
α: 0 ≤ α ≤ 1, controls the relative contribution of the recent and the past history

Shortest Job First


Exponential average for α = 1/2 and τ(0) = 10

[Figure from [Sil00 p.144]: predicted (τ) and actual (t) CPU burst lengths over successive bursts i = 0 ... 8]

Shortest Job First


Exponential average for α = 1/2 and τ(0) = 10

τ(1) = 0.5 · 6 + 0.5 · 10 = 8
τ(2) = 0.5 · 4 + 0.5 · 8 = 6
τ(3) = 0.5 · 6 + 0.5 · 6 = 6
τ(4) = 0.5 · 4 + 0.5 · 6 = 5
τ(5) = 0.5 · 13 + 0.5 · 5 = 9
τ(6) = 0.5 · 13 + 0.5 · 9 = 11
τ(7) = 0.5 · 13 + 0.5 · 11 = 12

τ(n+1) = α · t(n) + (1 - α) · τ(n)
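The predictor is a one-liner per burst. A Python sketch that reproduces the values above for α = 1/2 and τ(0) = 10 (function name is illustrative):

```python
def exponential_average(bursts, tau0=10.0, alpha=0.5):
    """Yield the predicted length of the next CPU burst after each measured burst."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau   # tau(n+1) = alpha*t(n) + (1-alpha)*tau(n)
        yield tau

print(list(exponential_average([6, 4, 6, 4, 13, 13, 13])))
# -> [8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```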

Priority Scheduling
Each process is assigned a priority. The process with highest priority is allocated the CPU. Two schemes:
Non-preemptive

Preemptive
When a new process arrives with a priority higher than that of the running process, the CPU is given to the new process.

SJF scheduling is a special case of priority scheduling in which the priority is the inverse of the CPU burst length. Solution to starvation problem: The priority of a process increases as the waiting time increases (aging technique).
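The aging technique can be sketched as follows. This is illustrative only: the linear aging rule and the aging rate are assumptions, not taken from the slides; low numbers mean high priority, as on the next slide.

```python
def pick_next(ready, clock, aging_rate=0.1):
    """ready: list of (name, base_priority, enqueue_time); low number = high priority.
    The effective priority improves (decreases) the longer a process has been waiting."""
    def effective(entry):
        name, base, enqueued = entry
        return base - aging_rate * (clock - enqueued)
    return min(ready, key=effective)

print(pick_next([("P1", 3, 0), ("P2", 5, 0)], clock=0))    # -> ('P1', 3, 0): better base priority
print(pick_next([("P1", 3, 25), ("P2", 5, 0)], clock=30))  # -> ('P2', 5, 0): P2 has waited much longer
```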

Priority Scheduling
Assume low numbers represent high priorities.

Process   Burst time   Priority
P1        10 ms        3
P2         1 ms        1
P3         2 ms        4
P4         1 ms        5
P5         5 ms        2

All processes arrive at time 0. For non-preemptive scheduling the Gantt chart is:

P2: 0-1 ms | P5: 1-6 ms | P1: 6-16 ms | P3: 16-18 ms | P4: 18-19 ms


Priority Scheduling
Process   Burst time   Arrival time   Priority
P1        10 ms         0 ms          3
P2         1 ms         2 ms          1
P3         2 ms         2 ms          4
P4         1 ms         6 ms          5
P5         5 ms        12 ms          2

Here: preemptive scheduling.

[Timing diagram (0-20 ms): processes sorted by priority (P2, P5, P1, P3, P4), showing for each process when it is running and when it is ready]

Round Robin
Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After the quantum has elapsed, the process is preempted and added to the end of the ready queue.
Burst < quantum
When the current CPU burst is smaller than the time quantum, the process itself releases the CPU (changing state into waiting).

Burst > quantum
The process is interrupted and another process is dispatched.

If the time quantum is very large compared to the processes' burst times, the scheduling policy is the same as FCFS. If the time quantum is very small, the round robin policy turns into processor sharing (it seems as if each process has its own processor).

Round Robin
Process   Burst time
P1        53 ms
P2        17 ms
P3        68 ms
P4        24 ms

Suppose a time quantum of 20 ms. The Gantt chart for the schedule is:

P1: 0-20 | P2: 20-37 | P3: 37-57 | P4: 57-77 | P1: 77-97 | P3: 97-117 | P4: 117-121 | P1: 121-134 | P3: 134-154 | P3: 154-162 (t in ms)

Waiting time for P1 = 0 + 57 + 24 = 81 ms, for P2 = 20 ms, for P3 = 37 + 40 + 17 = 94 ms, for P4 = 57 + 40 = 97 ms. Average waiting time: (81 + 20 + 94 + 97) / 4 = 73 ms.
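The waiting times can be reproduced with a short queue simulation. A Python sketch (data from the example above; all processes are assumed to arrive at t = 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: list of (name, burst in ms), all arriving at t = 0. Returns waiting times."""
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)
    last_ready = {name: 0 for name, _ in bursts}   # when the process last became ready
    waiting = {name: 0 for name, _ in bursts}
    t = 0
    while queue:
        name = queue.popleft()
        waiting[name] += t - last_ready[name]      # time spent in the ready queue
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] > 0:                    # quantum expired: back to the end
            last_ready[name] = t
            queue.append(name)
    return waiting

print(round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], quantum=20))
# -> {'P1': 81, 'P2': 20, 'P3': 94, 'P4': 97}, average 73 ms
```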

Round Robin
Round Robin typically has higher average turnarounds than SJF, but has better response.
Context switch and performance

The smaller the time quantum, the more the context switches affect performance. The figure shows a process with a 10 ms burst and time quanta of 12, 6 and 1 ms.

Context switches cause overhead.

Figure from [Sil00 p.148]

Round Robin
Turnaround time depends on time quantum
Process   Burst time
P1        6 ms
P2        3 ms
P3        1 ms
P4        7 ms

All processes arrive at same time. Ready queue order: P1, P2, P3, P4

[Figure from [Sil00 p.149]: turnaround time as a function of the time quantum]

Round Robin
Average turnaround time for time quantum = 1ms
[Timing diagram (0-20 ms): P1-P4 interleaved in 1 ms slices]

Turnaround (P1) = 15 ms Turnaround (P2) = 9 ms Turnaround (P3) = 3 ms Turnaround (P4) = 17 ms

Average turnaround: (15 + 9 + 3 + 17) ms / 4 = 11 ms


Round Robin
Average turnaround time for time quantum = 2 ms
[Timing diagram (0-20 ms): P1-P4 interleaved in 2 ms slices]

Turnaround (P1) = 14 ms Turnaround (P2) = 10 ms Turnaround (P3) = 5 ms Turnaround (P4) = 17 ms

Average turnaround: (14 + 10 + 5 + 17) ms / 4 = 11.5 ms


Round Robin
Average turnaround time for time quantum = 6 ms
[Timing diagram (0-20 ms): time quantum 6 ms]

Side note: the policy now behaves like FCFS.

Turnaround (P1) = 6 ms Turnaround (P2) = 9 ms Turnaround (P3) = 10 ms Turnaround (P4) = 17 ms

Average turnaround: (6 + 9 + 10 + 17) ms / 4 = 10.5 ms


Multilevel Queue
The ready queue is partitioned into separate queues. Each queue has its own CPU scheduling algorithm. There is also scheduling between the queues (inter queue).
Inter-queue scheduling: fixed priority or time slicing.

Figure from [Sil00 p.150]
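A multilevel queue can be sketched as a set of per-level queues with a fixed priority between them. The structure below is illustrative (the level names are assumptions), not the exact configuration from [Sil00]:

```python
from collections import deque

class MultilevelQueue:
    """Fixed-priority multilevel queue: always serve the highest-priority
    non-empty level; each level may use its own intra-queue policy."""
    def __init__(self, levels):
        # levels: names ordered from highest to lowest priority
        self.queues = {name: deque() for name in levels}
        self.order = levels

    def enqueue(self, level, process):
        self.queues[level].append(process)

    def next_process(self):
        for level in self.order:                      # fixed priority between queues
            if self.queues[level]:
                return self.queues[level].popleft()   # FCFS (or RR) within a queue
        return None

mlq = MultilevelQueue(["system", "interactive", "batch"])
mlq.enqueue("batch", "P3")
mlq.enqueue("interactive", "P1")
print(mlq.next_process())   # -> 'P1' (the interactive queue beats the batch queue)
```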

Real-Time Scheduling
[Timing diagram: the technical process issues requests with period Tdist; after a request at time r the computer process waits (Tw, ready), the context switch takes TCS, execution (inclusive output) starts at s, takes e, and completes at c; the reaction time TR runs from r to c and must end before the deadline d = r + TRmax]

Real-time condition: TR ≤ TRmax, otherwise there is a real-time violation.


Real-Time Scheduling

A technical process generates events (periodically or not). A real-time computing system is requested to respond to the events. The response must be delivered within the time span TRmax. The technical system requests computation by raising an interrupt in the real-time system at time r. The time from the occurrence of the request (interrupt) until the context switch of the corresponding computer process is the waiting time Tw. Switching the context takes the time TCS. The point in time at which execution starts is the start time s. The execution time e is the net CPU time needed for execution (even if the process is interrupted). The process finishes at completion time c.

Real-Time Scheduling
The reaction time (also called response time) TR is the time interval between the request (the interrupt) and the end of the process: TR = Tw + TCS + e. This is the time interval the technical system has to wait until response is received. Starting from the request, the maximum response time TRmax defines the deadline d (a point in time) at which the real-time system must have responded. A hard real-time system must not violate the real-time conditions. Note: For all following considerations, the context switch time TCS is neglected, that is, we assume TCS = 0 s.
In accordance with D. Zöbel, W. Albrecht: Echtzeitsysteme, page 24, ISBN 3-8266-0150-5


Real-Time Violation
Example RT.1

Two technical processes TP1 and TP2 on some machine require responses from a real-time system. The corresponding computer processes are P1 and P2. The technical processes generate events as shown below; a response must be given at the latest just before the next event (thus within Tdist).

[Event diagrams for TP1 and TP2 (0-10 ms): each process generates events a, b, c; the allowed response windows are TRmax1 and TRmax2]

The execution time of P1 is 1ms, the execution time of P2 is 4 ms, and the scheduling algorithm is preemptive priority scheduling. The context switch time is considered negligible (0 s).

Real-Time Violation
Case 1: P1 low priority, P2 high priority

Process   Execution time e   TRmax   Priority
P1        1 ms               4 ms    LOW
P2        4 ms               6 ms    HIGH

[Timing diagrams for the machine, TP1, TP2, P1 and P2 (0-10 ms): P2 runs first because of its higher priority; P1 can only respond after P2 has finished and so misses TRmax1]

Real-time violation, response to TP1 is too late!

Real-Time Violation
Case 2: P1 high priority, P2 low priority

Process   Execution time e   TRmax   Priority
P1        1 ms               4 ms    HIGH
P2        4 ms               6 ms    LOW

[Timing diagrams for the machine, TP1, TP2, P1 and P2 (0-10 ms): P1 runs first (1 ms) and responds within TRmax1; P2 follows and still responds within TRmax2]

No real-time violation. Fine!

Real-Time Scheduling
Theorem
For a system with n processors (n ≥ 2) there is no optimal scheduling algorithm for a set of processes P1 ... Pm unless
• all starting times s1, ..., sm,
• all execution times e1, ..., em,
• all completion times c1, ..., cm

are known (deterministic systems).


Often, technical processes (or natural processes) are non-deterministic, at least to a part.

An algorithm is optimal when it finds a feasible solution whenever one exists.



Branch-and-Bound Scheduling
Find a schedule by searching all combinations of processes.
Of each process (non-preemptive!) the following must be known in advance:

• the request time (interrupt arrival time), known in the case of periodic technical processes
• the response time TR, known from analysis or worst-case measurements
• the deadline d, given by the technical system

Example:

Process   Request time ri   Execution time e   Deadline di
P1        0 ms              20 ms              30 ms
P2        0 ms              50 ms              90 ms
P3        0 ms              30 ms              100 ms


Branch-and-Bound Scheduling
Search tree for the example
Level 1 nodes: P1 | P2 | P3
Level 2 nodes: (P1, P2) | (P1, P3) | (P2, P1) | (P2, P3) | (P3, P1) | (P3, P2)
Level 3 leaves: (P1, P2, P3) | (P1, P3, P2) | (P2, P1, P3) | (P2, P3, P1) | (P3, P1, P2) | (P3, P2, P1)

For n processes: tree depth (number of levels) = n, number of combinations = n!



Branch-and-Bound Scheduling
Sequence P1, P2, P3: P1 0-20 ms, P2 20-70 ms, P3 70-100 ms. All deadlines met (d1 = 30 ms, d2 = 90 ms, d3 = 100 ms).

Sequence P1, P3, P2: P1 0-20 ms, P3 20-50 ms, P2 50-100 ms. Real-time violation (P2 misses d2 = 90 ms).

Branch-and-Bound Scheduling
Sequence P2, P1, P3: P2 0-50 ms, P1 50-70 ms, P3 70-100 ms. Real-time violation (P1 misses d1 = 30 ms).

Sequence P2, P3, P1: P2 0-50 ms, P3 50-80 ms, P1 80-100 ms. Real-time violation (P1 misses d1 = 30 ms).

Branch-and-Bound Scheduling
Sequence P3, P1, P2: P3 0-30 ms, P1 30-50 ms, P2 50-100 ms. Real-time violation (P1 misses d1 = 30 ms, P2 misses d2 = 90 ms).

Sequence P3, P2, P1: P3 0-30 ms, P2 30-80 ms, P1 80-100 ms. Real-time violation (P1 misses d1 = 30 ms).

Branch-and-Bound Scheduling
Search tree for the example
[Search tree as on the previous tree slide; of all six leaves, only the sequence P1, P2, P3 meets every deadline]

The only solution: P1 must be first, P2 must be second.



Branch-and-Bound Scheduling
For small n one may directly investigate the n! combinations at the leaves. For bigger n it is recommended to start from the root and investigate the nodes level by level. When a node violates the real-time condition, the corresponding subtree can be disregarded. (A small code sketch of the search follows below the tree.)
[Search tree with pruning: as soon as a node (partial sequence) violates the real-time condition, its subtree is not expanded further]
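The exhaustive search itself is short to code. A Python sketch for the non-preemptive case with all request times at 0 ms (as in the example): it checks each permutation and rejects it as soon as a prefix misses a deadline. The level-by-level subtree pruning described above would additionally avoid re-checking shared prefixes; it is omitted here for brevity.

```python
from itertools import permutations

def feasible_schedules(processes):
    """processes: list of (name, execution_time, deadline), all requested at t = 0.
    Yields every non-preemptive order in which no process misses its deadline."""
    def meets_deadlines(order):
        t = 0
        for name, e, d in order:
            t += e                  # the process runs to completion
            if t > d:               # deadline missed: reject this order immediately
                return False
        return True
    for order in permutations(processes):
        if meets_deadlines(order):
            yield [name for name, _, _ in order]

procs = [("P1", 20, 30), ("P2", 50, 90), ("P3", 30, 100)]
print(list(feasible_schedules(procs)))   # -> [['P1', 'P2', 'P3']]
```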

Deadline Scheduling

Priority Scheduling. The process with the closest deadline has highest priority. When processes have the same deadline, selection is done arbitrarily or according to FCFS.

Non-preemptive
The algorithm is carried out after a running process finishes. Intermediate requests are saved (interrupt flip-flops) meanwhile.

Preemptive
The algorithm is carried out when a request arrives (interrupt routine) or after a process finishes.

The deadline scheduling algorithm is also known as earliest deadline first (EDF). The algorithm is optimal for the one-processor case.
If there is a solution, it is found. If none is found then there is no solution.

Deadline Scheduling
Example RT.2: Non-preemptive scheduling

Process   Request time ri   Execution time e   Deadline di
P1        0 ms              4 ms               5 ms
P2        0 ms              1 ms               2 ms ... (see deadlines below)
P2        0 ms              1 ms               7 ms
P3        0 ms              2 ms               7 ms
P4        0 ms              5 ms               13 ms

[Timing diagram (0-20 ms): P1 runs 0-4, P2 4-5, P3 5-7, P4 7-12; the deadlines d1 = 5, d2 = d3 = 7, d4 = 13 are all met]

P2 and P3 have the same deadline, so the choice between them is arbitrary; the sequence P3, P2 would work as well.

Deadline Scheduling
Example RT.3: Preemptive scheduling

Process   Request time ri   Execution time e   Deadline di
P1        0 ms              2 ms               4 ms
P2        3 ms              3 ms               14 ms
P3        6 ms              3 ms               12 ms
P4        5 ms              4 ms               10 ms

[Timing diagram (0-20 ms): P1 runs 0-2, P2 3-5, P4 5-9, P3 9-12, P2 12-13; deadlines d1 = 4, d4 = 10, d3 = 12, d2 = 14]

Remember, context switch time is neglected.

Deadline Scheduling
Continuation of example RT.3

t = 0 ms: Request for P1 arrives. Since there is no other process, P1 is scheduled.
t = 2 ms: P1 finishes. Since there are no requests, the scheduler has nothing to do.
t = 3 ms: Request for P2 arrives. Since there is no other process, P2 is scheduled.
t = 5 ms: Request for P4 arrives. The deadline d4 is closer than the deadline of the running process P2. P4 has higher priority and is scheduled.
t = 6 ms: Request for P3 arrives. Deadline d3 is more distant than any other, so nothing changes. P4 continues.
t = 9 ms: P4 finishes. The closest deadline now is d3, so P3 is scheduled.
t = 12 ms: P3 finishes. The closest deadline now is d2, so P2 is scheduled again.
t = 13 ms: P2 finishes. There are no processes ready. Nothing to schedule.
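The trace can be reproduced with a small preemptive-EDF simulation. A Python sketch (1 ms resolution, data from example RT.3, TCS = 0 as agreed above):

```python
def edf_preemptive(processes, horizon=20):
    """processes: list of (name, request, execution, deadline) in ms.
    Returns (start, end, name) execution slices under preemptive EDF."""
    remaining = {n: e for n, r, e, d in processes}
    request = {n: r for n, r, e, d in processes}
    deadline = {n: d for n, r, e, d in processes}
    slices = []
    for t in range(horizon):
        ready = [n for n in remaining if request[n] <= t and remaining[n] > 0]
        if not ready:
            continue
        current = min(ready, key=deadline.get)        # earliest deadline first
        remaining[current] -= 1
        if slices and slices[-1][2] == current and slices[-1][1] == t:
            slices[-1] = (slices[-1][0], t + 1, current)   # extend the running slice
        else:
            slices.append((t, t + 1, current))
    return slices

rt3 = [("P1", 0, 2, 4), ("P2", 3, 3, 14), ("P3", 6, 3, 12), ("P4", 5, 4, 10)]
print(edf_preemptive(rt3))
# -> [(0, 2, 'P1'), (3, 5, 'P2'), (5, 9, 'P4'), (9, 12, 'P3'), (12, 13, 'P2')]
```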

Computer Architecture

WS 06/07

Dr.-Ing. Stefan Freinatis

Deadline Scheduling

For multi-processor systems, the algorithm is not optimal.


Example RT.4: Three processes and two processors. Non-preemptive scheduling.

Process   Request time ri   Execution time e   Deadline di
P1        0 ms              8 ms               10 ms
P2        0 ms              5 ms               9 ms
P3        0 ms              4 ms               9 ms

[Timing diagram: deadline scheduling assigns P2 to processor 1 and P3 to processor 2 (their deadlines are closest); P1 can only start at t = 4 ms and finishes at 12 ms, a real-time violation of d1 = 10 ms]

Real-Time Scheduling
When there are n processes that are
• periodic,
• independent of each other,
• preemptable,
and the response is to be delivered at the latest at the end of each period (that is, TRmax = Tdist),

then the processes can be scheduled on a single processor without real-time violation if

Σ (i = 1 .. n) ei / Tdist,i ≤ 1        (schedulability test)
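The test itself is a one-liner. A Python sketch, applied to examples RT.5 and RT.6 below:

```python
def schedulable(processes):
    """processes: list of (execution_time, period) pairs.
    Returns the utilization and whether it passes the single-processor test."""
    utilization = sum(e / t_dist for e, t_dist in processes)
    return utilization, utilization <= 1.0

print(schedulable([(15, 30), (25, 70), (15, 200)]))  # RT.5: ~0.93 <= 1, schedulable
print(schedulable([(2, 4), (3, 14), (5, 12)]))       # RT.6: ~1.13 >  1, not schedulable
```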

Real-Time Scheduling
Example RT.5:

Process   Execution time e   Deadline di
P1        15 ms              k · 30 ms
P2        25 ms              k · 70 ms
P3        15 ms              k · 200 ms

Σ ei / Tdist,i = 15/30 + 25/70 + 15/200 = 0.5 + 0.36 + 0.075 = 0.935 ≤ 1

The processes can be scheduled. Deadline scheduling would yield:

[Timing diagram (0-200 ms): the resulting EDF schedule of P1, P2, P3; see the step-by-step trace on the next slide]

Real-Time Scheduling
Continuation of example RT.5

t = 0 ms: Requests for P1, P2, P3 arrive. P1 has the closest deadline and is scheduled.
t = 15 ms: P1 finishes. The deadline of P2 is closer than the deadline of P3. P2 is scheduled.
t = 30 ms: Request for P1 arrives. Reevaluation of the deadlines yields that P1 has highest priority. P1 is scheduled.
t = 45 ms: P1 finishes. The deadline of P2 still is closer than the deadline of P3. P2 is scheduled.
t = 55 ms: P2 finishes. The only waiting process is P3. P3 thus is scheduled.
t = 60 ms: Request for P1 arrives. Reevaluation of the deadlines yields that P1 has highest priority. P1 is scheduled.
t = 70 ms: Request for P2 arrives. Deadline of P1 is closest, P1 continues.

...


Real-Time Scheduling
Example RT.6:

Process   Execution time e   Deadline di
P1        2 ms               k · 4 ms
P2        3 ms               k · 14 ms
P3        5 ms               k · 12 ms

Σ ei / Tdist,i = 2/4 + 3/14 + 5/12 = 0.5 + 0.215 + 0.42 = 1.135 > 1

This means an overutilization of the microprocessor. The processor would have to execute more than one process at a time (which is impossible). Therefore there is no schedule that would not violate the real-time condition sooner or later (on a single-processor system). The schedulability test failed.

Laxity Scheduling
Priority Scheduling. The process with the least laxity has highest priority. For equal laxities the selection policy is arbitrary or FCFS.

The laxity is the period of time left in which a process can be started without violating its deadline. At the latest when the laxity is 0 the process must be started, otherwise it will not finish in time. The execution time e of the process must be known, of course.

Laxity: lax = (d - now) - e

[Diagram: at time 'now', the laxity is the slack between now + e and the deadline d]

Here, 'now' is the point in time at which the laxity is (re)calculated. Usually this is the point in time at which a new request arrives (preemptive scheduling) or at which a process finishes.
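A least-laxity-first selection is a few lines of Python (a sketch; the data are the RT.7 values from the next slides):

```python
def laxity(deadline, remaining_exec, now):
    """Slack left before the process must start in order to still meet its deadline."""
    return (deadline - now) - remaining_exec

def pick_least_laxity(ready, now):
    """ready: list of (name, deadline, remaining_exec). Returns the entry to dispatch."""
    return min(ready, key=lambda p: laxity(p[1], p[2], now))

# Example RT.7 at t = 0: deadlines 10, 9, 9 ms and execution times 8, 5, 4 ms
ready = [("P1", 10, 8), ("P2", 9, 5), ("P3", 9, 4)]
print([laxity(d, e, 0) for _, d, e in ready])   # -> [2, 4, 5]
print(pick_least_laxity(ready, now=0))          # -> ('P1', 10, 8)
```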

Laxity Scheduling

Deadline scheduling focuses on the deadline but does not take the execution time e of a process into account. Laxity scheduling does; it sometimes finds a solution that deadline scheduling does not find.

Example RT.7: Three processes and two processors. Non-preemptive scheduling. Same as in example RT.4.

Process   Request time ri   Execution time e   Deadline di
P1        0 ms              8 ms               10 ms
P2        0 ms              5 ms               9 ms
P3        0 ms              4 ms               9 ms

Processes now undergoing laxity scheduling (see next slide)


Laxity Scheduling
Continuation of example RT.7

t = 0 ms: Requests for P1, P2, P3 arrive. The laxities are lax1 = 2 ms, lax2 = 4 ms, lax3 = 5 ms. Least laxity is lax1, so P1 is scheduled on processor 1. Processor 2 is not yet assigned, so P2 is chosen (lax2 < lax3).
t = 5 ms: P2 finishes. The only process waiting is P3, so it is scheduled.
t = 8 ms: P1 finishes. No new processes to schedule.

[Timing diagram: processor 1 runs P1 from 0-8 ms; processor 2 runs P2 from 0-5 ms and P3 from 5-9 ms]

No real-time violation, as opposed to the deadline scheduling example RT.4.



Laxity Scheduling
Laxity scheduling, like deadline scheduling, is generally not optimal for multi-processors.
That is, it does not always find a solution.

Example RT.8: Four processes and two processors. Non-preemptive scheduling.

Process   Request time ri   Execution time e   Deadline di
P1        0 ms              1 ms               1 ms
P2        0 ms              5 ms               6 ms
P3        0 ms              3 ms               5 ms
P4        0 ms              5 ms               8 ms

Continuation on next slide



Laxity Scheduling
Continuation of example RT.8

t = 0 ms: Requests for P1, P2, P3, P4 arrive. The laxities are lax1 = 0 ms, lax2 = 1 ms, lax3 = 2 ms, lax4 = 3 ms. Least laxity is lax1, so P1 is scheduled on processor 1. Second least laxity is lax2, so P2 is chosen for processor 2.
t = 1 ms: P1 finishes. Least laxity is lax3 (now 1 ms), so P3 is scheduled on processor 1.
t = 4 ms: P3 finishes. Least laxity is lax4 (now -1 ms), so P4 is scheduled on processor 1 ... but it is already too late (negative laxity).

[Timing diagram: processor 1 runs P1 (0-1 ms), P3 (1-4 ms), P4 (4-9 ms); processor 2 runs P2 (0-5 ms); P4 misses its deadline d4 = 8 ms, a real-time violation]

Laxity Scheduling
Continuation of example RT.8

However, there exists a schedule that works well, found through deadline scheduling:

[Timing diagram: processor 1 runs P1 (0-1 ms) and P2 (1-6 ms); processor 2 runs P3 (0-3 ms) and P4 (3-8 ms); no deadline is missed]

Scheduling non-preemptive processes in a multi-processor system is a complex problem.


This is even the case in a two-processor system when all request times ri are the same and all deadlines di are the same.

Rate Monotonic Scheduling


Priority scheduling for periodic, preemptable processes where the deadlines are equal to the periods. The process with the highest frequency (repetition rate) has the highest priority. Static scheduling.
[Figure: technical process 1 issues events with a longer period Tdist than technical process 2]

Computer process P2 has a higher priority than process P1 since its rate is higher.

Although the algorithm is not optimal, it is often used in real-time applications because it is fast and simple (at run time!). Note, static scheduling!

Rate Monotonic Scheduling


A more thorough explanation from [Ta01 p.472]

The classic static real-time scheduling algorithm for preemptable, periodic processes is RMS (Rate Monotonic Scheduling). It can be used for processes that meet the following conditions:
• Each periodic process must complete within its period.
• No process is dependent on any other process.
• Each process needs the same amount of CPU time on each burst.
• Any non-periodic processes have no deadlines.
• Process preemption occurs instantaneously and with no overhead.

RMS works by assigning each process a fixed priority equal to the frequency of occurrence of its triggering event. For example, a process that must run every 30 ms (≈ 33 Hz) receives priority 33, and a process that must run every 40 ms (= 25 Hz) receives priority 25. The priorities are linear in the rate, which is why it is called rate monotonic.
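In code, the RMS priority assignment is just the rate of each process; dispatching then picks the ready process with the highest static priority. A sketch (periods from example RT.9 below):

```python
def rms_priorities(processes):
    """processes: list of (name, period_ms). Fixed priority = rate in Hz (higher is better)."""
    return {name: 1000.0 / period for name, period in processes}

prios = rms_priorities([("A", 30), ("B", 40), ("C", 50)])
print({n: round(p) for n, p in prios.items()})   # -> {'A': 33, 'B': 25, 'C': 20}

def pick_next(ready, priorities):
    """At run time, dispatch the ready process with the highest (static) priority."""
    return max(ready, key=priorities.get)

print(pick_next(["B", "C"], prios))              # -> 'B'
```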

Rate Monotonic Scheduling


Example RT.9

Process   Request time ri   Execution time e   Deadline di
A         k · 30 ms         10 ms              (k+1) · 30 ms
B         k · 40 ms         15 ms              (k+1) · 40 ms
C         k · 50 ms          5 ms              (k+1) · 50 ms

Three periodic processes [Ta01 p.471]

Figure from [Ta01 p.471]

Rate Monotonic Scheduling


Continuation of Example RT.9: the processes A, B, C scheduled with Rate Monotonic Scheduling (RMS) and with deadline scheduling (EDF).

Figure from [Ta01 p.473]

Rate Monotonic Scheduling


Continuation Example RT.9

Up to t = 90 the choices of EDF and RMS are the same. At t = 90 process A is requested again. The RMS scheduler votes for A (process A4 in the figure) since its priority is higher than that of B, so B is interrupted. The deadline scheduler, in contrast, has a choice because the deadline of A is the same as the deadline of B (dA = dB = 120). In practice, preempting B has some nonzero cost, so it is better to let B continue.

See the next example (Example RT.10) to dispel the idea that RMS and EDF always give the same results.

Rate Monotonic Scheduling


Example RT.10: Like RT.9, but process A now has 15 ms execution time.

Process   Request time ri   Execution time e   Deadline di
A         k · 30 ms         15 ms              (k+1) · 30 ms
B         k · 40 ms         15 ms              (k+1) · 40 ms
C         k · 50 ms          5 ms              (k+1) · 50 ms

The schedulability test yields that the processes are schedulable.

Σ ei / Tdist,i = 15/30 + 15/40 + 5/50 = 0.5 + 0.375 + 0.1 = 0.975 ≤ 1

Nevertheless, RMS fails in this example while EDF does not.


Rate Monotonic Scheduling


Continuation Example RT.10

RMS leads to a real-time violation. Process C is missing its deadline dC = 50.


Figure from [Ta01 p.474]

Rate Monotonic Scheduling


Why did RMS fail? Using static priorities only works if the CPU utilization is not too high. It was proved* that RMS is guaranteed to work for any system of periodic processes if

Σ (i = 1 .. n) ei / Tdist,i ≤ n · (2^(1/n) - 1).

For n = 2 processes, RMS will work for sure if the CPU utilization is below 0.828. For n = 3 processes, RMS will work for sure if the CPU utilization is below 0.780. For n → ∞ the bound approaches ln 2 (≈ 0.693), so RMS is guaranteed to work for any number of processes only if the CPU utilization stays below that value.
* C.L. Liu, James Layland: Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment, Journal of the ACM, 1973, http://citeseer.ist.psu.edu/liu73scheduling.html
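The bound is easy to evaluate. A Python sketch applied to examples RT.9 and RT.10 (the test is sufficient, not necessary, which is why RT.9 can still work in practice):

```python
def rms_guaranteed(processes):
    """processes: list of (execution_time, period). Liu & Layland sufficient test for RMS."""
    n = len(processes)
    utilization = sum(e / t for e, t in processes)
    bound = n * (2 ** (1 / n) - 1)               # 0.828 for n=2, 0.780 for n=3, -> ln 2
    return utilization, bound, utilization <= bound

print(rms_guaranteed([(10, 30), (15, 40), (5, 50)]))  # RT.9:  0.808 > 0.780, not guaranteed
print(rms_guaranteed([(15, 30), (15, 40), (5, 50)]))  # RT.10: 0.975 > 0.780, not guaranteed
```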

Rate Monotonic Scheduling


In example RT.9 the utilization was 0.808 (thus higher than 0.780); why did it work? We were just lucky. With different periods and execution times, a utilization of 0.808 might fail. In example RT.10 the utilization was so high that there was little hope RMS could work. In contrast to RMS, deadline scheduling always works for any schedulable set of processes (single-processor system). Deadline scheduling can achieve 100% CPU utilization. The price paid is a more complex algorithm [Ta01 p.475]. Because RMS is static, all priorities are known at run time. Selecting the next process is a matter of just a few machine instructions.

Real-Time Scheduling
Overview of real-time scheduling algorithms

Branch and Bound: try all permutations of processes. Preferably used in static scheduling. German name: Planen durch Suchen.

Deadline (EDF): earliest deadline has highest priority. Execution time is not taken into account. Preferably used in dynamic scheduling. German name: Planen nach Fristen.

Laxity: least laxity has highest priority. Execution time is taken into account. Preferably used in dynamic scheduling. German name: Planen nach Spielräumen.

RMS: highest repetition rate (frequency) has highest priority. Execution time is not taken into account. Preferably used in static scheduling. German name: Planen nach monotonen Raten.

