
2 Analyzing Algorithms

¾ To familiarize you with the framework of the design and analysis of algorithms, we start with the insertion sort algorithm.
¾ We describe algorithms as programs written in a pseudocode that is similar in many respects to C or Java.
¾ In pseudocode, we employ whatever expressive method is
most clear and concise to specify a given algorithm. Issues of
implementation are often ignored in order to convey the
essence of the algorithm.
¾ Insertion sort is an efficient algorithm for sorting a small
number of elements. The input numbers are sorted in place:
the numbers are rearranged within the input array, with at
most a constant number of them stored outside the array at
any time.

Pseudocode for Insertion Sort
¾ INSERTION-SORT(A)
  1 FOR j := 2 TO length[A]
  2   DO key := A[j];
  3      // Insert A[j] into the sorted sequence A[1 .. j-1]
  4      i := j - 1;
  5      WHILE i > 0 and A[i] > key
  6        DO A[i+1] := A[i];
  7           i := i - 1
           END
  8      A[i+1] := key
    END

¾ See a worked example on the whiteboard.
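
¾ The pseudocode above can be transcribed almost line for line into Java. The following is a minimal sketch (not from the original slides; class and method names are illustrative) using 0-based array indexing, so the pseudocode's j = 2 .. length[A] becomes j = 1 .. a.length-1.

// Minimal Java sketch of INSERTION-SORT, 0-based indexing.
public class InsertionSort {
    static void insertionSort(int[] a) {
        for (int j = 1; j < a.length; j++) {   // pseudocode line 1
            int key = a[j];                    // line 2
            int i = j - 1;                     // line 4
            while (i >= 0 && a[i] > key) {     // line 5
                a[i + 1] = a[i];               // line 6
                i = i - 1;                     // line 7
            }
            a[i + 1] = key;                    // line 8
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 6, 1, 3};
        insertionSort(a);
        System.out.println(java.util.Arrays.toString(a));  // prints [1, 2, 3, 4, 5, 6]
    }
}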


Loop Invariants and the Correctness of Algorithms
¾ In the insertion sort algorithm, the elements A[1..j-1] are the elements originally in positions 1 through j-1, but now in sorted order. We state these properties of A[1..j-1] formally as a loop invariant:

  At the start of each iteration of the FOR loop of lines 1-8, the subarray A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.

¾ We use loop invariants to help us understand why an algorithm is correct.

¾ We must show three things about a loop invariant:
1. Initialization: It is true prior to the first iteration
of the loop.
2. Maintenance: If it is true before an iteration of the
loop, it remains true before the next iteration.
3. Termination: When the loop terminates, the invariant
gives us a useful property that helps show that the
algorithm is correct.

Showing the correctness of insertion sort by using a
loop invariant
¾ Initialization: We have to show that the loop invariant holds before
the first loop iteration, when j = 2. The subarray A[1..j-1],
therefore, consists of just the single element A[1], which is in fact
the original element in A[1]. Moreover, this subarray is sorted,
trivially.
¾ Maintenance: We have to show that each iteration maintains the loop invariant. The body of the FOR loop works by moving A[j-1], A[j-2], and so on one position to the right until the proper position for A[j] is found (lines 4-7), at which point the value of A[j] (saved in key) is inserted (line 8).
¾ Termination: We examine what happens when the loop terminates.
The FOR loop ends when j exceeds n (the length of A), i.e., when j =
n + 1. Substituting n+1 for j in the statement of loop invariant, we
have that the subarray A[1..n] consists of the elements originally in
A[1..n], but in sorted order. But the array A[1..n] is the entire array.
Hence, the entire array is sorted, which means that the algorithm
is correct.
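
¾ The three steps above can also be checked empirically. The sketch below (not from the original slides; names illustrative) instruments the Java sketch given earlier with an assertion that tests the sortedness part of the invariant at the start of every outer iteration, and once more after the loop terminates; run it with assertions enabled (java -ea).

// Sketch: insertion sort instrumented with a partial invariant check
// (it verifies that a[0..j-1] is sorted; it does not check the
// "same elements as originally" part of the invariant).
public class InvariantDemo {
    static boolean sortedPrefix(int[] a, int len) {
        for (int k = 1; k < len; k++) {
            if (a[k - 1] > a[k]) return false;
        }
        return true;
    }

    static void insertionSort(int[] a) {
        for (int j = 1; j < a.length; j++) {
            // Initialization (j = 1) and Maintenance: the invariant holds here.
            assert sortedPrefix(a, j) : "invariant violated before iteration j = " + j;
            int key = a[j];
            int i = j - 1;
            while (i >= 0 && a[i] > key) {
                a[i + 1] = a[i];
                i--;
            }
            a[i + 1] = key;
        }
        // Termination: the invariant with j = a.length means the whole array is sorted.
        assert sortedPrefix(a, a.length);
    }

    public static void main(String[] args) {
        insertionSort(new int[]{31, 41, 59, 26, 41, 58});
        System.out.println("invariant held on every iteration");
    }
}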
WHILE Loops

¾ Programming languages offer a variety of loop constructs, such as while loops, for loops, and repeat-until loops.
¾ We will focus on while loops. The other loops can be written in terms of while loops (see the sketch after the template below), so no expressive programming power is lost by restricting attention to while loops alone.
¾ The template for a WHILE loop is as follows:

WHILE E
DO S
END
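
¾ As an illustration of rewriting a for loop as a while loop, here is a small Java sketch (not from the original slides): the initialization, guard, body, and update of the for loop reappear unchanged around and inside the while loop.

// Sketch: the same summation written with a FOR loop and with a WHILE loop.
public class ForAsWhile {
    public static void main(String[] args) {
        int sumFor = 0;
        for (int i = 1; i <= 10; i++) {   // for-loop form
            sumFor += i;
        }

        int sumWhile = 0;
        int i = 1;                        // initialization
        while (i <= 10) {                 // guard E
            sumWhile += i;                // body S ...
            i++;                          // ... including the update
        }

        System.out.println(sumFor + " " + sumWhile);  // prints 55 55
    }
}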

¾ The predicate E is a condition or boolean
expression. It is called the guard of the loop.
¾ If the loop is executed in a state in which E is
false, then the loop terminates immediately
without changing the state.
¾ If the loop is executed in a state s in which E is
true, then the loop executes the sequence of
statements S to reach a state s' from which the
loop is again executed. Execution of a loop thus
passes through a sequence of intermediate states.
¾ In general it is possible that a loop never terminates, because the guard is true in every state that is reached, as the small sketch below illustrates. It will therefore be necessary to prove that loops always terminate when they are required to.
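
¾ A tiny Java sketch of this situation (not from the original slides): the loop below terminates for every even non-negative starting value of x but runs forever for odd values, because subtracting 2 preserves parity and the guard x != 0 then holds in every reachable state.

// Sketch: a loop whose termination depends on the initial state.
public class MayNotTerminate {
    public static void main(String[] args) {
        int x = 8;             // try an odd value such as 7: the loop never exits
        while (x != 0) {       // guard E
            x = x - 2;         // body S: x stays odd if it started odd
        }
        System.out.println("terminated with x = " + x);
    }
}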

[Figure: flowchart of a WHILE loop — the guard E is tested; the true branch leads into the loop body and back to the test, the false branch exits the loop.]
Example
¾ The "Russian Multiplication" algorithm multiplies two numbers
a and b by producing two columns of numbers, headed by a
and b.
¾ The first column is produced by repeated halving with
rounding down (integer division), and the second column is
produced by repeated doubling.
¾ All rows in which the first entry is even are removed.
¾ The sum of the second column then gives the product of a
and b.

Russian multiplication of 57 and 43

  57      43
  28      86   (row removed: 28 is even)
  14     172   (row removed: 14 is even)
   7     344
   3     688
   1    1376
        -----
        2451   (= 43 + 344 + 688 + 1376)

¾ The algorithm can be described with a loop, which keeps track of a running total as execution proceeds. When the loop finishes, the running total will be the product of the two variables a and b.

x := a;
y := b;
total := 0;
WHILE x > 0
  DO IF x mod 2 = 1 THEN total := total + y END;
     x := x/2;    // integer division: halve, rounding down
     y := y*2
END
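
¾ A direct Java transcription of this loop might look as follows (a minimal sketch, not from the original slides; class and method names illustrative). Java's / on non-negative integers already rounds down, matching the halving step, and long is used so the repeated doubling of y does not overflow for moderate inputs.

// Sketch of the Russian Multiplication loop.
public class RussianMultiplication {
    static long multiply(long a, long b) {
        long x = a;
        long y = b;
        long total = 0;
        while (x > 0) {
            if (x % 2 == 1) {   // odd row: keep its second-column entry
                total += y;
            }
            x = x / 2;          // halve, rounding down
            y = y * 2;          // double
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(multiply(57, 43));  // prints 2451
    }
}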

¾ The loop in the previous slide is supposed to implement the Russian
Multiplication algorithm, but no argument has yet been given concerning
its correctness.
¾ Correctness must be with respect to a requirement, expressed as a
postcondition P. For the Russian Multiplication loop, the postcondition
is that total = a*b.
¾ Since a loop execution passes through many intermediate states, the
relationship between the initial state from which it is executed, and
the final state it reaches, must take the intermediate states into
account.
¾ The relationship between one intermediate state and the next is
described by the body S of the loop.
¾ The key link between successive states is captured by a loop invariant.

¾ A loop invariant is a condition (predicate) which holds of all
of the states that the execution of the loop passes through,
before and after each execution of the loop body S. It
therefore provides a link between the initial and final
states, connected through all of the intermediate states.

[Figure: Loop control points annotated with predicates — the invariant I holds before each test of the guard E; I ∧ E holds on entry to the body S; on exit the annotation is I ∧ ¬E, which establishes the postcondition P.]


Example
¾ Consider again the Russian Multiplication.
¾ It requires that total = a*b in the final state.
¾ In the initial state: x = a & y = b & total = 0.
¾ Proving that the algorithm is correct requires the
identification of some predicate I on some or all of the
variables, which is true for the initial state, is preserved by
the body S of the loop, and which implies the postcondition
total = a*b when the guard x > 0 is false.
¾ One suitable invariant is the following:
total + x * y = a * b
¾ At any particular stage, the invariant describes what has
been achieved so far, as encapsulated in the value total.

¾ The proof is as follows:
  ✓ Initialization: Initially, total = 0 and x*y is indeed a*b. So the invariant is true when the loop begins.
  ✓ Maintenance: On a single pass through the loop, there are two possibilities:
    1. If x is even, then total remains as it was, x is halved, and y is doubled. In this case (x/2)*(y*2) is the same as x*y, so the invariant remains true.
    2. If x is odd, then y is added to total, x is then halved with rounding down (giving (x-1)/2), and y is doubled. In this case, (total + y) + ((x-1)/2)*(y*2) = total + y + (x-1)*y = total + x*y, so the invariant again remains true.
    Thus the invariant is preserved by every iteration of the loop.
  ✓ Termination: Finally, on termination total + x*y = a*b and also the negation of the guard holds: x = 0. It follows that total = a*b, which is the required postcondition.

An execution of the Russian Multiplication loop

  x     y   total   INVARIANT
 57    43       0   0 + 57*43 = 57*43
 28    86      43   43 + 28*86 = 57*43
 14   172      43   43 + 14*172 = 57*43
  7   344      43   43 + 7*344 = 57*43
  3   688     387   387 + 3*688 = 57*43
  1  1376    1075   1075 + 1*1376 = 57*43
  0  2752    2451   2451 + 0*2752 = 57*43
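
¾ The trace above can be reproduced mechanically. The sketch below (not from the original slides; names illustrative) prints x, y and total before every test of the guard and checks the invariant total + x*y = a*b at the same point (run with java -ea to enable the assertions).

// Sketch: print the intermediate states of the Russian Multiplication loop
// and check the invariant at each control point.
public class RussianTrace {
    public static void main(String[] args) {
        long a = 57, b = 43;
        long x = a, y = b, total = 0;
        while (true) {
            System.out.println(x + "\t" + y + "\t" + total);
            assert total + x * y == a * b;   // the loop invariant
            if (x <= 0) break;               // guard x > 0 is false: exit
            if (x % 2 == 1) total += y;
            x = x / 2;
            y = y * 2;
        }
        System.out.println("total = " + total);  // 2451 = 57*43
    }
}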

Finding an Invariant

¾ Loop correctness requires that I ∧ ¬E ⇒ P. This says that the invariant, together with the negation of the guard, must imply the postcondition.
¾ This requirement can be used to guide the
development of a loop which is required to
establish a particular postcondition P.

¾ One technique is to obtain I by weakening P, so I
holds for more states than P does. The loop should
then terminate when the particular instance of I
corresponds to the situation where P also holds:
this will influence the choice of guard.

Constructing I by Weakening P

Replacing a constant with a variable


¾ The postcondition P can be weakened by replacing a
constant N by a variable i, so that P = I when i is equal
to N.
¾ In this case, the guard of the loop should be i ≠ N.
¾ When the loop terminates, the guard is false, and we will
have I ∧ i=N, which indeed implies P.

Example
¾ We want to develop a loop to sum the elements of an array aa[1..N].
¾ The postcondition P is
sum = Σj.(j ∈ 1..N | aa[j]).
¾ Replacing the constant N by a variable i results in the invariant
I:
sum = Σj.(j ∈ 1..i | aa[j])
The variable i is a natural number.
¾ The guard E of the loop is:
i ≠ N.
¾ It follows by construction that I ∧ ¬E ⇒ P.

¾ A suitable loop is:

sum := 0;
i := 0;
WHILE i ≠ N
  DO i := i + 1; sum := sum + aa[i]
END
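
¾ A Java sketch of this loop (not from the original slides; class name illustrative). With 0-based arrays the invariant becomes sum = aa[0] + ... + aa[i-1], the guard compares i with aa.length, and the two body statements swap order so that aa[i] is read before i is incremented.

// Sketch: summing an array with a WHILE loop; the invariant
// sum == aa[0] + ... + aa[i-1] holds before every test of the guard.
public class ArraySum {
    public static void main(String[] args) {
        int[] aa = {3, 1, 4, 1, 5, 9};
        int sum = 0;
        int i = 0;
        while (i != aa.length) {   // guard: i ≠ N
            sum = sum + aa[i];     // add the next element ...
            i = i + 1;             // ... and extend the summed prefix
        }
        System.out.println(sum);   // prints 23
    }
}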

Exercise
¾ The postcondition in the previous example can also be
weakened to obtain the following invariant I:
sum = Σj.(j ∈ i..N | aa[j])
¾ Develop a complete loop with this invariant.

Constructing I by Weakening P
(cont.)

Deleting a conjunct
¾ If a postcondition consists of a number of conjuncts, then
it can be weakened by deleting one (or several) of its
conjuncts.
¾ The resulting predicate will be true in more states than the
postcondition, and might be suitable as a loop invariant.
¾ The loop guard in this case will be the negation of the
deleted conjunct. So the negation of the guard and the
remaining conjuncts together imply the postcondition.

Example
¾ The integer square root r of a natural number n is the greatest integer whose square is no more than n. So we have

     P = r² <= n & n < (r + 1)²

¾ Deleting the second conjunct leaves r² <= n. This will do as an invariant of a loop to achieve the postcondition P. It is true when r = 0, so an initial state for the loop can easily be established. The loop guard will be the negation of the deleted conjunct: E = (r + 1)² <= n.
¾ The loop body simply increments r.

¾ Thus the complete loop to compute the integer square root is:

r := 0;
WHILE (r + 1)² <= n
  DO r := r + 1
END
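
¾ In Java this becomes the following sketch (not from the original slides). The invariant r*r <= n holds before every test of the guard, and when (r+1)*(r+1) <= n fails we have r*r <= n < (r+1)*(r+1), i.e. the postcondition P. For n near Integer.MAX_VALUE the product (r + 1) * (r + 1) would overflow an int, so long arithmetic would be needed there.

// Sketch: integer square root by repeated incrementing.
public class IntSqrt {
    static int isqrt(int n) {
        int r = 0;                          // invariant r*r <= n holds: 0 <= n
        while ((r + 1) * (r + 1) <= n) {    // guard: negation of the deleted conjunct
            r = r + 1;                      // guard guarantees the invariant is preserved
        }
        return r;                           // here r*r <= n && n < (r+1)*(r+1)
    }

    public static void main(String[] args) {
        System.out.println(isqrt(0));   // prints 0
        System.out.println(isqrt(10));  // prints 3
        System.out.println(isqrt(16));  // prints 4
    }
}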

Random-Access Machine (RAM)
¾ Analyzing an algorithm means predicting the resources (computing time, memory, communication bandwidth, etc.) that the algorithm requires.
¾ Most often it is computing time that we want to
measure.
¾ Before we can analyze an algorithm, we must have a
model of the implementation technology.
¾ We will use a generic one-processor RAM (random-
access machine) model of computation and
implement our algorithms as computer programs on
that machine.

The RAM model:
¾ contains instructions commonly found in real
computers:
arithmetic (add, subtract, multiply, divide,
remainder, floor, ceiling),
data movement (load, store, copy), and
control (conditional and unconditional branch,
subroutine call and return).
Each such instruction takes a constant amount of
time.
¾ supports the data types: integer and floating point.

Running Time

¾ The running time of an algorithm is the number of primitive operations or steps executed.
¾ Running time depends on
  - input size (e.g. 8 elements vs. 8000)
  - input itself (e.g. already sorted or not)
¾ The mathematical expression for the running time of INSERTION-SORT can be determined as follows.

  For each j = 2, 3, ..., n, where n = length[A], we let tj be the number of times the while loop test in line 5 is executed for that value of j.

INSERTION-SORT(A)                                          cost   times
1 FOR j := 2 TO length[A]                                  c1     n
2   DO key := A[j];                                        c2     n-1
3      // Insert A[j] into the sorted sequence A[1..j-1]   0      n-1
4      i := j - 1;                                         c4     n-1
5      WHILE i > 0 and A[i] > key                          c5     Σ_{j=2..n} tj
6        DO A[i+1] := A[i];                                c6     Σ_{j=2..n} (tj - 1)
7           i := i - 1                                     c7     Σ_{j=2..n} (tj - 1)
         END
8      A[i+1] := key                                       c8     n-1
   END

¾ The running time of INSERTION-SORT is:

  T(n) = c1n + c2(n-1) + c4(n-1) + c5 Σ_{j=2..n} tj
         + c6 Σ_{j=2..n} (tj - 1) + c7 Σ_{j=2..n} (tj - 1) + c8(n-1).

¾ For inputs of a given size, an algorithm's running time may
depend on which input of that size is given. For example, in
INSERTION-SORT, the best case occurs if the array is
already sorted. In this case, tj = 1 for j = 2, 3, ..., n.
¾ The best-case running time is

  T(n) = c1n + c2(n-1) + c4(n-1) + c5(n-1) + c8(n-1)
       = (c1 + c2 + c4 + c5 + c8)n - (c2 + c4 + c5 + c8).

¾ This is a linear function of n.

¾ The worst case occurs if the array is in reverse sorted order.
In this case, each element A[j] must be compared with each
element in the entire sorted subarray A[1 .. j-1], and so tj = j
for j = 2, 3, ..., n.

  Σ_{j=2..n} j = n(n+1)/2 - 1

  Σ_{j=2..n} (j-1) = n(n-1)/2

¾ The worst-case running time is a quadratic function of n:

  T(n) = (c5/2 + c6/2 + c7/2)n² + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)n
         - (c2 + c4 + c5 + c8).
Kinds of Algorithm Analysis

¾ (usually) Worst case:
    T(n) = max time on any input of size n.
¾ (sometimes) Average case:
    T(n) = average time over all inputs of size n
    (assumes a statistical distribution of inputs).
¾ (never) Best case:
    Useless, because we can cheat with a slow algorithm
    that works fast on some input.

¾ We usually concentrate on finding the worst-case
running time. Why?
Three reasons:
1. The worst-case running time is an upper bound on
the running time for any input. Knowing it gives us
a guarantee that the algorithm will never take any
longer.
2. For some algorithms, the worst case occurs fairly often. In some searching applications, searches for absent information may be frequent.
3. The "average case" is often roughly as bad as the worst case. For example, suppose that we randomly choose n numbers and apply insertion sort. The average-case running time is just like the worst-case running time, i.e., a quadratic function of n (the sketch below illustrates this empirically).
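
¾ A small experiment makes these cases concrete. The sketch below (not from the original slides; names illustrative) counts how many times the while-loop test of line 5 runs, i.e. Σ tj, for a sorted, a reverse-sorted, and a random input of the same size: the first count is about n, while the other two grow quadratically.

// Sketch: count executions of the while-loop test for three kinds of input.
import java.util.Random;

public class CaseCounts {
    // Insertion sort that returns the number of while-test executions (sum of tj).
    static long sortAndCount(int[] a) {
        long tests = 0;
        for (int j = 1; j < a.length; j++) {
            int key = a[j];
            int i = j - 1;
            tests++;                           // the test that eventually fails
            while (i >= 0 && a[i] > key) {
                a[i + 1] = a[i];
                i--;
                tests++;                       // one more test per shifted element
            }
            a[i + 1] = key;
        }
        return tests;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = new int[n], reversed = new int[n], random = new int[n];
        Random rng = new Random(42);
        for (int k = 0; k < n; k++) {
            sorted[k] = k;
            reversed[k] = n - k;
            random[k] = rng.nextInt();
        }
        System.out.println("sorted   (best case):  " + sortAndCount(sorted));   // about n
        System.out.println("reversed (worst case): " + sortAndCount(reversed)); // n(n+1)/2 - 1
        System.out.println("random   (average):    " + sortAndCount(random));   // roughly n^2/4
    }
}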

