
Module 2
Divide and Conquer

2.1. General Method


Given a function to compute on n inputs the divide-and-conquer strategy suggests splitting the inputs
into k distinct subsets, 1 < k ≤ n, yielding k sub problems. These sub problems must be solved, and then
a method must be found to combine sub solutions into a solution of the whole.

If the sub problems are still relatively large, then the divide-and-conquer strategy can be reapplied. In the divide-and-conquer strategy, the sub problems resulting from a split are of the same kind as the original problem. The reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm.

Control Abstraction for Divide and Conquer


Algorithm DAndC(P)
{
    if Small(P) then return S(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
        Apply DAndC to each of these subproblems;
        return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
    }
}

DAndC is initially invoked as DAndC(P), where P is the problem to be solved. Small(P) is a Boolean-
valued function that determines whether the input size is small enough that the answer can be computed
without splitting. If this is so, the function S is invoked. Otherwise the problem P is divided into smaller
sub problems. These sub problems P1, P2, ..., Pk are solved by recursive applications of DAndC.
Combine is a function that determines the solution to P using the solutions to the k sub problems. If the
size of P is n and the sizes of the k sub problems are n1 , n2 , ... ,nk , respectively, then the computing time
of DAndC is described by the recurrence relation

T(n) = g(n)                                     when n is small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)       otherwise          ........ 2.1

where T(n) is the time for DAndC on any input of size n,
g(n) is the time to compute the answer directly for small inputs, and
f(n) is the time for dividing P and combining the solutions to the sub problems.

For divide-and-conquer-based algorithms that produce sub problems of the same type as the original problem, it is very natural to first describe such algorithms using recursion. The complexity of many divide-and-conquer algorithms is given by recurrences of the form

T(n) = T(1)                n = 1
T(n) = a T(n/b) + f(n)     n > 1          ........ 2.2

where a and b are known constants. We assume that T(1) is known and n is a power of b (i.e., n = b^k).

Master Theorem

If f(n) ∈ Θ(n^d) where d ≥ 0 in recurrence equation 2.2, then

T(n) ∈ Θ(n^d)              if a < b^d
T(n) ∈ Θ(n^d log n)        if a = b^d
T(n) ∈ Θ(n^(log_b a))      if a > b^d

(Analogous results hold for the O and Ω notations too.)

Example 1
a = 2, b = 2, T(1) = 2, and f(n) = n. Then

T(n) = 2T(n/2) + n

Since f(n) = n, d = 1. Here a = b^d (2 = 2^1), so according to the Master Theorem

T(n) ∈ Θ(n^d log n) = Θ(n log n)

Example 2
a = 1, b = 2, T(1) = 1, and f(n) = c. Then

T(n) = T(n/2) + c

Since f(n) = c, d = 0. Here a = b^d (1 = 2^0), so

T(n) ∈ Θ(n^0 log n) = Θ(log n)

Solve the following recurrences using the Master Theorem (a small Python checker is sketched after the list):


1. a=2,b=2,f(n)=cn
2. a=7,b=2 and f(n)=18n2
3. a=9,b=3 and f(n)= 4n6
4. a=1,b=2,f(n)=cn
5. a=5,b=4,f(n)=cn2
6. a=28,b=3,f(n)=cn3
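
As a check on such exercises, the following small Python sketch (an illustrative helper, not part of the original notes) classifies a recurrence T(n) = a*T(n/b) + Θ(n^d) into the three cases of the Master Theorem.

import math

def master_theorem(a, b, d):
    # Classify T(n) = a*T(n/b) + Theta(n^d) by comparing a with b^d.
    if a < b ** d:
        return "Theta(n^%g)" % d
    elif a == b ** d:
        return "Theta(n^%g log n)" % d
    else:
        return "Theta(n^%.2f)" % math.log(a, b)   # exponent is log_b(a)

print(master_theorem(2, 2, 1))   # Example 1: Theta(n^1 log n) = Theta(n log n)
print(master_theorem(7, 2, 2))   # Exercise 2: a = 7 > b^d = 4, so Theta(n^2.81)

For instance, exercise 3 (a = 9, b = 3, d = 6) gives a = 9 < b^d = 729, so T(n) ∈ Θ(n^6).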


2.2. Binary Search


 Let ai, 1 ≤ i ≤ n, be a list of elements sorted in nondecreasing order.
 Binary search is the problem of determining whether a given element x is present in the list. If x is present, we have to determine a value j such that aj = x. If x is not in the list, then j is to be set to zero.
 Let P = (n, al, ..., au, x) denote an arbitrary instance of this search problem (n is the number of elements in the list, al, ..., au is the list of elements, and x is the element searched for).

Divide-and-conquer can be used to solve this problem.


Let Small(P) be true if n = 1. In this case, S(P) will take the value i if x = ai; otherwise it will take the value 0. Then g(1) = Θ(1).
If P has more than one element, it can be reduced to a new sub problem by picking an index q (in the range [l, u]) and comparing x with a[q]. There are three possibilities:
1. x = aq: In this case the problem P is immediately solved.
2. x < aq: In this case x has to be searched for only in the sublist al, ..., aq-1. Therefore, P reduces to (q - l, al, ..., aq-1, x).
3. x > aq: In this case the sublist to be searched is aq+1, ..., au. P reduces to (u - q, aq+1, ..., au, x).

In binary search any given problem P gets divided (reduced) into one new sub problem. This division takes only Θ(1) time. After a comparison with aq, the instance remaining to be solved (if any) can be solved by using this divide-and-conquer scheme again. If q is always chosen such that aq is the middle element (that is, q = ⌊(l + u)/2⌋), then the resulting search algorithm is known as binary search. The answer to the new sub problem is also the answer to the original problem P; there is no need for any combining.

Algorithm BinSrch is recursive and has four inputs a[ ], l, u, and x. It is initially invoked as BinSrch(a, 1, n, x).

Algorithm BinSrch(a, l, u, x)
// Given an array a[l:u] of elements in nondecreasing
// order, 1 ≤ l ≤ u, determine whether x is present, and
// if so, return j such that x = a[j]; else return 0.
{
    if (l = u) then // If Small(P)
    {
        if (x = a[l]) then return l;
        else return 0;
    }
    else
    { // Reduce P into a smaller subproblem.
        mid := ⌊(l + u)/2⌋;
        if (x = a[mid]) then return mid;
        else if (x < a[mid]) then
            return BinSrch(a, l, mid - 1, x);
        else return BinSrch(a, mid + 1, u, x);
    }
}


The iterative version of binary search is given below.

Algorithm BinSearch(a, n, x)
// Given an array a[1:n] of elements in nondecreasing
// order, n ≥ 0, determine whether x is present, and
// if so, return j such that x = a[j]; else return 0.
{
    low := 1; high := n;
    while (low ≤ high) do
    {
        mid := ⌊(low + high)/2⌋;
        if (x < a[mid]) then high := mid - 1;
        else if (x > a[mid]) then low := mid + 1;
        else return mid;
    }
    return 0;
}
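
This translates directly into a runnable program. The following Python version is a sketch of the same logic, adapted to 0-based indexing and returning -1 instead of 0 for an unsuccessful search (both adaptations are ours, not part of the original pseudocode).

def bin_search(a, x):
    # a is sorted in nondecreasing order; return an index j with a[j] == x,
    # or -1 if x is not present.
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1
        elif x > a[mid]:
            low = mid + 1
        else:
            return mid
    return -1

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
print(bin_search(a, 151))   # 13 (0-based position of 151)
print(bin_search(a, -5))    # -1 (unsuccessful search)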

Example:
Consider the set of elements

Index [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14]
element -15 -6 0 7 9 23 54 82 101 112 125 131 142 151

Total number of elements n=14

When x = 151:
low   high   mid
1     14     7
8     14     11
12    14     13
14    14     14    found

When x = -5:
low   high   mid
1     14     7
1     6      3
1     2      1
2     2      2
3     2      not found (low > high)

When x = 9:
low   high   mid
1     14     7
1     6      3
4     6      5    found


Theorem: Algorithm BinSearch(a, n, x) works correctly.

Proof: We assume that all statements work as expected and that comparisons such as x > a[mid] are appropriately carried out.

Initially low = 1, high = n, n ≥ 0, and a[1] ≤ a[2] ≤ ... ≤ a[n].
If n = 0, the while loop is not entered and 0 is returned.
Otherwise we observe that each time through the loop the possible elements to be checked for equality with x are a[low], a[low + 1], ..., a[mid], ..., a[high]. If x = a[mid], then the algorithm terminates successfully. Otherwise the range is narrowed by either increasing low to mid + 1 or decreasing high to mid - 1. Clearly this narrowing of the range does not affect the outcome of the search. If low becomes greater than high, then x is not present and hence the loop is exited.

To fully test binary search, we need not consider the values of a[1 : n]. By varying x sufficiently, we
can observe all possible computation sequences of BinSearch without devising different values for a. To
test all successful searches, x must take on the n values in a. To test all unsuccessful searches, x need
only take on n + 1 different values. Thus the complexity of testing BinSearch is 2n + 1.

Analysis of Binary search algorithm

Space complexity analysis: storage is required for the n elements of the array plus the variables low, high, mid, and x; therefore the space requirement is n + 4 locations.

Time complexity analysis


The basic operation of this algorithm is the element comparison, that is, the comparisons between x and the elements in a[ ]. We assume that only one comparison is needed to determine which of the three possibilities of the "if" statement holds.

The number of element comparisons needed to find each of the 14 elements is given below. No element
requires more than 4 comparisons to be found.

Index [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14]
element -15 -6 0 7 9 23 54 82 101 112 125 131 142 151
comparisons 3 4 2 4 3 4 1 4 3 4 2 4 3 4

To derive a formula for the time complexity, consider the sequence of values for mid that are produced by BinSearch for all possible values of x. These values can be described using a binary decision tree, in which the value of each node is the value of mid. For example, if n = 14, then Figure 2.1 contains the binary decision tree that traces the way in which the sequence of mid values is produced by BinSearch.

Figure 2.1 Binary decision tree for binary search n=14

The first comparison is of x with a[7]. If x < a[7], then the next comparison is with a[3]; similarly, if x > a[7], then the next comparison is with a[11]. Each path through the tree represents a sequence of
comparisons in the binary search method. If x is present, then the algorithm will end at one of the
circular nodes that list the index into the array where x was found. If x is not present, the algorithm will
terminate at one of the square nodes. Circular nodes are called internal nodes, and square nodes are
referred to as external nodes.

Worst Case Analysis

Theorem: If n is in the range [2^(k-1), 2^k), then BinSearch makes at most k element comparisons for a successful search and either k - 1 or k comparisons for an unsuccessful search. That is, the time for a successful search is O(log n) and for an unsuccessful search is Θ(log n).

Proof: Consider the binary decision tree describing the action of BinSearch on n elements. All successful searches end at a circular node whereas all unsuccessful searches end at a square node. If 2^(k-1) ≤ n < 2^k, then all circular nodes are at levels 1, 2, ..., k whereas all square nodes are at levels k and k + 1 (note that the root is at level 1). The number of element comparisons needed to terminate at a circular node on level i is i, whereas the number of element comparisons needed to terminate at a square node at level i is only i - 1. This proves the theorem.

Average Case Analysis


To determine the average behavior, the number of element comparisons needs to be related to the binary decision tree. The distance of a node from the root is one less than its level. The internal path length I
is the sum of the distances of all internal nodes from the root. Analogously, the external path length E is
the sum of the distances of all external nodes from the root. For any binary tree with n internal nodes, E
and I are related by the formula

E = I + 2n

Let As(n) be the average number of comparisons in a successful search, and Au(n) the average number of
comparisons in an unsuccessful search.

The number of comparisons needed to find an element represented by an internal node is one more than
the distance of this node from the root. Hence,


As(n) = 1 + I/n

The number of comparisons on any path from the root to an external node is equal to the distance
between the root and the external node. Since every binary tree with n internal nodes has n + 1 external
nodes, it follows that
Au(n) = E / (n + 1)

Using these three formulas for E, As(n), and Au(n), we find that

As(n) = 1 + I/n = 1 + (E - 2n)/n = 1 + ((n + 1)Au(n) - 2n)/n = (1 + 1/n)Au(n) - 1

From this formula we see that As(n) and Au(n) are directly related. The minimum value of As(n) (and
hence Au(n)) is achieved by an algorithm whose binary decision tree has minimum external and internal
path length. This minimum is achieved by the binary tree all of whose external nodes are on adjacent
levels, and this is the tree that is produced by the binary search algorithm. It follows that E is proportional to n log n. Using this in the preceding formulas, we conclude that As(n) and Au(n) are both proportional to log n. Thus we conclude that the average- and worst-case numbers of comparisons for binary search are the same, within a constant factor.

The best-case analysis


For a successful search only one element comparison is needed. For an unsuccessful search, ⌊log n⌋ element comparisons are needed in the best case.

Therefore the formulas that describe the time complexity of the algorithm in the best, average, and worst cases are:

successful searches:    Θ(1) (best),  Θ(log n) (average),  Θ(log n) (worst)
unsuccessful searches:  Θ(log n) (best, average, and worst)

2.3. Finding the Maximum and Minimum


The problem is to find the maximum and minimum items in a set of n elements.

Algorithm StraightMaxMin(a, n, max, min)
// Set max to the maximum and min to the minimum of a[1:n].
{
    max := min := a[1];
    for i := 2 to n do
    {
        if (a[i] > max) then max := a[i];
        if (a[i] < min) then min := a[i];
    }
}

Analyzing Time complexity


The time complexity of this algorithm is determined by the number of element comparisons. The
justification for this is that the frequency count for other operations in this algorithm is of the same order
as that for element comparisons.

(n-1) comparisons are needed to find the max element and (n-1) comparisons to find the min element.
Therefore, Algorithm StraightMaxMin requires 2(n - 1) element comparisons in the best, average, and
worst cases.

Improvement

The comparison a[i] < min is necessary only when a[i] > max is false. Hence we can replace the contents of the for loop by

if (a[i] > max) then max := a[i];
else if (a[i] < min) then min := a[i];

Time complexity analysis

Now the best case occurs when the elements are in increasing order. The number of element comparisons is n - 1. The worst case occurs when the elements are in decreasing order, in which case the number of element comparisons is 2(n - 1). On the average, a[i] is greater than max half the time, so the average number of comparisons is 3(n - 1)/2, which is less than 2(n - 1).

Divide and conquer algorithm for the problem


Let P = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem. Here n is the number of elements in the list a[i], ..., a[j] and we are interested in finding the maximum and minimum of this list. Let Small(P) be true when n ≤ 2. In this case, the maximum and minimum are both a[i] if n = 1. If n = 2, the problem can be solved by making one comparison.

If the list has more than two elements, P has to be divided into smaller instances. For example, we might divide P into the two instances P1 = (⌊n/2⌋, a[1], ..., a[⌊n/2⌋]) and P2 = (n - ⌊n/2⌋, a[⌊n/2⌋ + 1], ..., a[n]). After having divided P into two smaller sub problems, we can solve them by recursively invoking the same divide-and-conquer algorithm.

To combine P1 and P2: if MAX(P) and MIN(P) are the maximum and minimum of the elements in P, then MAX(P) is the larger of MAX(P1) and MAX(P2), and MIN(P) is the smaller of MIN(P1) and MIN(P2).

The recursive algorithm below finds the maximum and minimum using the divide-and-conquer technique.

Algorithm MaxMin(i, j, max, min)
// a[1:n] is a global array. Parameters i and j are integers,
// 1 ≤ i ≤ j ≤ n. The effect is to set max and min to the
// largest and smallest values in a[i:j], respectively.
{
    if (i = j) then max := min := a[i]; // Small(P)
    else if (i = j - 1) then // Another case of Small(P)
    {
        if (a[i] < a[j]) then
        {
            max := a[j]; min := a[i];
        }
        else
        {
            max := a[i]; min := a[j];
        }
    }
    else
    { // If P is not small, divide P into subproblems.
        // Find where to split the set.
        mid := ⌊(i + j)/2⌋;
        // Solve the subproblems.
        MaxMin(i, mid, max, min);
        MaxMin(mid + 1, j, max1, min1);
        // Combine the solutions.
        if (max < max1) then max := max1;
        if (min > min1) then min := min1;
    }
}

The procedure is initially invoked by the statement MaxMin(1, n, x, y).
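
The following Python version is a sketch of the same scheme; it returns the pair (max, min) instead of using reference parameters, which is an adaptation for Python rather than part of the original algorithm.

def max_min(a, i, j):
    # Return (largest, smallest) value in a[i..j] (inclusive, 0-based).
    if i == j:                      # Small(P): one element
        return a[i], a[i]
    if i == j - 1:                  # Small(P): two elements, one comparison
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    mid = (i + j) // 2              # divide P into two subproblems
    max1, min1 = max_min(a, i, mid)
    max2, min2 = max_min(a, mid + 1, j)
    # combine: larger of the two maxima, smaller of the two minima
    return (max1 if max1 > max2 else max2,
            min1 if min1 < min2 else min2)

a = [31, 7, -2, 18, 45, 6, 27, 12, 39]
print(max_min(a, 0, len(a) - 1))   # (45, -2)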

Example: Simulate MaxMin on a list of 9 elements. The recursive calls can be tracked with the help of a tree, as shown in Figure 2.2.

Figure 2.2 Recursive calls of MaxMin


The recurrence relation for the number of element comparisons is

T(n) = 2T(n/2) + 2    for n > 2
T(2) = 1
T(1) = 0
Solving the recurrence (substituting n = 2^k):

T(n) = 2T(2^(k-1)) + 2
     = 2[2T(2^(k-2)) + 2] + 2 = 2^2 T(2^(k-2)) + 2^2 + 2
     = 2^2 [2T(2^(k-3)) + 2] + 2^2 + 2 = 2^3 T(2^(k-3)) + 2^3 + 2^2 + 2

In general, after i steps: T(n) = 2^i T(2^(k-i)) + 2^i + 2^(i-1) + ... + 2^1

Substituting i = k - 1:
T(n) = 2^(k-1) T(2) + 2^(k-1) + 2^(k-2) + ... + 2^1
     = 2^(k-1) + 2^k - 2          (since T(2) = 1)
     = 3n/2 - 2

Therefore T(n) ∈ Θ(n).

Note that 3n/2 - 2 is the best-, average-, and worst-case number of comparisons when n is a power of two. Compared with the 2n - 2 comparisons for the straightforward method, this is a saving of 25% in comparisons.

In terms of storage, MaxMin is worse than the straightforward algorithm because it requires stack space for i, j, max, min, max1, min1, and the return address. Given n elements, there will be ⌊log₂ n⌋ + 1 levels of recursion, and we need to save seven values for each recursive call.

2.4. Merge Sort


Merge sort sorts a given array A[0..n - 1] by dividing it into two halves A[0..⌊n/2⌋ - 1] and A[⌊n/2⌋..n - 1], sorting each of them recursively, and then merging the two smaller sorted arrays into a single sorted one.

ALGORITHM Mergesort(A[0..n - 1])
//Sorts array A[0..n - 1] by recursive mergesort
//Input: An array A[0..n - 1] of orderable elements
//Output: Array A[0..n - 1] sorted in nondecreasing order
{
    if n > 1
    {
        copy A[0..⌊n/2⌋ - 1] to B[0..⌊n/2⌋ - 1]
        copy A[⌊n/2⌋..n - 1] to C[0..⌈n/2⌉ - 1]
        Mergesort(B[0..⌊n/2⌋ - 1])
        Mergesort(C[0..⌈n/2⌉ - 1])
        Merge(B, C, A)
    }
}


ALGORITHM Merge(B[0..p - 1], C[0..q - 1], A[0..p + q - 1])
//Merges two sorted arrays into one sorted array
//Input: Arrays B[0..p - 1] and C[0..q - 1] both sorted
//Output: Sorted array A[0..p + q - 1] of the elements of B and C
{
    i ← 0; j ← 0; k ← 0
    while i < p and j < q do
    {
        if B[i] ≤ C[j]
            A[k] ← B[i]; i ← i + 1
        else A[k] ← C[j]; j ← j + 1
        k ← k + 1
    }
    if i = p
        copy C[j..q - 1] to A[k..p + q - 1]
    else copy B[i..p - 1] to A[k..p + q - 1]
}
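
The pseudocode maps directly onto the following Python sketch; the auxiliary lists b and c play the same roles as B and C above.

def merge_sort(a):
    # Sorts list a in place by recursive mergesort.
    n = len(a)
    if n > 1:
        b = a[:n // 2]      # copy of the first half
        c = a[n // 2:]      # copy of the second half
        merge_sort(b)
        merge_sort(c)
        merge(b, c, a)

def merge(b, c, a):
    # Merge sorted lists b and c into a (len(a) == len(b) + len(c)).
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    if i == len(b):
        a[k:] = c[j:]       # b exhausted: copy the rest of c
    else:
        a[k:] = b[i:]       # c exhausted: copy the rest of b

a = [8, 3, 2, 9, 7, 1, 5, 4]
merge_sort(a)
print(a)                    # [1, 2, 3, 4, 5, 7, 8, 9]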

Example:
Simulation of merge sort for the list 8, 3, 2, 9, 7, 1, 5, 4

Figure 2.3 Merge sort operation

Time Complexity Analysis


Input size is n
Assuming that n is a power of 2, the recurrence relation for the number of key comparisons C(n) is
C(n) = 2C(n/2) + Cmerge(n) for n > 1, C(1) = 0.


Analysis of Cmerge(n)
Cmerge(n) is the number of key comparisons performed during the merging stage. At each step, exactly one comparison is made, after which the total number of elements in the two arrays still needing to be processed is reduced by 1. In the worst case, neither of the two arrays becomes empty before the other one contains just one element (e.g., when smaller elements come from the two arrays alternately). Therefore, for the worst case, Cmerge(n) = n - 1.

Now the recurrence is

Cworst(n) = 2Cworst(n/2) + n − 1 for n > 1, Cworst(1) = 0.

Hence, according to the Master Theorem, Cworst(n) ∈ Θ(n log n).

The principal shortcoming of mergesort is the linear amount of extra storage the algorithm requires.

2.5. Quick Sort


Quick sort is a sorting algorithm that is based on the divide-and-conquer approach. Quick sort divides its input elements according to their value.
Partition: array elements are arranged so that all the elements to the left of some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal to it. Therefore, after the partition, A[s] is in its final position in the sorted array, and the two sub arrays to the left and to the right of A[s] can be sorted independently. Combining the sub problems is not required.

Pseudo code for quick sort is given below

ALGORITHM Quicksort(A[l..r])
{
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
s ←Partition(A[l..r]) //s is a split position
Quicksort(A[l..s − 1])
Quicksort(A[s + 1..r])
}

Partition algorithm
 Partition starts by selecting a pivot; the sub array is divided based on the value of the pivot.
 The first element of the sub array is treated as the pivot: p = A[l].
 Now scan the sub array from both ends, comparing the elements to the pivot.
 The left-to-right scan, denoted below by index pointer i, starts with the second element. Since we
want elements smaller than the pivot to be in the left part of the sub array, this scan skips over
elements that are smaller than the pivot and stops upon encountering the first element greater
than or equal to the pivot.
 The right-to-left scan, denoted below by index pointer j, starts with the last element of the sub
array. Since we want elements larger than the pivot to be in the right part of the sub array, this
scan skips over elements that are larger than the pivot and stops on encountering the first element
smaller than or equal to the pivot.

After both scans stop, three situations may arise, depending on whether or not the scanning indices have crossed.
1. If i < j, exchange A[i] and A[j] and resume the scans by incrementing i and decrementing j, respectively.
2. If i > j, the sub array is partitioned after exchanging the pivot with A[j].
3. If i = j, the value they are pointing to must be equal to p. Thus, we have the sub array partitioned, with the split position s = i = j.

We can combine the last case with the case of crossed-over indices (i > j) by exchanging the pivot with A[j] whenever i ≥ j.

ALGORITHM HoarePartition(A[l..r])
//Partitions a subarray by Hoare's algorithm, using the first element as a pivot
//Input: Subarray of array A[0..n - 1], defined by its left and right indices l and r (l < r)
//Output: Partition of A[l..r], with the split position returned as this function's value
{
    p ← A[l]
    i ← l + 1; j ← r
    while i ≤ j do
    {
        while i ≤ r and A[i] ≤ p do i ← i + 1   // left-to-right scan
        while A[j] > p do j ← j - 1             // right-to-left scan
        if i < j then swap(A[i], A[j])          // indices have not crossed yet
    }
    swap(A[l], A[j])    // put the pivot into its final position
    return j
}


Note: In the pseudocode above, the check i ≤ r prevents index i from going out of the sub array's bounds. Alternatively, rather than checking for this possibility every time index i is incremented, we can append to array A[0..n - 1] a "sentinel" that would prevent index i from advancing beyond position n.
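
A runnable Python sketch of quicksort with this partition scheme follows; the in-place swaps mirror the pseudocode above.

def hoare_partition(a, l, r):
    # Partition a[l..r] around the pivot p = a[l]; return the split position.
    p = a[l]
    i, j = l + 1, r
    while i <= j:
        while i <= r and a[i] <= p:   # left-to-right scan (bounds-checked)
            i += 1
        while a[j] > p:               # right-to-left scan
            j -= 1
        if i < j:                     # indices have not crossed yet
            a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]           # put the pivot into its final position
    return j

def quicksort(a, l, r):
    if l < r:
        s = hoare_partition(a, l, r)  # s is a split position
        quicksort(a, l, s - 1)
        quicksort(a, s + 1, r)

a = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(a, 0, len(a) - 1)
print(a)   # [1, 2, 3, 4, 5, 7, 8, 9]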

Example: simulation of quick sort

Figure 2.4 Quick sort operations

Time complexity analysis

Best Case Analysis


The number of key comparisons made before a partition is achieved is n + 1 if the scanning indices cross over and n if they coincide. If all the splits happen in the middle of the corresponding sub arrays, we have the best case. The number of key comparisons in the best case satisfies the recurrence

Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.

According to the Master Theorem, Cbest(n) ∈ Θ(n log₂ n); solving it exactly for n = 2^k yields Cbest(n) = n log₂ n.


Worst Case Analysis


In the worst case, all the splits will be skewed to the extreme: one of the two subarrays will be empty,
and the size of the other will be just 1 less than the size of the sub array being partitioned. This
unfortunate situation will happen, in particular, for increasing arrays, i.e., for inputs for which the
problem is already solved. Indeed, if A[0..n − 1] is a strictly increasing array and we use A[0] as the
pivot, the left-to-right scan will stop on A[1] while the right-to-left scan will go all the way to reach A[0], indicating the split at position 0.

So, after making n + 1 comparisons to get to this partition and exchanging the pivot A[0] with itself, the algorithm will be left with the strictly increasing array A[1..n - 1] to sort. This sorting of strictly increasing arrays of diminishing sizes will continue until the last one A[n - 2..n - 1] has been processed. The total number of key comparisons made will be equal to

Cworst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 - 3 ∈ Θ(n²).

Average Case behavior

Partition can happen in any position s (0 ≤ s ≤ n - 1) after n + 1 comparisons are made to achieve the partition. After the partition, the left and right sub arrays will have s and n - 1 - s elements, respectively. Assuming that the partition split can happen in each position s with the same probability 1/n, we get the following recurrence relation:

Cavg(n) = (1/n) Σ (s = 0 to n - 1) [(n + 1) + Cavg(s) + Cavg(n - 1 - s)]  for n > 1,
Cavg(0) = 0, Cavg(1) = 0.

Its solution turns out to be

Cavg(n) ≈ 2n ln n ≈ 1.39 n log₂ n.

Thus, on the average, quick sort makes only 39% more comparisons than in the best case. Moreover, its
innermost loop is so efficient that it usually runs faster than merge sort on randomly ordered arrays of
nontrivial sizes. This certainly justifies the name given to the algorithm by its inventor.

Weakness
 It is not stable.
 It requires a stack to store parameters of sub arrays that are yet to be sorted.
 Although more sophisticated ways of choosing a pivot make the quadratic running time of the
worst case very unlikely, they do not eliminate it completely.


Table 2.1 Differences between merge sort and quick sort

Merge sort:
 Divides its input elements according to their position in the array.
 Division of the problem into two sub problems is immediate, and the entire work happens in combining the solutions to the sub problems.

Quick sort:
 Divides its input elements according to their value.
 The entire work happens in the division stage, with no work required to combine the solutions to the sub problems.

2.6. Strassen’s Matrix Multiplication


Let A and B be two n × n matrices. The product matrix C = AB is also an n × n matrix whose (i, j)th element is formed by taking the elements in the ith row of A and the jth column of B and multiplying them to get

C(i, j) = Σ (1 ≤ k ≤ n) A(i, k) · B(k, j)          ........ 2.3

for all i and j between 1 and n. To compute C(i, j) using this formula, we need n multiplications. As the matrix C has n² elements, the time for the resulting matrix multiplication algorithm, which we refer to as the conventional method, is Θ(n³).

Divide and conquer strategy to compute the product of two n × n matrices

We assume that n is a power of 2, that is, that there exists a nonnegative integer k such that n = 2^k. In case n is not a power of two, then enough rows and columns of zeros can be added to both A and B so that the resulting dimensions are a power of two.

Imagine that A and B are each partitioned into four square sub matrices, each sub matrix having dimensions n/2 × n/2. Then the product AB can be computed by using the formula of equation 2.3 for the product of 2 × 2 (block) matrices:

[A11  A12] [B11  B12]   [C11  C12]
[A21  A22] [B21  B22] = [C21  C22]

where
C11 = A11 B11 + A12 B21        C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21        C22 = A21 B12 + A22 B22          ........ 2.4

For n = 2 the elements of C can be computed directly using formula 2.4. For n > 2, the elements of C can be computed using 8 matrix multiplication and 4 matrix addition operations applied to matrices of size n/2 × n/2. Since n is a power of 2, these matrix products can be recursively computed by the same algorithm we are using for the n × n case. This algorithm will continue applying itself to smaller-sized sub matrices until n becomes suitably small (n = 2) so that the product is computed directly. Since two n/2 × n/2 matrices can be added in time cn² for some constant c, the overall computing time T(n) of the resulting divide-and-conquer algorithm is given by the recurrence

T(n) = b                   n ≤ 2
T(n) = 8T(n/2) + cn²       n > 2

where b and c are constants.

The recurrence can be solved to obtain T(n) = O(n³). Hence no improvement over the conventional method has been made.
Since matrix multiplications are more expensive than matrix additions (O(n³) versus O(n²)), we can attempt to reformulate the equations for Cij so as to have fewer multiplications and possibly more additions. Volker Strassen discovered a way to compute the Cij's of equation 2.4 using only 7 multiplications and 18 additions or subtractions, as given in formulas 2.5 and 2.6.

P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)          ........ 2.5

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U          ........ 2.6

The resulting recurrence relation for T(n) is

T(n) = b                   n ≤ 2
T(n) = 7T(n/2) + an²       n > 2

where a and b are constants. Solving the recurrence using the Master Theorem (a = 7 > b^d = 2² = 4), we get

T(n) = O(n^(log₂ 7)) ≈ O(n^2.81)
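
The following Python sketch applies Strassen's seven products recursively to square matrices whose order is a power of 2 (the same simplifying assumption as in the text); matrices are plain lists of lists, and the recursion bottoms out at 1 × 1 rather than 2 × 2 for brevity.

def strassen(A, B):
    # Multiply two n x n matrices (n a power of 2) with 7 recursive products.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    def quad(M):   # split M into its four n/2 x n/2 quadrants
        return ([row[:m] for row in M[:m]], [row[m:] for row in M[:m]],
                [row[:m] for row in M[m:]], [row[m:] for row in M[m:]])
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    add = lambda X, Y: [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    sub = lambda X, Y: [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]
    P = strassen(add(A11, A22), add(B11, B22))   # the 7 products of formula 2.5
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(P, S), T), V)              # combine as in formula 2.6
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]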

2.7. Advantages and disadvantages of divide and conquer


Advantages
1. A problem is divided into sub problems and each sub problem is solved independently; therefore it is a good approach for solving difficult problems.
2. The divide-and-conquer technique is probably the best-known general algorithm design technique.
3. It facilitates the discovery of efficient algorithms.
4. Sub problems can be executed on parallel processors to solve the problem faster.

Disadvantages
1. A large number of sub lists are created and need to be processed.
2. It makes use of recursive methods. Recursive methods are slow and require stack space for each recursive call.


2.8. Decrease and Conquer


The decrease-and-conquer technique is based on
 exploiting the relationship between a solution to a given instance of a problem and a solution to its smaller instance;
 once such a relationship is established, it can be exploited either top down (recursively) or bottom up (iteratively).
The bottom-up variation is usually implemented iteratively, starting with a solution to the smallest instance of the problem; it is sometimes called the incremental approach.

There are three major variations of decrease-and-conquer:


 Decrease by a constant
 Decrease by a constant factor
 Variable size decrease

2.8.1. Decrease By a constant


The size of an instance is reduced by the same constant on each iteration of the algorithm. Typically, this constant is equal to one (Figure 2.5).

Figure 2.5 Decrease(by one) and conquer technique


Example: Consider the exponentiation problem of computing a^n where a ≠ 0 and n is a nonnegative integer. The relationship between a solution to an instance of size n and an instance of size n - 1 is obtained by the formula a^n = a^(n-1) · a. So the function f(n) = a^n can be computed either "top down" by using its recursive definition or "bottom up" by multiplying a by itself n times. The formula is

f(n) = f(n - 1) · a   if n > 0
f(0) = 1

Example: a^5 = a^4 · a
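
Both directions are sketched below in Python (illustrative code, not part of the original notes): the top-down version follows the recursive definition, and the bottom-up version is the incremental approach.

def power_topdown(a, n):
    # decrease by one, recursive: a^n = a^(n-1) * a
    return 1 if n == 0 else power_topdown(a, n - 1) * a

def power_bottomup(a, n):
    # decrease by one, iterative: multiply a by itself n times
    result = 1
    for _ in range(n):
        result *= a
    return result

print(power_topdown(2, 5), power_bottomup(2, 5))   # 32 32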

2.8.2. Decrease by a constant factor


The size of a problem instance is reduced by the same constant factor on each iteration of the algorithm. In most applications, this constant factor is equal to two.

Figure 2.6 Decrease (by half) and conquer technique

Example: Consider the exponentiation problem again. If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2), with the relationship between the two: a^n = (a^(n/2))². But since we consider instances with integer exponents only, this works only for even n. If n is odd, we have to compute a^(n-1) by using the rule for even-valued exponents and then multiply the result by a. To summarize, we have the following formula:

a^n = (a^(n/2))²              if n is even and positive
a^n = (a^((n-1)/2))² · a      if n is odd
a^0 = 1


Example: a^6 = (a^3)², a^5 = (a^2)² · a

If we compute a^n recursively according to this formula, the time complexity T(n) is Θ(log n).
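
A Python sketch of the decrease-by-half rule (illustrative code; the odd case falls back to the even-exponent rule, as described above):

def fast_power(a, n):
    # decrease by a constant factor (half): a^n = (a^(n/2))^2 for even n
    if n == 0:
        return 1
    if n % 2 == 0:
        half = fast_power(a, n // 2)
        return half * half
    return fast_power(a, n - 1) * a   # odd n: reduce to the even case

print(fast_power(2, 6), fast_power(2, 5))   # 64 32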

2.8.3. Variable size Decrease


The size-reduction pattern varies from one iteration of an algorithm to another. Example: Euclid's algorithm for computing the greatest common divisor,

gcd(m, n) = gcd(n, m mod n).

Though the value of the second argument is always smaller on the right-hand side than on the left-hand side, it decreases neither by a constant nor by a constant factor.
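
A minimal Python sketch of Euclid's algorithm:

def gcd(m, n):
    # variable-size decrease: gcd(m, n) = gcd(n, m mod n)
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))   # 12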

2.9. Topological sorting


 Topological sorting is a problem on directed graphs that has applications in prerequisite-restricted tasks.
 A directed graph, or digraph for short, is a graph with directions specified for all its edges.
 The adjacency matrix and adjacency lists are the two principal means of representing a digraph.
 Depth-first search and breadth-first search are the principal algorithms for traversing digraphs.
 Four types of edges are possible in a DFS forest of a directed graph (Figure 2.7a), shown in Figure 2.7b: tree edges (ab, bc, de); back edges (ba) from vertices to their ancestors; forward edges (ac) from vertices to their descendants in the tree other than their children; and cross edges (dc).

FIGURE 2.7 (a) Digraph. (b) DFS forest of the digraph for the DFS traversal started at a.

Example :
Consider a set of five required courses {C1, C2, C3, C4, C5} a part-time student has to take in some
degree program. The courses can be taken in any order as long as the following course prerequisites are
met: C1 and C2 have no prerequisites, C3 requires C1 and C2, C4 requires C3, and C5 requires C3 and
C4. The student can take only one course per term. In which order should the student take the courses?

The situation can be modeled by a digraph in which vertices represent courses and directed edges indicate prerequisite requirements, as shown in Figure 2.8.

Figure 2.8 Digraph representing the prerequisites of the courses

The problem of topological sorting asks to list the vertices of a digraph in such an order that for every edge in the graph, the vertex where the edge starts is listed before the vertex where the edge ends. Thus, for topological sorting to be possible, the digraph in question must be a DAG (directed acyclic graph).

There are two efficient algorithms that both verify whether a digraph is a dag and, if it is, produce an ordering of vertices that solves the topological sorting problem.

Application of Depth first search algorithm to find topological sorting

Perform a DFS traversal and note the order in which vertices become dead ends (i.e., are popped off the traversal stack). Reversing this order yields a solution to the topological sorting problem, provided no back edge has been encountered during the traversal. If a back edge has been encountered, the digraph is not a dag, and topological sorting of its vertices is impossible.
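
The sketch below implements this DFS-based method in Python (illustrative code): vertices are appended to a list as they become dead ends, and the reversed list is the topological order; a "gray" state detects back edges.

def topo_sort_dfs(graph):
    # graph: dict mapping each vertex to a list of adjacent vertices
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / finished
    color = {v: WHITE for v in graph}
    order = []
    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY:          # back edge: the digraph is not a dag
                raise ValueError("not a dag")
            if color[w] == WHITE:
                dfs(w)
        color[v] = BLACK
        order.append(v)                   # v becomes a dead end (popped off)
    for v in graph:
        if color[v] == WHITE:
            dfs(v)
    return order[::-1]                    # reverse the popping-off order

courses = {'C1': ['C3'], 'C2': ['C3'], 'C3': ['C4', 'C5'], 'C4': ['C5'], 'C5': []}
print(topo_sort_dfs(courses))   # ['C2', 'C1', 'C3', 'C4', 'C5'] (one valid order)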

Figure 2.9 illustrates an application of this algorithm

Figure 2.9 a) digraph b) DFS traversal stack with the subscript numbers indicating the popping off order
(c) Solution to the problem.

Source Removal Technique


This is a direct implementation of the decrease-(by one)-and-conquer technique: repeatedly identify in the remaining digraph a source, which is a vertex with no incoming edges, and delete it along with all the edges outgoing from it. (If there are several sources, break the tie arbitrarily. If there are none, stop because the problem cannot be solved.) The order in which the vertices are deleted yields a solution to the topological sorting problem. The application of this algorithm to the same digraph representing the five courses is given in Figure 2.10.
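
A sketch of the source-removal method in Python (illustrative code): in-degree counters substitute for physically deleting vertices and edges.

from collections import deque

def topo_sort_source_removal(graph):
    # graph: dict mapping each vertex to a list of adjacent vertices
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1
    sources = deque(v for v in graph if indegree[v] == 0)
    order = []
    while sources:
        v = sources.popleft()             # pick a source (ties broken by queue order)
        order.append(v)
        for w in graph[v]:                # "delete" v and its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    if len(order) != len(graph):          # some vertex never became a source
        raise ValueError("not a dag")
    return order

courses = {'C1': ['C3'], 'C2': ['C3'], 'C3': ['C4', 'C5'], 'C4': ['C5'], 'C5': []}
print(topo_sort_source_removal(courses))  # ['C1', 'C2', 'C3', 'C4', 'C5']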


Figure 2.10 Source Removal Algorithm

Applications of topological sorting in computer science


 Instruction scheduling in program compilation
 Cell evaluation ordering in spreadsheet formulas
 Resolving symbol dependencies in linkers
