
Dynamic Programming

Ivor Page [1]


This note explains the principles behind dynamic programming through some simple examples. In some of the examples, a greedy algorithm exists that may run in less time than the dynamic programming solution given; the solutions are presented to help you understand the principles of dynamic programming.

A problem can be solved by dynamic programming techniques if the solution to the problem can be constructed from solutions to its sub-problems. Typically such problems suggest a recursive solution. For example, to find the nth Fibonacci number Fib(n), we know that

Fib(n) = Fib(n-1) + Fib(n-2)

The definition of Fibonacci numbers shows how to construct a solution from solutions to sub-problems. However, it is easy to show that a purely recursive solution runs in O(2^n), or exponential, time.

A dynamic programming solution collects solutions to all sub-problems in an array. For the Fibonacci problem, we might start by computing Fib(2) (after filling in the entries Fib(0) = 1 and Fib(1) = 1) and compute values for Fib(3), Fib(4), and so on up to the required value, recording all computed values in an array. However, we observe that, to compute Fib(n), we only need the previous two Fibonacci numbers, Fib(n-1) and Fib(n-2), so our array reduces to a queue of size two.
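As a sketch, the two-variable scheme just described might look like this in C++ (using the note's convention Fib(0) = Fib(1) = 1; the function name is illustrative):

```cpp
#include <cstdint>

// Iterative Fibonacci keeping only the last two values --
// the "queue of size two" described above.
uint64_t fib(int n) {
    uint64_t prev = 1, curr = 1;       // Fib(0) and Fib(1)
    for (int i = 2; i <= n; i++) {
        uint64_t next = prev + curr;   // Fib(i) = Fib(i-1) + Fib(i-2)
        prev = curr;                   // slide the two-element window forward
        curr = next;
    }
    return curr;
}
```

Under this convention fib(10) returns 89; the loop does O(n) work and O(1) storage replaces the full array.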

The Ways to Give Change Problem


This problem requires us to compute the number of ways that a certain dollar amount can be given, using only the set of coin and note denominations stated in the problem. For example, we might restrict the denominations to the set {1, 5, 10, 25, 50} and ask how many ways we can use any number of coins of each type such that the sum equals $100.

Once again, a recursive solution is suggested in which solutions to sub-problems can be combined to produce the solution to the main problem. An array d[] is initialized with the denominations in increasing order. Then

ways(amount, d[1..k]) = ways(amount, d[1..k-1]) + ways(amount - d[k], d[1..k])

That is, a way of making the amount either avoids the largest denomination d[k] entirely, or uses it at least once. For example, the number of ways to make up 10 cents using the set {1, 5} is given by:

ways(10, {1, 5}) = ways(10, {1}) + ways(5, {1, 5}) = 1 + 2 = 3
[1] University of Texas at Dallas
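A minimal sketch of the recurrence as a recursive C++ function (the names `ways` and `d` are illustrative; exponential time, shown only to make the recurrence concrete):

```cpp
#include <vector>

// Direct translation of the change-counting recurrence.
// d holds the denominations in increasing order; k is how many
// of them are still available.
int ways(int amount, const std::vector<int>& d, int k) {
    if (amount == 0) return 1;            // one way to give nothing
    if (amount < 0 || k == 0) return 0;   // no valid way
    return ways(amount, d, k - 1)             // never use d[k-1]
         + ways(amount - d[k - 1], d, k);     // use d[k-1] at least once
}
```

For instance, ways(10, {1, 5}, 2) returns 3, matching the worked example above.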


A recursive method can easily be developed from this recurrence, but the running time is exponential for large values. As is usually the case with problems that yield to dynamic programming techniques, a straight recursive approach solves each sub-problem many times. Dynamic programming replaces the recursion with iteration. We build an array, computing solutions to ALL sub-problems as we go. The array will have columns equal to the number of denominations and rows equal to the amount.

For example, to solve the problem ways(15, {1, 2, 4, 5}), we build an array of size 15 × 4 and begin by solving entries in the top row:

ways(1, {1}), ways(1, {1, 2}), ways(1, {1, 2, 4}), ways(1, {1, 2, 4, 5})

Using these results, we compute entries in the second row, ways(2, {1}), ways(2, {1, 2}), ways(2, {1, 2, 4}), and so on. In the program, an array w will be used to store the solutions to sub-problems. Note that we can construct the array in this order since the update mechanism is as follows:

w[i][j] = w[i][j-1] + w[i - denom[j]][j]

where denom[j] is the denomination added in column j. A new entry is computed from the entry to its left and the entry in the same column denom[j] positions above it. Here is the resulting array w:
Amount   {1}   {1,2}   {1,2,4}   {1,2,4,5}
   0      1      1        1          1
   1      1      1        1          1
   2      1      2        2          2
   3      1      2        2          2
   4      1      3        4          4
   5      1      3        4          5
   6      1      4        6          7
   7      1      4        6          8
   8      1      5        9         11
   9      1      5        9         13
  10      1      6       12         17
  11      1      6       12         19
  12      1      7       16         24
  13      1      7       16         27
  14      1      8       20         33
  15      1      8       20         37

A row for amount = 0 has been added because, in the logic of the solution, ways(0, S) = 1, where S is any non-empty set of denominations. However, ways(n, S) = 0 if n < 0. Entries in the first row and first column are initially set equal to 1. Look at the bottom right entry (the solution): ways[15][4] = ways[15][3] + ways[15 - 5][4] = 20 + 17 = 37. All entries are calculated in this way.
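The table-filling scheme just described can be sketched as follows (the function name `countWays` is illustrative); row i = 0 encodes the ways(0, S) = 1 base case, and the guard on i plays the role of ways(n, S) = 0 for n < 0:

```cpp
#include <vector>

// w[i][j] = number of ways to make amount i from the first j
// denominations; filled row by row exactly as in the table above.
int countWays(int amount, const std::vector<int>& denom) {
    int k = static_cast<int>(denom.size());
    std::vector<std::vector<int>> w(amount + 1, std::vector<int>(k + 1, 0));
    for (int j = 0; j <= k; j++) w[0][j] = 1;        // ways(0, S) = 1
    for (int i = 1; i <= amount; i++) {
        for (int j = 1; j <= k; j++) {
            w[i][j] = w[i][j - 1];                   // entry to the left
            if (i >= denom[j - 1])
                w[i][j] += w[i - denom[j - 1]][j];   // same column, denom[j-1] rows up
        }
    }
    return w[amount][k];
}
```

countWays(15, {1, 2, 4, 5}) returns 37, the bottom-right entry of the table.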


The Desert Traversal Problem


Here is another simple problem. You are given an n × m grid of squares G with positive integers in the squares:

1 6 3 4 2
8 5 2 1 3
4 6 1 9 4
1 5 1 3 6

The object is to find the least-cost path from any position above the top row to any position below the bottom row. That is, we can begin our path in any of the squares in the top row and end it in any of the squares in the bottom row. At each step, we can move vertically down, or diagonally down one square to the left or the right in the next row down. The cost of a step is equal to the positive difference between the cell weights, and entering a top-row square costs its weight. So, if the chosen path is vertically down the left edge, where the cell weights are 1, 8, 4, 1, the cost of that path is 1 + |1 - 8| + |8 - 4| + |4 - 1| = 1 + 7 + 4 + 3 = 15.

To solve this problem we construct an array C of the same size as the above grid, where c(i,j) is the minimum cost of getting from above the grid to cell (i, j) in the grid. We won't worry about storing the path yet. Obviously, the first row of C is equal to the first row of G. The second row of C is added as follows:

1 6 3 4 2
8 5 4 3 3

Consider the middle entry of the 2nd row. There are 3 ways to get to that cell from its neighbors in the first row: from the 6, the 3, and the 4. The costs of those paths are 6 + 4, 3 + 1, and 4 + 2. The middle path is the best, so the corresponding entry in C is 4. Note that no entry of C can be less than the corresponding entry in G. Each row of the array is developed in the same way. It's easy to complete on paper. Check my 1 minute answer:
1 6 3 4 2
8 5 4 3 3
6 6 3 9 4
9 7 3 5 6

How do we use the table to construct the path? Clearly the minimum-cost path ends in the middle square of the bottom row, where C = 3. The previous square on the path must be one of its three neighbors in the row above; it is the middle one, also with C = 3. Before that square we have the 3 square above and to the right, and then the 2 square in the top right-hand corner. The best path is through the squares with C values 2, 3, 3, 3, with total cost 3.
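The row-by-row construction of C described above can be sketched as follows (the function name `minTraversal` is illustrative; this version returns only the cost, not the path):

```cpp
#include <algorithm>
#include <climits>
#include <cstdlib>
#include <vector>

// C[i][j] = cheapest cost of reaching cell (i, j) from above the grid,
// where entering a top-row cell costs its weight and each step costs
// the absolute difference of the two cell weights.
int minTraversal(const std::vector<std::vector<int>>& G) {
    int n = static_cast<int>(G.size());
    int m = static_cast<int>(G[0].size());
    std::vector<std::vector<int>> C(n, std::vector<int>(m));
    C[0] = G[0];                                 // first row of C = first row of G
    for (int i = 1; i < n; i++) {
        for (int j = 0; j < m; j++) {
            C[i][j] = INT_MAX;
            for (int dj = -1; dj <= 1; dj++) {   // the three neighbors above
                int pj = j + dj;
                if (pj < 0 || pj >= m) continue;
                C[i][j] = std::min(C[i][j],
                                   C[i - 1][pj] + std::abs(G[i][j] - G[i - 1][pj]));
            }
        }
    }
    // the path may leave the grid from any bottom-row square
    return *std::min_element(C[n - 1].begin(), C[n - 1].end());
}
```

Applied to the 4 × 5 grid above it returns 3, the cost found in the text. To recover the path itself, one would also record, for each cell, which of the three neighbors supplied the minimum.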

Floyd's All-Pairs Shortest Path Algorithm


Finally, we have the famous all-pairs shortest path in a graph problem. We have an undirected graph G(V, E) where |V| = n, and every edge e(i,j) in E has a positive weight associated with it. A shortest path between vertices vi and vj is a path with minimum sum of edge weights. The algorithm is frequently used to find a minimal-cost path between two specified vertices but, in the usual manner of dynamic programming, it solves all sub-problems and, in doing so, finds minimal paths between all pairs of vertices. Dijkstra's greedy algorithm is faster for the minimal-cost path between two specific vertices: a trivial version of Dijkstra runs in time O(n^2), while a version employing a Fibonacci heap runs in time O(|E| + n log n). Floyd's algorithm runs in time O(n^3), but the computations for each iteration are very fast. A 2 GHz PC can run Floyd's algorithm on graphs of 200 nodes in a few seconds.

Floyd's dynamic programming algorithm works as follows. A is the adjacency matrix, where a(i,j) is the cost of edge e(i,j) and a(i,i) = 0 for all 0 <= i < n. The vertices are numbered starting at zero. We maintain an array D of dimensions n × n where d(i,j) is the minimum cost of a path from vi to vj using the set of vertices considered so far. Initially, for all i, j, d(i,j) = a(i,j), so only those pairs of vertices connected by an edge have a finite entry in D. Now we add one vertex at a time and see if a better path can be found using the new vertex. Say we have already computed the minimum cost of getting from vi to vj using only vertices in the set {v0, v1, ..., v(k-1)}. Now we consider whether going via the new vertex vk is better than the path that we have already found. The decision is simply:

if (d[i][k] + d[k][j] < d[i][j]) {
    d[i][j] = d[i][k] + d[k][j];
    // update the path information for i to j to include k
}

Having considered, for all pairs of vertices, whether going via vertex vk is a better route than using only vertices in the set {v0, v1, ..., v(k-1)}, we repeat the process considering vertex vk+1, and so on.
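The triple loop this describes can be sketched as follows (a minimal version that updates costs in place, without path bookkeeping):

```cpp
#include <vector>

// Large-but-safe stand-in for "no edge"; chosen so that
// INF + INF still fits in an int.
const int INF = 1000000000;

// Floyd's algorithm: d starts as the adjacency matrix, and after
// the k-th iteration d[i][j] is the cheapest path from i to j
// using only intermediate vertices from {0, ..., k}.
void floyd(std::vector<std::vector<int>>& d) {
    int n = static_cast<int>(d.size());
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
}
```

For example, on three vertices with edge weights d[0][1] = 4, d[1][2] = 1, d[0][2] = 10, the pass with k = 1 improves d[0][2] to 4 + 1 = 5.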

Here is the complete algorithm in the context of the problem Frogger (here the "distance" of a path is its largest single jump, so min/max replaces the sum-and-compare update):

// calculate all pairs direct distances
for (int i = 0; i < nstones; i++)
    for (int j = 0; j < nstones; j++)
        A[i][j] = d[i][j] = pts[i]->distance(*pts[j]);

// find minimum frog distance
for (int k = 1; k < nstones; k++)
    for (int i = 0; i < nstones; i++)
        for (int j = 0; j < nstones; j++)
            A[i][j] = min(A[i][j], max(A[i][k], A[k][j]));

cout << endl << nstones << ", " << A[0][1] << endl << endl;

Summary

A problem yields to dynamic programming if:

- The solution to the given problem can be constructed from solutions to its sub-problems.
- A recursive solution exists that builds on sub-problems, but computes solutions to the sub-problems many times.
- An array of solutions to sub-problems can be constructed with an iterative scheme (the recursion is abandoned).

Not all problems can be solved by combining solutions to their sub-problems.
