Fundamentals of Data Structures in C++
Dinesh P. Mehta
Chapter 1
BASIC CONCEPTS
1.3.1
1.3.3
ADT Bag is
objects: A collection of items where the items are in the subrange
of integers starting at 0 and ending at MAXINT on the computer.
functions:
for all i ∈ (0,MAXINT)
1.5.1
void horner()
{
  int *a, n, x, out;
  cout << "Input n, the coefficients, and x" << endl; // 1
  cin >> n; // 1
  a = new int[n+1]; // 1
  for (int loop = 0; loop <= n; loop++) // n+2
    cin >> a[loop]; // n+1
  cin >> x; // 1
  for (loop = n, out = 0; loop >= 0; loop--) // n+2
    out = out * x + a[loop]; // n+1
  cout << out << endl; // 1
}
The comments at the end of each statement indicate how many times it was executed.
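As a check on these annotations, which sum to 4n + 11, here is a sketch (our own hypothetical helper, not the book's program) that performs the same Horner evaluation while tallying the counts:

```cpp
#include <vector>

// Evaluate a[0] + a[1]*x + ... + a[n]*x^n by Horner's rule while
// accumulating the statement counts annotated above (they sum to 4n + 11).
int horner_eval(const std::vector<int>& a, int x, long& steps) {
    int n = (int)a.size() - 1;
    steps = 3;                          // cout, cin >> n, new: 1 each
    steps += (n + 2) + (n + 1);         // input loop: n+2 tests, n+1 reads
    steps += 1;                         // cin >> x
    int out = 0;
    for (int i = n; i >= 0; i--)        // n+2 loop tests
        out = out * x + a[i];           // n+1 body executions
    steps += (n + 2) + (n + 1) + 1;     // loop tests, body, final cout
    return out;
}
```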
1.5.3
1.5.6
Recursive Version
int factorial (int n)
{
  if (n <= 1) return 1;
  else return (n * factorial(n-1));
}
Iterative Version
int factorial_iter (int n)
{
int output = 1;
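The rest of the iterative version is lost at the page break; a minimal completion consistent with the fragment above might be:

```cpp
// Iterative factorial; output plays the role of the accumulator
// declared in the fragment above.
int factorial_iter(int n) {
    int output = 1;
    for (int i = 2; i <= n; i++)
        output *= i;
    return output;
}
```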
1.5.8
1.5.10
Recursive Function
int Ackermann (int m, int n)
{
  if (m == 0) return n+1;
  if (n == 0) return Ackermann(m-1, 1);
  return Ackermann(m-1, Ackermann(m, n-1));
}
Nonrecursive Algorithm
int Ackermann (int m, int n) // Pseudocode
{
// Assume that there are two stacks available to us. stack2 can be thought of as result
// stack where results are pushed for future use. This program can be written with one
// stack but we use two stacks for clarity.
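The two-stack pseudocode breaks off at the page boundary. As an illustration of the same idea, here is a compact one-stack iterative sketch (our own `ackermann_iter`, not the manual's two-stack routine):

```cpp
#include <stack>

// Iterative Ackermann: the stack holds pending values of m whose
// evaluation is suspended, mirroring the recursive call structure.
int ackermann_iter(int m, int n) {
    std::stack<int> s;
    s.push(m);
    while (!s.empty()) {
        m = s.top(); s.pop();
        if (m == 0) n = n + 1;                        // A(0, n) = n + 1
        else if (n == 0) { n = 1; s.push(m - 1); }    // A(m, 0) = A(m-1, 1)
        else { s.push(m - 1); s.push(m); n = n - 1; } // A(m, n) = A(m-1, A(m, n-1))
    }
    return n;
}
```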
1.5.12
1.5.14
}
}
1.7.1
1.7.3
(a)
(b)
1.7.5
(a)
count++;
}
count++; // for last time of for j
}
count++; // for last time of for i
}
(b)
(d)
1.7.7
(a)
void Multiply(int **a, int **b, int **c, int m, int n, int p)
{
  for (int i = 0; i < m; i++) {
    count++; // count is global
    for (int j = 0; j < p; j++) {
      count++;
      c[i][j] = 0;
      count++;
      for (int k = 0; k < n; k++) {
        count++;
        c[i][j] += a[i][k] * b[k][j];
        count++;
      }
      count++; // for last time of for k
    }
    count++; // for last time of for j
  }
  count++; // for last time of for i
}
The following program is obtained by simplifying the program above: the matrix statements are removed and the count++ statements are combined.
void Multiply(int **a, int **b, int **c, int m, int n, int p)
{
  for (int i = 0; i < m; i++) {
    count += 2; // count is global
    for (int j = 0; j < p; j++) {
      count += 3;
      for (int k = 0; k < n; k++)
        count += 2;
    }
  }
  count++; // for last time of for i
}
(b) It is profitable to interchange the two outermost for loops when p < m. In that case the
total number of steps becomes 2mpn + 3mp + 2p + 1.
1.7.9
(a) If 10n² + 9 = O(n), then 10n² + 9 ≤ cn for some positive constant c and for all n ≥ n₀ by definition. We can disprove this in the following way: 10n² + 9 ≤ cn ⇒ 10n² ≤ cn ⇒ 10n ≤ c. Choosing n > max{n₀, c/10}, we have a contradiction.
(b) As in the previous problem, we would have to find c₁, c₂, and n₀ such that c₁n² ≤ n² log n ≤ c₂n² for all n ≥ n₀. But n² log n ≤ c₂n² ⇒ log₂ n ≤ c₂ ⇒ n ≤ 2^c₂. Choosing n > max{n₀, 2^c₂}, there is a contradiction.
(d) If this is true, n³2ⁿ + 6n²3ⁿ ≤ cn³2ⁿ for some positive constant c and for all n ≥ n₀. This means 6n²3ⁿ ≤ cn³2ⁿ ⇒ 3ⁿ ≤ cn2ⁿ. Since n ≤ (1.1)ⁿ for all sufficiently large n, we have 3ⁿ ≤ c(2.2)ⁿ ⇒ (3/2.2)ⁿ ≤ c. The left-hand side increases without bound as n increases whereas the right-hand side is a constant, which is a contradiction. Hence, the claimed equality is incorrect.
1.7.11
There are two for loops, one at line 3 and the other at line 7. While i goes from 0 to n − 1, k goes from i + 1 to n − 1, so line 8 gets executed n − 1 times, n − 2 times, and so on. The total number of times line 8 is executed is (n − 1) + (n − 2) + ... + 1, which is n(n − 1)/2. No other line is executed more times than line 8. Hence, the execution time of the program SelectionSort is O(n²).
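The count n(n − 1)/2 can be confirmed by instrumenting a selection sort (a sketch with our own names, not the book's exact program):

```cpp
#include <vector>
#include <algorithm>

// Selection sort that counts how many times the inner comparison
// (the analogue of line 8) executes; the total is n(n-1)/2.
long selection_sort_count(std::vector<int>& a) {
    long count = 0;
    int n = (int)a.size();
    for (int i = 0; i < n; i++) {
        int least = i;
        for (int k = i + 1; k < n; k++) {
            count++;                        // inner-loop comparison
            if (a[k] < a[least]) least = k;
        }
        std::swap(a[i], a[least]);
    }
    return count;
}
```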
1.7.15
This program makes use of the procedures add and mult. In every call of mult there are
n3 multiplications and n3 additions. In add there are n2 additions and 0 multiplications.
Procedure mult has to be called 4 times, meaning 4n3 multiplications and 4n3 additions.
Procedure add has to be called 2 times (including subtractions) which means additions go
up by 2n2 . Hence, the number of additions (including subtractions) is 2n2 + 4n3 , and the
number of multiplications is 4n3 .
Chapter 2
ARRAYS
2.1.1
2.1.5
#include <iostream.h>
#include <math.h>
class Polynomial
{
friend ostream& operator<<(ostream& os, Polynomial& p);
friend istream& operator>>(istream& is, Polynomial& p);
public:
  Polynomial(int c0 = 0, int c1 = 0, int c2 = 0) : c(c0), b(c1), a(c2) { }
  Polynomial operator+(const Polynomial& s) {
    Polynomial sum; sum.a = a + s.a; sum.b = b + s.b; sum.c = c + s.c;
    return sum;
  }
  float eval(float x) { return a*x*x + b*x + c; }
  void roots();
private:
  int a, b, c;
};
ostream& operator<<(ostream& os, Polynomial& p)
{
  os << "Polynomial is: " << p.a << "x^2 + " << p.b << "x + " << p.c << endl;
  return os;
}
void Polynomial::roots() {
  if (b*b - 4*a*c >= 0) {
    float r1 = (-b + sqrt(b*b - 4*a*c)) / (2 * a);
    float r2 = (-b - sqrt(b*b - 4*a*c)) / (2 * a);
    cout << "Roots are " << r1 << " and " << r2 << endl;
  }
  else {
    // use functions defined in 2.1.2
    Complex c1((-b)/(2*a), sqrt(4*a*c - b*b)/(2*a));
    Complex c2((-b)/(2*a), -sqrt(4*a*c - b*b)/(2*a));
    cout << "Roots are " << c1 << " and " << c2 << endl;
  }
}
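For reference, the same root computation can be expressed with `std::complex` instead of the Complex class of 2.1.2 (a self-contained sketch; the function name is ours):

```cpp
#include <complex>
#include <utility>

// Roots of ax^2 + bx + c via the quadratic formula; complex arithmetic
// handles the negative-discriminant case uniformly.
std::pair<std::complex<double>, std::complex<double>>
quad_roots(double a, double b, double c) {
    std::complex<double> d = std::sqrt(std::complex<double>(b*b - 4*a*c, 0));
    return { (-b + d) / (2*a), (-b - d) / (2*a) };
}
```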
2.2.1
#include <iostream.h>
const int defaultSize = 100;
const float defaultValue = 0.0;
class CppArray {
friend ostream& operator<<(ostream&, CppArray&);
friend istream& operator>>(istream&, CppArray&);
public:
  CppArray(int, float);
  CppArray(const CppArray&);
  ~CppArray();
  CppArray& operator=(const CppArray&);
  float& operator[](int i);
  int GetSize();
private:
  int n;
  float *cpparr;
};
CppArray::~CppArray() {
  delete [] cpparr;
}
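Only the destructor is shown above; the remaining copy members might be filled in along these lines (a self-contained miniature consistent with the declaration, with assumed constructor defaults):

```cpp
#include <cstring>

// A miniature CppArray showing copy constructor and assignment alongside
// the destructor, so the class obeys the rule of three.
class CppArray {
public:
    CppArray(int size = 100, float value = 0.0f)
        : n(size), cpparr(new float[size]) {
        for (int i = 0; i < n; i++) cpparr[i] = value;
    }
    CppArray(const CppArray& o) : n(o.n), cpparr(new float[o.n]) {
        std::memcpy(cpparr, o.cpparr, n * sizeof(float));
    }
    CppArray& operator=(const CppArray& o) {
        if (this != &o) {                    // guard against self-assignment
            float* tmp = new float[o.n];
            std::memcpy(tmp, o.cpparr, o.n * sizeof(float));
            delete [] cpparr;
            cpparr = tmp; n = o.n;
        }
        return *this;
    }
    ~CppArray() { delete [] cpparr; }
    float& operator[](int i) { return cpparr[i]; }
    int GetSize() const { return n; }
private:
    int n;
    float* cpparr;
};
```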
2.3.1
ADT OrderedList is
objects: A set of pairs < index, value > where for each value of index there is a value from
the set item. index is a finite ordered set of one dimension.
functions:
for all A ∈ Array, x ∈ item, i, j, n ∈ integer
Create(size, item) : Array ::= declare an array of size size
and the array elements are of type item;n = 0;
Length(A) : integer ::= Length = n
Read(A) ::= for (int j = 1; j <= n; j++) cin >> A[j]
Retrieve(A, i) : item ::= Retrieve = A[i]
Store(A, i, x) ::= A[i] = x
Insert(A, i, x) ::= if (n < size) then
for (int j = n; j >= i; j--) A[j + 1] = A[j];
A[i] = x; n++;
else error;
Delete(A, i, x) ::= if (n == 0) then error
else for (int j = i; j < n; j++) A[j] = A[j + 1];
n--;
endOrderedList
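The shifting done by Insert and Delete can be sketched directly in C++ (illustrative function names; the ADT above is language-neutral):

```cpp
#include <vector>
#include <stdexcept>

// Ordered-list insertion and deletion by shifting, as in the ADT:
// Insert moves A[i..n-1] down one slot; Delete moves A[i+1..n-1] up one.
void ol_insert(std::vector<int>& A, int i, int x) {
    A.push_back(0);                          // grow by one slot
    for (int j = (int)A.size() - 2; j >= i; j--) A[j + 1] = A[j];
    A[i] = x;
}
void ol_delete(std::vector<int>& A, int i) {
    if (A.empty()) throw std::out_of_range("empty list");
    for (int j = i; j + 1 < (int)A.size(); j++) A[j] = A[j + 1];
    A.pop_back();
}
```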
2.3.4
istream& operator>>(istream& is, Polynomial& p)
{
  int numTerms;
  cout << "Enter number of terms in polynomial followed by (coef, exp) pairs"
       << endl;
  is >> numTerms;
  if (p.free + numTerms > MaxTerms) cout << "error" << endl;
  else {
    p.Start = p.free; p.Finish = p.Start + numTerms - 1;
    p.free = p.Finish + 1;
    for (int i = p.Start; i <= p.Finish; i++)
      is >> p.termArray[i].coef >> p.termArray[i].exp;
  }
  return is;
}
2.3.6
float Polynomial::eval(float x)
{
  float result = 0.0;
  int power = 0; float value = 1.0; // value = x^power
2.4.1
The elements are stored row by row, and every row is stored in increasing order of the
column indices. By doing this we have imposed an order. We can do a binary search on the
elements stored using this order to locate an arbitrary element. The time required for this
search will be O(log2 (terms)). Since changing a value takes a constant amount of time, the
total time is O(log2 (terms)).
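A sketch of that binary search over terms kept in row-major order (the struct and function names are ours):

```cpp
#include <vector>

struct Term { int row, col, value; };

// Binary search over terms sorted in row-major order; returns the index
// of entry (r, c), or -1 if that entry is zero (absent). O(log terms) time.
int find_term(const std::vector<Term>& t, int ncols, int r, int c) {
    long key = (long)r * ncols + c;          // row-major rank of (r, c)
    int lo = 0, hi = (int)t.size() - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        long k = (long)t[mid].row * ncols + t[mid].col;
        if (k == key) return mid;
        if (k < key) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}
```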
2.4.3
We will assume that the matrix is input in increasing order of the rows and that within
every row the elements are ordered by increasing order of their column indices. The output
has the same format as the input. The number of nonzero entries and matrix dimensions
are initially specified.
2.4.5
Line 3 verifies that the matrices are compatible. Line 4 calls FastTranspose to produce
matrix bXpose. Matrix bXpose is the transpose of matrix b. Every row of a is multiplied
with every row of bXpose to produce the corresponding row of result. Note that a row of
bXpose corresponds to a column of b.
Now let us see how the while loops function to determine the correctness. There are
two while loops, one starting at line 16 and the other at line 20. Each time the while loop
at line 16 is iterated one row of d gets stored. We start with currRowIndex = 0 and we
iterate until currRowIndex <= a.Terms. Every time we move to the next row in lines 53-54,
where we increment currRowIndex, and in line 55, where we assign currRowIndex to currRowBegin.
currRowBegin keeps track of where the current row starts. This variable is used within the
second while loop. The variable currRowA keeps track of row number we presently are using
to generate the matrix d. The value of currRowA is changed at lines 8 and 56 only, where
they are initialized and updated properly. Hence, the row numbers of the elements we are
generating are correct.
Having established that we are taking a single row and using that in each iteration of
the while loop starting at line 16, let us see what happens within the while loop at line 20.
The ideas used in the previous loop apply here too: We look at every row of matrix bXpose
(which is equivalent of looking at every column of matrix b) and then generate items. Before
entering the while loop of line 20, we initialize currColIndex to 0. Variable currColB points
to the column (of b) we are looking at. If the term we are looking at has a row value that is
greater than the present value (of currRowA), then we have seen all the elements of the row.
It does not matter whether we have seen the whole column; these items are going to be zero.
Having done this in lines 22-25, we have to make the row pointer point to the beginning of
the row which we do in line 26. Lines 28-29 make sure that the pointer currColIndex points
to the next column and the present column value is stored in currColB.
Storesum stores the element that has a value equal to sum in d[LastInResult], which
has a row value currRowA and a column value currColB only if sum is not equal to zero.
LastInResult is incremented in the process. Note that sum is assigned the value zero.
We may run out of column values before we run out of row values, in which case we have
to make currRowIndex point to currRowbegin and we must change the column value. This
is done in lines 37-38. Also, the sum is stored in the corresponding place by calling Storesum
in line 34.
When we scan the row and column simultaneously, the value of sum is changed only if
the column value of a and the row value of b (column value of bXpose) is the same. This is
checked in lines 44-45. If these values are the same, the sum is incremented and the pointers
currRowIndex and currColIndex are incremented. If we are looking at a column value of a
that is greater than the row value of b (column value of bXpose) we increment the pointer
currColIndex; otherwise we increment the pointer currRowIndex. These are done in lines 51
and 43, respectively.
2.4.7
(a) To represent t nonzero terms we need t words. To store the two-dimensional array bits
we need ⌈nm/w⌉ words. Total words needed = t + ⌈nm/w⌉.
(b) We assume that both matrices are compatible. Assume that each matrix has row, col
data members which contain the number of rows and columns, resp., of the matrix. bit[][] is
the bitmatrix and terms, the array of terms.
(c) We first consider the random access operation. Suppose we do not use a supplementary
array: to access A[i][j], we first check bits[i][j] in O(1) time. If bits[i][j] is 0, then A[i][j] is
0, and we are done. If not, we need to look up array v. To do this, we need to know the
position of v containing A[i][j]. This is obtained by counting the number of non-zero elements
that precede A[i][j] in row-major order. This requires a scan of the bits array, which can
consume, in the worst case, O(mn) time. Suppose we use a supplementary array as specified
in the problem statement: we immediately know the number of non-zero elements in rows
0 through i − 1. It remains to find the number of non-zero elements in positions 0 through
j − 1 in row i. This requires a scan of row i of the bits array, requiring, in the worst case,
O(n) time.
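A sketch of this random-access scheme: `bits` is the bit matrix, `v` holds the nonzero values in row-major order, and `rowcnt[i]` is the supplementary count of nonzeros in rows 0 through i − 1 (names are ours):

```cpp
#include <vector>

// Access A[i][j] given the bit matrix, the packed nonzero values v, and
// the supplementary prefix counts; the scan of row i costs O(n) worst case.
int sparse_get(const std::vector<std::vector<int>>& bits,
               const std::vector<int>& v,
               const std::vector<int>& rowcnt, int i, int j) {
    if (!bits[i][j]) return 0;               // zero entry: O(1)
    int pos = rowcnt[i];                     // nonzeros before row i
    for (int k = 0; k < j; k++)              // nonzeros before column j in row i
        pos += bits[i][k];
    return v[pos];
}
```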
Addition takes O(mn) time, as described above. Multiplication takes O((mn + np)p)
time using an additional O(n) space: suppose we need C = A·B, where A is m × n and
B is n × p. The idea is to multiply the rows of A with the first column of B, using an additional
O(n) space to store the elements of the column. These elements are stored as the first row of
C. This is repeated with all columns of B. Finally, C is transposed to get the correct matrix.
The representation of Section 2.4 requires 3t words. It is better when the number of
nonzero elements in the matrix (t) is very small and the size of the matrix (mn) is large. If
t is large, the representation used in this exercise takes less space. Random access time is
O(log t) with the representation of Section 2.4 (Exercise 2.4.1). Generally, when t is much
less than mn, the representation of Section 2.4 will outperform the representation in this
exercise. This can be intuitively understood from the fact that we do not have to look into a
large two-dimensional array for any of the operations. The exact analysis and the breakeven
points cannot be determined unless we have the results of timing experiments.
2.5.2
In array a[0..n] we can store n + 1 values. Array b[−1..n, 1..m] can hold m(n + 2) values,
whereas array c[−n..0, 1..2] can hold 2(n + 1) = 2n + 2 values.
2.5.3
where aj = Π_{k=j+1..n} (uk + 1) for 1 ≤ j < n, and an = 1.
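The multipliers aj and the resulting row-major offset can be computed directly (a sketch with our own names; index j runs over 0..u[j], so dimension j holds u[j] + 1 values):

```cpp
#include <vector>

// Row-major offset of index (i_1, ..., i_n) in an array whose j-th index
// ranges over 0..u[j]:  offset = sum_j i[j] * a[j],  where
// a[j] = prod_{k>j} (u[k] + 1) and the last multiplier is 1.
long rowmajor_offset(const std::vector<int>& i, const std::vector<int>& u) {
    int n = (int)u.size();
    long a = 1, off = 0;
    for (int j = n - 1; j >= 0; j--) {       // build multipliers right to left
        off += i[j] * a;
        a *= (u[j] + 1);
    }
    return off;
}
```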
2.6.1
void String::Frequency()
// Assume only characters a to z are allowed in the string
{
  int freq[26]; char temp = 'a';
  for (int i = 0; i < 26; i++) freq[i] = 0;
  for (i = 0; i < length; i++) freq[str[i] - 'a']++;
  for (i = 0; i < 26; i++) {
    cout << temp << ": " << freq[i];
    temp++;
  }
  cout << endl;
}
2.6.3
String String::CharDelete(char c)
{
  String delstring;
  int count = 0;
  for (int i = 0; i < length; i++)
    if (str[i] == c) count++;
  delstring.length = length - count;
  delstring.str = new char[length - count];
  int j = 0;
  for (i = 0; i < length; i++)
    if (str[i] != c) {
      delstring.str[j] = str[i];
      j++;
    }
  return delstring;
}
2.6.5
if (m > n) return 1;
else if (m < n) return -1;
else return 0;
}
2.6.7
(a)
j 0 1 2 3 4
pattern a a a a b
f −1 0 1 2 −1
(b)
j 0 1 2 3 4 5
pattern a b a b a a
f −1 −1 0 1 2 0
(c)
j 0 1 2 3 4 5 6 7 8
pattern a b a a b a a b b
f −1 −1 0 0 1 2 3 4 −1
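These tables follow the book's convention f(j) = (length of the longest proper prefix of pat[0..j] that is also a suffix) − 1, with −1 meaning no such prefix. A sketch that computes them:

```cpp
#include <string>
#include <vector>

// Failure function with f(j) = border length of pat[0..j] minus 1,
// so "no border" is encoded as -1 (the book's convention).
std::vector<int> failure(const std::string& p) {
    int n = (int)p.size();
    std::vector<int> f(n, -1);
    for (int j = 1; j < n; j++) {
        int i = f[j - 1];
        while (i >= 0 && p[j] != p[i + 1]) i = f[i];  // fall back along borders
        f[j] = (p[j] == p[i + 1]) ? i + 1 : -1;
    }
    return f;
}
```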
2.6.9
(a)
j 0 1 2 3 4 5 6 7 8 9
pat a b c a b c a c a b
f −1 −1 −1 −1 −1 −1 3 −1 −1 1
(b)
As before, if the character at the pattern matches the text, we increment PosP and
PosS (line 8). Consider the case when the pattern fails to match. Let PosP initially be i
and let its new position be j. If pat.str[i] = pat.str[j], we are going to fail again because
s.str[PosS] ≠ pat.str[i] and therefore s.str[PosS] ≠ pat.str[j]. The strengthening condition
prevents failure in this case.
(c)
void String::FailureFunction2()
{
  int LengthP = Length();
  int *temp = new int[LengthP];
  temp[0] = -1;
  for (int j = 1; j < LengthP; j++)
  {
    int i = temp[j-1];
    while ((i >= 0) && (*(str+j) != *(str+i+1))) i = temp[i];
    if (*(str+j) == *(str+i+1)) temp[j] = i + 1;
    else temp[j] = -1;
  }
  f[0] = -1;
  for (j = 1; j < LengthP; j++)
  {
    int i = temp[j];
    while (i >= 0 && *(str+j+1) == *(str+i+1)) i = temp[i];
    if (*(str+j+1) != *(str+i+1)) f[j] = i;
    else f[j] = -1;
  }
  delete [] temp;
}
It is easy to see from the bounds of the two for loops that the computing time remains
O(LengthP).
(d) With the new definition of f, we have strengthened the failure function. Therefore, with
the new definition FastFind cannot take more time to run than it did with the old definition;
it can take less time. For example, take the pattern abcabcabcabcabc. Suppose part of the
text is abcabcabcabcabx. With the modified definition of f, when we fail to match x with
c the failure function is −1, whereas with the old definition we have to match four times
within the pattern to arrive at the same result. Considering that the whole text contains the
piece of text shown above, the run time of the algorithm FastFind with the new definition
for f is considerably faster than with the old definition.
2.8.1
2.8.2
class Matrix {
private:
  int m, n;
  int **a;
public:
  int Saddle();
};
int Matrix::Saddle()
{
  int *aux = new int[n]; // auxiliary array for storing the max element in each col
  for (int j = 0; j < n; j++) { // find max element in column j
    aux[j] = a[0][j]; // and store in aux[j]
    for (int i = 1; i < m; i++)
      if (aux[j] < a[i][j]) aux[j] = a[i][j];
  }
  for (int i = 0; i < m; i++) { // find min element in row i and store in min
    int min = a[i][0]; // minpos is the column number of the min element
    int minpos = 0;
    for (int j = 1; j < n; j++)
      if (min > a[i][j]) {
        min = a[i][j];
        minpos = j;
      }
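The listing breaks off here; the remaining step presumably compares each row minimum with the column maxima. A self-contained sketch of the whole routine (our own signature and sentinel, not the book's class):

```cpp
#include <vector>

// A saddle point is a row minimum that is also its column's maximum.
// Returns the saddle value, or -1 if none exists (assumes entries >= 0).
int saddle(const std::vector<std::vector<int>>& a) {
    int m = (int)a.size(), n = (int)a[0].size();
    std::vector<int> aux(n);                     // column maxima
    for (int j = 0; j < n; j++) {
        aux[j] = a[0][j];
        for (int i = 1; i < m; i++)
            if (aux[j] < a[i][j]) aux[j] = a[i][j];
    }
    for (int i = 0; i < m; i++) {
        int min = a[i][0], minpos = 0;
        for (int j = 1; j < n; j++)
            if (min > a[i][j]) { min = a[i][j]; minpos = j; }
        if (min == aux[minpos]) return min;      // row min equals column max
    }
    return -1;                                   // no saddle point
}
```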
2.8.4
The storage scheme is given in the hint. We will store the lower triangular matrix A as
the lower triangle of c, and the lower triangular matrix B is transposed (it then becomes
upper triangular) and is stored as the upper triangular matrix of c.
The storage scheme then is
Algorithm retrieve(Matrix,i, j)
if (Matrix == A) return c[i][j]
else return c[j][i + 1]
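A sketch of the whole scheme, packing both n × n lower-triangular matrices into one n × (n + 1) array c (the helper names are ours):

```cpp
#include <vector>

// Pack lower-triangular A into the lower triangle of c, and B transposed
// into the strict upper triangle (columns 1..n), so retrieval is:
//   A[i][j] -> c[i][j],   B[i][j] -> c[j][i + 1]   (valid for j <= i).
struct Packed {
    std::vector<std::vector<int>> c;
    explicit Packed(int n) : c(n, std::vector<int>(n + 1, 0)) {}
    void storeA(int i, int j, int v) { c[i][j] = v; }
    void storeB(int i, int j, int v) { c[j][i + 1] = v; }
    int A(int i, int j) const { return c[i][j]; }
    int B(int i, int j) const { return c[j][i + 1]; }
};
```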
2.8.6
(a) The main diagonal contains n elements. Let us count the number of elements in the lower
diagonals(excluding the main diagonal). The first lower diagonal contains n − 1 elements,
and the second lower diagonal contains n − 2 elements. Generalizing, we can say that the
(a − 1)th diagonal contains n − (a − 1) elements, which is n − a + 1 elements. Hence, the
total number of elements in the lower diagonals = (n − 1) + (n − 2) + . . . + (n − a + 1) =
n(n−1)/2−(n−a)(n−a+1)/2. The total number of elements in the diagonals (excluding the
main diagonal) is = 2[n(n − 1)/2 − (n − a)(n − a + 1)/2], which is n(n − 1) − (n − a)(n − a + 1),
which is simply (a − 1)(2n − a). Hence, the total number of elements is n + (a − 1)(2n − a).
Note that the term n accounts for the n elements of the main diagonal.
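The count n + (a − 1)(2n − a) can be checked by brute force over all positions with |i − j| ≤ a − 1 (a small sketch):

```cpp
// Count the entries of an n x n band matrix with bandwidth a by direct
// enumeration; the closed form above gives n + (a - 1)(2n - a).
int band_count(int n, int a) {
    int count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (i - j < a && j - i < a)      // within a - 1 of the diagonal
                count++;
    return count;
}
```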
(c) There are three cases, as mentioned in (b). If (i = j), then all the lower diagonal elements
are stored first. The number of lower diagonal elements = (a−1)(2n−a)/2. Now, the address
of the element A[i][j] when (i = j) is i + [(a − 1)(2n − a)/2] − 1.
When (i > j) the element lies in one of the lower diagonals. If (i − j = k), for example,
the element lies in the kth lower diagonal. The (a−1)th diagonal contains n−a+1 elements.
All the elements in the diagonals (a − 1) through (k + 1) will be stored before the elements
in the kth lower diagonal are stored. In other words, (n − a + 1) + . . . + (n − k − 1) elements
are stored earlier. That is, (a − 1 − k) ∗ (2n − a − k)/2 elements. Hence, the address of the
element A[i][j] is j + (a − 1 − k) ∗ (2n − a − k)/2 − 1.
Now we have to get the address of the elements when (i < j), that is, of the elements
that reside in the upper diagonal. All the lower diagonal elements and the main diagonal
elements will be stored before the upper diagonal elements are stored. That will be n +
[(a − 1)(2n − a)]/2 elements. In the upper diagonal, the first upper diagonal contains n − 1
elements, the second n − 2 elements, and so on. If the element is in the kth upper diagonal,
then all elements from the first upper diagonal to the (k − 1)th upper diagonal are stored
before the kth diagonal element. That is, (n − 1) + . . . + (n − k + 1) elements from the
upper diagonal are stored before the kth upper diagonal elements. This is (k − 1)(2n − k)/2
elements. Hence, the total number of elements stored before the kth upper diagonal elements
is n + (a − 1)(2n − a)/2 + (k − 1)(2n − k)/2. Also, the elements in the upper diagonal are
stored in increasing order of their row number. Hence the address of A[i][j] is i + n + (a −
1)(2n − a)/2 + (k − 1)(2n − k)/2 − 1.
We can combine these three cases and obtain the addressing formula for A[i][j] as
Address of A[i][j] = i + [(a − 1)(2n − a)/2] − 1, if i = j
= j + (a − 1 − k) ∗ (2n − a − k)/2 − 1, if i − j = k and k ≤ a − 1
= i + n + (a − 1)(2n − a)/2 + (k − 1)(2n − k)/2 − 1, if j − i = k and k ≤ a − 1
Chapter 3
STACKS AND QUEUES
3.3.1
3.3.6
(b) The kth element of the list is at position (front + k) % capacity. Since the kth element
is deleted, all the elements residing in positions (front + k + 1) % capacity through rear should
be moved up by one position.
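A sketch of that deletion on a circular queue (front points one position before the first element, matching the text's convention; the function name is ours):

```cpp
#include <vector>

// Delete the k-th element (k = 1 is the front element) by shifting every
// element after position (front + k) % capacity up one slot.
void delete_kth(std::vector<int>& q, int& front, int& rear, int k) {
    int cap = (int)q.size();
    int pos = (front + k) % cap;
    while (pos != rear) {                     // shift successors up by one
        int next = (pos + 1) % cap;
        q[pos] = q[next];
        pos = next;
    }
    rear = (rear - 1 + cap) % cap;            // queue shrinks by one
}
```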
3.4.1
#include <iostream.h>
int DefaultSize = 100;
template <class T>
class Bag {
friend ostream& operator<<(ostream&, Bag<T>&);
public:
  Bag (int bagCapacity = DefaultSize); // constructor
  virtual ~Bag(); // destructor
protected:
  T *array;
  int capacity; // size of array
  int top; // highest position in array that contains an element
};
3.4.4 (a) Yes. All rectangles are trapeziums. (b) No. (c) No. (d,e) Yes. Stacks and queues
are both ordered lists.
3.6.1
(a) AB * C*
(c) ABu * C+
(d) AB + D * EF AD * +/ + C+
3.6.3
(a)
* * ABC
+ - + uABCD, where u represents the unary minus
+ * AuBC
+ + * + ABD/E + F * ADC
or or and ABC not > EF
or not and A not or < BC > CD < CE
(b)
void eval(expression e)
// Evaluate the prefix expression e. It is assumed that the first token
// in e is '#'. A procedure PrevToken is used to extract from e the next
// token. We assume that we set the pointer to the last token in e initially.
// A one-dimensional array stack is used as a stack
{
  Stack<token> stack; // initialize stack
  for (token x = LastToken(e); x != '#'; x = PrevToken(e))
    if (x is an operand) stack.Push(x); // add to stack
    else { // operator
      remove the correct operands for operator x from the stack;
      perform the operation x and push the result onto the stack;
    }
} // end of eval
(c) This is symmetric to the code of Program 3.19 in the text. The main difference is that we
scan the infix expression from right to left. We define isp[')'] = 8, icp[')'] = 0, and isp['#'] = 8.
Also, since we cannot print from right to left, we use a stack outstack to reverse the output.
void InfixToPrefix(Expression e)
{
  // the main conversion loop parallels Program 3.19, scanning right to left
  for (; !stack.IsEmpty(); stack.Pop()) outStack.Push(stack.Top());
  for (; !outStack.IsEmpty(); outStack.Pop()) cout << outStack.Top();
} // end of InfixToPrefix
(b) and (c) have O(n) time complexity and O(n) space complexity.
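A compact self-contained version of this right-to-left scheme, for single-letter operands and the binary operators + − * / (our own function, not the book's Program 3.19 skeleton):

```cpp
#include <string>
#include <stack>
#include <cctype>
#include <algorithm>

// Infix to prefix: reverse the expression (swapping parentheses), run a
// postfix-style conversion on the reversed string, then reverse the result.
std::string infix_to_prefix(const std::string& infix) {
    std::string rev(infix.rbegin(), infix.rend());
    for (char& ch : rev)
        if (ch == '(') ch = ')'; else if (ch == ')') ch = '(';
    auto prec = [](char op) { return (op == '+' || op == '-') ? 1
                                   : (op == '*' || op == '/') ? 2 : 0; };
    std::string out;
    std::stack<char> st;
    for (char ch : rev) {
        if (std::isalnum((unsigned char)ch)) out += ch;
        else if (ch == '(') st.push(ch);
        else if (ch == ')') {
            while (st.top() != '(') { out += st.top(); st.pop(); }
            st.pop();                         // discard '('
        } else {                              // operator
            // strict > keeps left-associativity correct on the reversed scan
            while (!st.empty() && prec(st.top()) > prec(ch)) {
                out += st.top(); st.pop();
            }
            st.push(ch);
        }
    }
    while (!st.empty()) { out += st.top(); st.pop(); }
    std::reverse(out.begin(), out.end());
    return out;
}
```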
3.6.5
We will assume that the postfix expression ends with a #. We use two stacks, S1 and
S2, and a special flag that does not appear in the expression. The flag marks the end of an
operand. When we pop an operand from stack S1, we will need this flag to tell us that
we have reached the end of the operand. This is necessary because an operand could itself
be an expression consisting of several characters. For simplicity, we assume that only binary
operators are used.
void PostfixToPrefix(expression e)
{
Stack<token> S1, S2;
token y;
// The idea is to pop from the main stack and push into
// the temporary stack until two flags are encountered.
S1.Push(flag); // put flag back onto S1
S1.Push(x); // put operator on S1
} // end of PostfixToPrefix
This algorithm takes time O(n2 ). The space taken is O(n). By making use of the list
representation, discussed in Chapter 4, we could make the run time O(n).
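Using strings on the stack (in the spirit of the list representation just mentioned), the conversion condenses to a short sketch for binary operators and one-letter operands (our own function, not the manual's two-stack routine):

```cpp
#include <string>
#include <stack>
#include <cctype>

// Postfix to prefix: operands push themselves; an operator pops its two
// operands and pushes  op + left + right  back onto the stack.
std::string postfix_to_prefix(const std::string& e) {
    std::stack<std::string> s;
    for (char ch : e) {
        if (std::isalnum((unsigned char)ch)) s.push(std::string(1, ch));
        else {
            std::string right = s.top(); s.pop();
            std::string left = s.top(); s.pop();
            s.push(std::string(1, ch) + left + right);
        }
    }
    return s.top();
}
```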
3.6.7
We will use two stacks S1 and S2. We will scan e from right to left. Assume that the
leftmost token is ’#’. The idea used in 3.6.5 also applies to this algorithm. Assume binary
operators.
Pre2fpIn(expression e)
{
  Stack<token> S1, S2;
  token y;
  S1.Push(flag);
  // transfer entire expression from S2 to S1
  while (!S2.IsEmpty()) {
    S1.Push(S2.Top());
    S2.Pop();
  }
}
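The fragment above omits the core loop. The same two-stack idea can be condensed into one string stack scanned right to left (a sketch with fully parenthesized output; the function is ours):

```cpp
#include <string>
#include <stack>
#include <cctype>

// Prefix to infix: scanning right to left, operands push themselves; an
// operator pops left then right and pushes "(left op right)".
std::string prefix_to_infix(const std::string& e) {
    std::stack<std::string> s;
    for (auto it = e.rbegin(); it != e.rend(); ++it) {
        char ch = *it;
        if (std::isalnum((unsigned char)ch)) s.push(std::string(1, ch));
        else {
            std::string left = s.top(); s.pop();
            std::string right = s.top(); s.pop();
            s.push("(" + left + std::string(1, ch) + right + ")");
        }
    }
    return s.top();
}
```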