The bubble sort makes multiple passes through a list. It compares adjacent items
and exchanges those that are out of order. Each pass through the list places the next
largest value in its proper place. In essence, each item bubbles up to the location where
it belongs.
Figure 1
The exchange operation, sometimes called a swap, uses temporary storage to exchange the ith and jth items in the list. Without the temporary storage, one of the
values would be overwritten.
Figure 2
In Python, it is possible to perform simultaneous assignment, so the exchange can be done in one statement rather than the three-step procedure with a temporary variable; we could also have used the simultaneous assignment to swap the items.
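The two exchange styles can be compared directly; this small sketch shows the three-step procedure and Python's simultaneous assignment on a sample list (the list values are ours):

```python
# Exchange using a temporary variable (the three-step procedure).
alist = [54, 26, 93]
temp = alist[0]
alist[0] = alist[2]
alist[2] = temp
assert alist == [93, 26, 54]

# Exchange using Python's simultaneous assignment: one statement, no temp.
blist = [54, 26, 93]
blist[0], blist[2] = blist[2], blist[0]
assert blist == [93, 26, 54]
```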
Because the bubble sort passes through the entire unsorted portion of the list, it can do something most sorting algorithms cannot. In particular, if during a pass there are no exchanges, then we know that the list must be sorted. A bubble sort can therefore be modified to stop early if it finds that the list has become sorted. This means that for lists that require just a few passes, a bubble sort may have an advantage in that it will recognize the sorted list and stop. The code below shows this modification, which is often referred to as the short bubble.
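A short bubble sort along these lines might look like the following sketch (the function name is ours):

```python
def short_bubble_sort(alist):
    """Bubble sort that stops early when a pass makes no exchanges."""
    exchanges = True
    passnum = len(alist) - 1
    while passnum > 0 and exchanges:
        exchanges = False
        for i in range(passnum):
            if alist[i] > alist[i + 1]:
                exchanges = True
                alist[i], alist[i + 1] = alist[i + 1], alist[i]
        passnum -= 1

# A nearly sorted list: one pass fixes it, a second pass detects it is sorted.
data = [20, 30, 40, 90, 50, 60, 70, 80, 100, 110]
short_bubble_sort(data)
```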
The selection sort improves on the bubble sort by making only one exchange for every pass through the list. As the process continues, on each pass the largest remaining item is selected and then placed in its proper location (Figure 3). The first pass places 93, the second pass places 77, the third places 55, and so on. The function is shown in the code sample below.
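The code sample itself is not reproduced here; a Python sketch of such a selection sort (names are ours) is:

```python
def selection_sort(alist):
    """On each pass, select the largest remaining item and place it at the
    end of the unsorted portion of the list."""
    for fillslot in range(len(alist) - 1, 0, -1):
        position_of_max = 0
        for location in range(1, fillslot + 1):
            if alist[location] > alist[position_of_max]:
                position_of_max = location
        # One exchange per pass: largest remaining item moves into place.
        alist[fillslot], alist[position_of_max] = (
            alist[position_of_max], alist[fillslot])

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
selection_sort(alist)
```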
The insertion sort always maintains a sorted sublist in the lower positions of the list. We begin by assuming that a list with one item (position 0) is already sorted. On each pass, one for each item 1 through n − 1, the current item is checked against those in the already sorted sublist (Figure 4).
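The insertion step can be sketched in Python (names are ours); the inner while-loop here plays the role of the while-loop whose test count tj drives the running-time analysis below:

```python
def insertion_sort(alist):
    """Insert each item into the sorted sublist to its left."""
    for index in range(1, len(alist)):
        current_value = alist[index]
        position = index
        # Shift larger items in the sorted sublist one position right.
        while position > 0 and alist[position - 1] > current_value:
            alist[position] = alist[position - 1]
            position -= 1
        alist[position] = current_value

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
insertion_sort(alist)
```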
Best-Case
The best case occurs if the array is already sorted, so the while-loop test is executed only once for each j. Therefore, tj = 1 for j = 2, 3, ..., n and the best-case running time can be computed using equation (1) as
follows:
T(n) = c1n + c2(n − 1) + c4(n − 1) + c5 Σ_{j=2..n} 1 + c6 Σ_{j=2..n} (1 − 1) + c7 Σ_{j=2..n} (1 − 1) + c8(n − 1)
T(n) = c1n + c2(n − 1) + c4(n − 1) + c5(n − 1) + c8(n − 1)
T(n) = (c1 + c2 + c4 + c5 + c8) n − (c2 + c4 + c5 + c8)
This running time can be expressed as an + b for constants a and b that depend on the statement costs ci.
Therefore, T(n) is a linear function of n.
The punch line here is that the while-loop in line 5 executes only once for each j. This happens if the given array A is already sorted.
T(n) = an + b = O(n)
It is a linear function of n.
Worst-Case
The worst case occurs if the array is sorted in reverse, i.e., in decreasing order. In reverse order,
we always find that A[i] is greater than the key in the while-loop test, so we must compare each element
A[j] with every element in the entire sorted subarray A[1 .. j − 1], and so tj = j for j = 2, 3, ..., n. Equivalently,
we can say that since the while-loop exits only because i reaches 0, there is one additional test after (j − 1)
tests. Therefore, tj = j for j = 2, 3, ..., n and the worst-case running time can be computed using equation (1)
as follows:
T(n) = c1n + c2(n − 1) + c4(n − 1) + c5 Σ_{j=2..n} j + c6 Σ_{j=2..n} (j − 1) + c7 Σ_{j=2..n} (j − 1) + c8(n − 1)
And using the summations in CLRS on page 27, we have
T(n) = c1n + c2(n − 1) + c4(n − 1) + c5 [n(n + 1)/2 − 1] + c6 [n(n − 1)/2] + c7 [n(n − 1)/2] + c8(n − 1)
T(n) = (c5/2 + c6/2 + c7/2) n^2 + (c1 + c2 + c4 + c5/2 − c6/2 − c7/2 + c8) n − (c2 + c4 + c5 + c8)
This running time can be expressed as (an^2 + bn + c) for constants a, b, and c that again depend on the
statement costs ci. Therefore, T(n) is a quadratic function of n.
Here the punch line is that the worst case occurs when line 5 executes j times for each j. This happens
if array A starts out in reverse order.
T(n) = an^2 + bn + c = O(n^2)
It is a quadratic function of n.
For insertion sort we say the worst-case running time is Θ(n^2), and the best-case
running time is Θ(n).
The running time of insertion sort depends on the original order of the input. It takes
Θ(n^2) time in the worst case, despite the fact that time on the order of n is sufficient to
solve large instances in which the items are already sorted.
The shell sort improves on the insertion sort by breaking the original list into a number of smaller sublists, each of which is sorted using an insertion sort. The unique way these sublists are chosen is the key to the shell sort. Instead of breaking the list into sublists of contiguous items, the
shell sort uses an increment i, sometimes called the gap, to create a sublist by choosing all
items that are i items apart.
This can be seen in Figure 6. This list has nine items. If we use an increment of
three, there are three sublists, each of which can be sorted by an insertion sort. After
completing these sorts, we get the list shown in Figure 7. Although this list is not
completely sorted, by sorting the sublists we have moved the items closer to where they actually belong.
Figure 8 shows a final insertion sort using an increment of one; in other words, a
standard insertion sort. Note that by performing the earlier sublist sorts, we have now
reduced the total number of shifting operations necessary to put the list in its final order.
For this case, we need only four more shifts to complete the process.
Figure 8
Experience shows that for mid-sized data (tens of thousands of elements) the shell sort
performs nearly as well as, and sometimes better than, the faster n log n sorts.
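A Python sketch of the shell sort (names are ours; halving the gap each round is one common increment choice, not the only one):

```python
def gap_insertion_sort(alist, start, gap):
    """Insertion sort over the sublist alist[start::gap]."""
    for i in range(start + gap, len(alist), gap):
        current_value = alist[i]
        position = i
        while position >= gap and alist[position - gap] > current_value:
            alist[position] = alist[position - gap]
            position -= gap
        alist[position] = current_value

def shell_sort(alist):
    """Sort sublists of items that are `gap` apart, halving the gap each
    round; the final round (gap 1) is a standard insertion sort."""
    gap = len(alist) // 2
    while gap > 0:
        for start in range(gap):
            gap_insertion_sort(alist, start, gap)
        gap //= 2

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
shell_sort(alist)
```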
Heapsort can be thought of as an improved selection sort: the improvement consists of the use of a heap data structure rather than a linear-time search to find the
maximum.
Although somewhat slower in practice on most machines than a well-implemented quicksort, it has the advantage of a more favorable worst-case O(n log n)
runtime. Heapsort is an in-place algorithm, but it is not a stable sort.
Heapsort was invented by J. W. J. Williams in 1964. This was also the birth of the
heap, presented already by Williams as a useful data structure in its own right. In the
same year, R. W. Floyd published an improved version that could sort an array in-place,
continuing his earlier research into the treesort algorithm.
The heapsort algorithm involves preparing the list by first turning it into a max
heap. The algorithm then repeatedly swaps the first value of the list with the last value,
decreasing the range of values considered in the heap operation by one, and sifting the
new first value into its position in the heap. This repeats until the range of considered
values is one value in length.
The steps are:
1. Call the buildMaxHeap() function on the list. Also referred to as heapify(), this
builds a heap from a list in O(n) operations.
2. Swap the first element of the list with the final element. Decrease the considered
range of the list by one.
3. Call the siftDown() function on the list to sift the new first element to its
appropriate index in the heap.
4. Go to step (2) unless the considered range of the list is one element.
The heapsort algorithm starts by using BUILD-HEAP to build a heap on the
input array A[1 . . n], where n = length[A]. Since the maximum element of
the array is stored at the root A[1], it can be put into its correct final position
by exchanging it with A[n]. If we now "discard" node n from the heap (by
decrementing heap-size[A]), we observe that A[1 . . (n - 1)] can easily be
made into a heap. The children of the root remain heaps, but the new root
element may violate the heap property (7.1). All that is needed to restore the
heap property, however, is one call to HEAPIFY(A, 1), which leaves a heap in
A[1 . . (n - 1)]. The heapsort algorithm then repeats this process for the heap
of size n - 1 down to a heap of size 2.
HEAPSORT(A)
1  BUILD-HEAP(A)
2  for i ← length[A] downto 2
3      do exchange A[1] ↔ A[i]
4         heap-size[A] ← heap-size[A] − 1
5         HEAPIFY(A, 1)
Heapsort is not a stable sort, and it requires only constant extra space for sorting a list.
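Following the steps above, a Python sketch of heapsort (function names are ours; `sift_down` plays the role of HEAPIFY):

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at `start`,
    considering only a[:end]."""
    root = start
    while 2 * root + 1 < end:
        child = 2 * root + 1          # left child
        if child + 1 < end and a[child] < a[child + 1]:
            child += 1                # right child is larger
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child
        else:
            return

def heapsort(a):
    """In-place heapsort: build a max heap, then repeatedly move the root
    to the end and shrink the considered range by one."""
    n = len(a)
    # buildMaxHeap: sift down every non-leaf node, O(n) overall.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(a, start, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]   # largest item to its final position
        sift_down(a, 0, end)

data = [54, 26, 93, 17, 77, 31, 44, 55, 20]
heapsort(data)
```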
We now turn our attention to using a divide and conquer strategy as a way to
improve the performance of sorting algorithms. The first algorithm we will study is the
merge sort. Merge sort is a recursive algorithm that continually splits a list in half. If the
list is empty or has one item, it is sorted by definition (the base case). If the list has more
than one item, we split the list and recursively invoke a merge sort on both halves. Once
the two halves are sorted, the fundamental operation, called a merge, is performed.
Merging is the process of taking two smaller sorted lists and combining them together
into a single, sorted, new list. Figure 9 shows our familiar example list as it is being split
by merge sort. Figure 10 shows the simple lists, now sorted, as they are merged back
together.
Figure 9
Figure 10
Let T(N) be the running time of merge sort on a list of size N. The recurrence is:
(1) T(1) = 1
(2) T(N) = 2T(N/2) + N
Next we will solve this recurrence relation. First we divide (2) by N:
(3)T(N)/N=T(N/2)/(N/2)+1
N is a power of two, so we can write
(4)T(N/2)/(N/2)=T(N/4)/(N/4)+1
(5)T(N/4)/(N/4)=T(N/8)/(N/8)+1
(6)T(N/8)/(N/8)=T(N/16)/(N/16)+1
(7) ... (continuing until the argument reaches 2)
(8)T(2)/2=T(1)/1+1
Now we add equations (3) through (8) : the sum of their left-hand sides
will be equal to the sum of their right-hand sides:
T(N)/N + T(N/2)/(N/2) + T(N/4)/(N/4) + ... + T(2)/2 =
T(N/2)/(N/2) + T(N/4)/(N/4) + ... + T(2)/2 + T(1)/1 + log N
(log N is the sum of the 1s on the right-hand sides, one from each of the log N equations)
After cancelling the terms that appear on both sides, we get
(9) T(N)/N = T(1)/1 + log N
T(1) is 1, hence we obtain
(10) T(N) = N + N log N = O(N log N)
Hence the complexity of the merge sort algorithm is O(N log N).
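The splitting and merging just analyzed can be sketched in Python (names are ours):

```python
def merge_sort(alist):
    """Recursively split the list in half, sort each half, then merge."""
    if len(alist) <= 1:
        return                        # base case: already sorted
    mid = len(alist) // 2
    left = alist[:mid]
    right = alist[mid:]
    merge_sort(left)
    merge_sort(right)
    # Merge the two sorted halves back into alist.
    i = j = k = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            alist[k] = left[i]
            i += 1
        else:
            alist[k] = right[j]
            j += 1
        k += 1
    while i < len(left):
        alist[k] = left[i]
        i += 1
        k += 1
    while j < len(right):
        alist[k] = right[j]
        j += 1
        k += 1

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
merge_sort(alist)
```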
The pivot value will eventually reside at the split point, the position currently holding 31. The partition process happens next: it finds the split
point and at the same time moves the other items to the appropriate side of the list, either less
than or greater than the pivot value.
Figure 11
Partitioning begins by locating two position markers, let's call them leftmark and rightmark, at the beginning and end of the remaining items in the list (positions 1 and 8 in Figure 13). The goal of the partition process is to move items that are on the wrong side with respect to the pivot value while also converging on the split point. Figure 12 shows this process as we locate the position of 54.
Figure 12
We begin by incrementing leftmark until we locate a value that is greater than the
pivot value. We then decrement rightmark until we find a value that is less than the pivot
value. At this point we have discovered two items that are out of place with respect to the
eventual split point. For our example, this occurs at 93 and 20. Now we can exchange
these two items and then repeat the process again.
At the point where rightmark becomes less than leftmark, we stop. The position
of rightmark is now the split point. The pivot value can be exchanged with the contents of
the split point and the pivot value is now in place (Figure 13). In addition, all the items to
the left of the split point are less than the pivot value, and all the items to the right of the
split point are greater than the pivot value. The list can now be divided at the split point
and the quick sort can be invoked recursively on the two halves.
Figure
13
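The partition scheme just described can be sketched in Python (names are ours):

```python
def partition(alist, first, last):
    """Partition alist[first..last] around the first item and return the
    split point, using the leftmark/rightmark scheme."""
    pivot_value = alist[first]
    leftmark = first + 1
    rightmark = last
    done = False
    while not done:
        # Increment leftmark until a value greater than the pivot is found.
        while leftmark <= rightmark and alist[leftmark] <= pivot_value:
            leftmark += 1
        # Decrement rightmark until a value less than the pivot is found.
        while rightmark >= leftmark and alist[rightmark] >= pivot_value:
            rightmark -= 1
        if rightmark < leftmark:
            done = True               # markers have crossed: split point found
        else:
            alist[leftmark], alist[rightmark] = (
                alist[rightmark], alist[leftmark])
    # rightmark is the split point; put the pivot value in place.
    alist[first], alist[rightmark] = alist[rightmark], alist[first]
    return rightmark

def quick_sort(alist, first=0, last=None):
    if last is None:
        last = len(alist) - 1
    if first < last:
        split_point = partition(alist, first, last)
        quick_sort(alist, first, split_point - 1)
        quick_sort(alist, split_point + 1, last)

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
quick_sort(alist)
```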
If the partition always occurs near the middle of the list, the quick sort performs in O(n log n); in the worst case it degrades to O(n^2).
Unfortunately, in the worst case the split points may not be in the middle and can
be very skewed to the left or the right, leaving a very uneven division. In this case,
sorting a list of n items divides into sorting a list of 0 items and a list of n − 1 items. Then
sorting a list of n − 1 divides into a list of size 0 and a list of size n − 2, and so on. The result
is an O(n^2) sort with all of the overhead that recursion requires.
Alpha-beta pruning
Aperiodic graph
Barabási-Albert model
Belief propagation
Bellman-Ford algorithm
Bianconi-Barabási model
Bidirectional search
Borůvka's algorithm
Bottleneck traveling salesman problem
Breadth-first search
Bron-Kerbosch algorithm
Centrality
Chaitin's algorithm
Christofides algorithm
Clique percolation method
Closure problem
Color-coding
Contraction hierarchies
Courcelle's theorem
Cuthill-McKee algorithm
Depth-first search
Dijkstra's algorithm
Dijkstra-Scholten algorithm
Dinic's algorithm
Disparity filter algorithm of weighted network
Double pushout graph rewriting
Dulmage-Mendelsohn decomposition
Dynamic connectivity
Edmonds' algorithm
Blossom algorithm
Edmonds-Karp algorithm
Euler tour technique
FKT algorithm
Flooding algorithm
Flow network
Floyd-Warshall algorithm
Force-directed graph drawing
Ford-Fulkerson algorithm
Fringe search
Girvan-Newman algorithm
Goal node (computer science)
Gomory-Hu tree
Graph bandwidth
Graph edit distance
Graph embedding
Graph isomorphism
Graph isomorphism problem
Graph kernel
Graph reduction
Graph traversal
Havel-Hakimi algorithm
Hierarchical closeness
Hierarchical clustering of networks
Hopcroft-Karp algorithm
Iterative deepening
Initial attractiveness
Iterative compression
Iterative deepening depth-first search
Johnson's algorithm
Journal of Graph Algorithms and Applications
Jump point search
Junction tree algorithm
K shortest path routing
Karger's algorithm
Kleitman-Wang algorithms
Knight's tour
Kosaraju's algorithm
Kruskal's algorithm
Lexicographic breadth-first search
Longest path problem
Minimax
Minimum bottleneck spanning tree
Misra & Gries edge coloring algorithm
Nearest neighbour algorithm
Network simplex algorithm
Nonblocking minimal spanning switch
Path-based strong component algorithm
Prim's algorithm
Proof-number search
Push-relabel maximum flow algorithm
Reverse-delete algorithm
Rocha-Thatte cycle detection algorithm
Sethi-Ullman algorithm
Shortest Path Faster Algorithm
SMA*
Spectral layout
Stoer-Wagner algorithm
Subgraph isomorphism problem
Suurballe's algorithm
Tarjan's off-line lowest common ancestors algorithm
Tarjan's strongly connected components algorithm
Topological sorting
Transitive closure
Transitive reduction
Travelling salesman problem
Tree traversal
Widest path problem
Yen's algorithm