Assignment Set 2

1. Double Ended Queues (Deque): Like an ordinary queue, a double-ended queue is a container. It supports the following operations: enq_front, enq_back, deq_front, deq_back, and empty.

By choosing a subset of these operations, you can make the double-ended queue behave like a stack or like a queue. For instance, if you use only enq_front and deq_front you get a stack, and if you use only enq_front and deq_back you get a queue. By now, the reader should be used to using header objects in order to obtain uniform reference semantics, so we go directly to a version of the double-ended queue with a separate header file and implementation file. Example:

#include "dqueue.h"
#include "dlist.h"
#include <stdlib.h>
#include <assert.h>

struct dqueue {
    dlist head;
    dlist tail;
};

dqueue dq_create(void)
{
    dqueue q = malloc(sizeof(struct dqueue));
    q->head = q->tail = NULL;
    return q;
}

int dq_empty(dqueue q)
{
    return q->head == NULL;
}

void dq_enq_front(dqueue q, void *element)
{
    if (dq_empty(q))
        q->head = q->tail = dcons(element, NULL, NULL);
    else {
        q->head->prev = dcons(element, NULL, q->head);
        q->head->prev->next = q->head;
        q->head = q->head->prev;
    }
}

void dq_enq_back(dqueue q, void *element)
{
    if (dq_empty(q))
        q->head = q->tail = dcons(element, NULL, NULL);
    else {
        q->tail->next = dcons(element, q->tail, NULL);
        q->tail->next->prev = q->tail;
        q->tail = q->tail->next;
    }
}

void *dq_deq_front(dqueue q)
{
    assert(!dq_empty(q));
    {
        dlist temp = q->head;
        void *element = temp->element;
        q->head = q->head->next;
        free(temp);
        if (q->head == NULL)
            q->tail = NULL;
        else
            q->head->prev = NULL;
        return element;
    }
}

void *dq_deq_back(dqueue q)
{
    assert(!dq_empty(q));
    {
        dlist temp = q->tail;
        void *element = temp->element;
        q->tail = q->tail->prev;
        free(temp);
        if (q->tail == NULL)
            q->head = NULL;
        else
            q->tail->next = NULL;
        return element;
    }
}
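The deque code relies on a doubly linked list module from dlist.h, which is not shown in the assignment. It assumes a dlist node type with element, prev, and next fields, and a constructor dcons(element, prev, next). The following is only a minimal sketch of what that module might provide; names not used above are guesses:

/* Hypothetical sketch of the dlist module assumed by the deque code above. */
#include <stdlib.h>

typedef struct dnode *dlist;

struct dnode {
    void *element;   /* payload stored in this node        */
    dlist prev;      /* previous node, or NULL at the head */
    dlist next;      /* next node, or NULL at the tail     */
};

/* Allocate a node holding element, linked between prev and next. */
dlist dcons(void *element, dlist prev, dlist next)
{
    dlist node = malloc(sizeof(struct dnode));
    node->element = element;
    node->prev = prev;
    node->next = next;
    return node;
}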
2. Retrieving records and information is one of the most vital applications of computers. It usually involves giving a piece of information called the key and asking for a record that contains other associated information. This is achieved by going through the list to determine whether the given key exists, a process called searching. Computer systems are often used to store large amounts of data from which individual records must be retrieved according to some search criterion. The process of searching for an item in a data structure can be quite straightforward or very complex. Searching can be done on internal data structures or on external data structures. Information retrieval in the required format is a central activity in most computer applications, and it involves searching. Consider a list of elements, which can also represent a file of records, where each element is a key/number. The task is to find a particular key in the list in the shortest possible time. If you know you are going to search for an item in a set, you need to think carefully about what type of data structure you will use for that set. At a low level, the only searches usually mentioned are over sorted and unsorted arrays, but these are not the only data structures that are useful for searching.

Sequential Search (Linear Search): This is the most natural searching method. Simply put, it means going through a list or a file until the required record is found. It makes no demands on the ordering of records. A sequential search procedure examines the keys in order, comparing each one against the target, and stops as soon as a match is found (a sketch is given below). If the target is equally likely to be in any of the n positions, a successful search examines i elements when the key is in position i, so the average number of comparisons done by sequential search is (1 + 2 + ... + n)/n = (n + 1)/2.
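As an illustration (not part of the original assignment text), a minimal C sketch of sequential search over an array of integer keys:

#include <stdio.h>

/* Return the index of key in a[0..n-1], or -1 if it is not present. */
int sequential_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++) {
        if (a[i] == key)    /* one comparison per element examined */
            return i;
    }
    return -1;              /* searched the whole list without success */
}

int main(void)
{
    int a[] = { 42, 7, 19, 3, 88 };
    printf("%d\n", sequential_search(a, 5, 19));  /* prints 2  */
    printf("%d\n", sequential_search(a, 5, 4));   /* prints -1 */
    return 0;
}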

Binary Search: The drawbacks of sequential search can be eliminated if it becomes possible to discard large portions of the list from consideration in subsequent iterations. The binary search method does just that: it halves the size of the list to search in each iteration. It requires the list to be sorted; each step compares the key with the middle element and continues in the lower or upper half accordingly.
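A minimal C sketch of binary search on a sorted array (illustrative, not from the assignment):

/* Return the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint, written to avoid overflow */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid - 1;               /* discard the upper half */
    }
    return -1;
}

Because each iteration halves the remaining range, at most about log2(n) + 1 comparisons are needed.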

3. Once you've identified a problem as a search problem, it's important to choose the right type of search. The following information may be useful when making that decision.

Search   Time      Space     When to use
DFS      O(c^k)    O(k)      Must search the tree anyway, know the level the answers are on, or you aren't looking for the shallowest answer.
BFS      O(c^d)    O(c^d)    Know answers are very near the top of the tree, or want the shallowest answer.
DFS+ID   O(c^d)    O(d)      Want to do BFS, don't have enough space, and can spare the time.

(Here d is the depth of the answer and k is the depth searched; d <= k.)

Remember the ordering properties of each search. If the program needs to produce a list sorted shortest solution first (in terms of distance from the root node), use breadth-first search or iterative deepening. For other orders, depth-first search is the right strategy. If there isn't enough time to search the entire tree, use the algorithm that is more likely to find the answer. If the answer is expected to be in one of the rows of nodes closest to the root, use breadth-first search or iterative deepening. Conversely, if the answer is expected to be in the leaves, use the simpler depth-first search. Be sure to keep space constraints in mind: if memory is insufficient to maintain the queue for breadth-first search but time is available, use iterative deepening, which repeats a depth-limited DFS with increasing limits (see the sketch below).
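A minimal sketch of depth-limited DFS and iterative deepening over a simple n-ary tree. The node type, MAX_CHILDREN, and function names are illustrative assumptions, not part of the assignment:

#include <stddef.h>

#define MAX_CHILDREN 4

struct node {
    int value;
    int nchildren;
    struct node *child[MAX_CHILDREN];
};

/* Depth-limited DFS: look for key at most `limit` levels below t. */
static struct node *dfs_limited(struct node *t, int key, int limit)
{
    if (t == NULL)
        return NULL;
    if (t->value == key)
        return t;
    if (limit == 0)
        return NULL;                    /* do not descend any further */
    for (int i = 0; i < t->nchildren; i++) {
        struct node *found = dfs_limited(t->child[i], key, limit - 1);
        if (found != NULL)
            return found;
    }
    return NULL;
}

/* Iterative deepening: repeat depth-limited DFS with growing limits, so the
 * shallowest occurrence of key is found first while using only O(depth)
 * space, as in the DFS+ID row of the table above. */
struct node *iterative_deepening(struct node *root, int key, int max_depth)
{
    for (int limit = 0; limit <= max_depth; limit++) {
        struct node *found = dfs_limited(root, key, limit);
        if (found != NULL)
            return found;
    }
    return NULL;
}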

4. Binary Tree:

A binary tree is simply a tree in which each node can have at most two children. A binary search tree is a binary tree in which the nodes are assigned values, with the following restrictions:
- No duplicate values.
- The left subtree of a node can only have values less than the node's value.
- The right subtree of a node can only have values greater than the node's value.
- Recursively, the left subtree of a node is a binary search tree.
- Recursively, the right subtree of a node is a binary search tree.

Splay Trees: We shall describe the splaying algorithm by giving three rewrite rules, usually drawn as pictures (the zig, zig-zig, and zig-zag cases). In these pictures, x is the node that was accessed (and that will eventually be at the root of the tree). By looking at the local structure of the tree defined by x, x's parent, and x's grandparent, we decide which of the three rules to apply, and we continue to apply the rules until x is at the root of the tree.

Notes:
1) Each rule has a mirror-image variant, which covers all the cases.

2) The zig-zig rule is the one that distinguishes splaying from just rotating x to the root of the tree.
3) Top-down splaying is much more efficient in practice.
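The rewrite rules are built out of single tree rotations. As a point of reference, here is a minimal sketch of the two rotation primitives on a simple node type (an illustrative assumption, not the assignment's own code):

struct tree {
    int value;
    struct tree *left, *right;
};

/* Rotate right about y: its left child x moves up and y becomes x's
 * right child. The mirror-image rotate_left covers the symmetric case
 * mentioned in note 1 above. */
static struct tree *rotate_right(struct tree *y)
{
    struct tree *x = y->left;
    y->left = x->right;
    x->right = y;
    return x;               /* x is the new root of this subtree */
}

static struct tree *rotate_left(struct tree *x)
{
    struct tree *y = x->right;
    x->right = y->left;
    y->left = x;
    return y;               /* y is the new root of this subtree */
}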

Consider the following class of binary-tree based algorithms for processing a sequence of requests:

On each access we pay the distance from the root to the accessed node. Arbitrary rotations are also allowed, at a cost of 1 per rotation. For a sequence Sigma, let C(T, Sigma) be the cost of the optimal algorithm in this class for processing Sigma starting from tree T. Then the cost of splaying the sequence (starting from any tree) is O(C(T, Sigma) + n).

What about dynamic operations that change the set of elements in the tree? Here is how we can do Insertion, Join, and Deletion.

Insertion: Say we want to insert a new item e. Proceed like ordinary binary tree insertion, but stop short of linking the new item e into the tree. Let the node you were about to link e to be called p. We splay p to the root. Now build a new tree with e as the root and p attached below it on the appropriate side, depending on whether the item in e comes before or after p. Let's analyze this when all the weights are 1. The amortized cost of the splay is O(log n) according to the access lemma. Then we have to pay an additional amortized cost equal to the potential of the new root, which is log(n+1). So the amortized cost of the whole operation is O(log n). An alternative insertion algorithm is to do ordinary binary tree insertion and then simply splay the inserted item to the root. This also has efficient amortized cost, but the analysis is slightly more complicated.

Join: Here we have two trees, say A and B, and we want to put them together with all the items in A to the left of all the items in B. This is called the "Join" operation. It is done as follows: splay the rightmost node of A to its root; since that node now has no right child, make B its right child. The amortized cost of the operation is easily seen to be O(log n), where n is the number of nodes in the tree after joining. (Just assign uniform weights to all the nodes, as we did in the insertion analysis.)

Deletion: To delete a node x, simply splay it to the root and then Join its left and right subtrees together as described above. The amortized cost is easily seen to be O(log n), where n is the size of the tree before the deletion.
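A small C sketch of Join and Delete written in terms of an assumed splay helper. Here splay(t, key) is taken to re-root t at the node with the given key, or at the last node on the search path if the key is absent; the splay routine itself and the node type are assumptions, not shown in the assignment:

#include <limits.h>
#include <stddef.h>

struct tree {
    int value;
    struct tree *left, *right;
};

/* Assumed helper (not defined here): splay t so that the node with key,
 * or the last node touched while searching for key, becomes the root. */
struct tree *splay(struct tree *t, int key);

/* Join: all keys in a are assumed smaller than all keys in b. */
struct tree *join(struct tree *a, struct tree *b)
{
    if (a == NULL)
        return b;
    a = splay(a, INT_MAX);   /* largest key of a becomes the root ...  */
    a->right = b;            /* ... so its right child slot is free    */
    return a;
}

/* Delete: splay the key to the root, then join its two subtrees. */
struct tree *delete_key(struct tree *t, int key)
{
    if (t == NULL)
        return NULL;
    t = splay(t, key);
    if (t->value != key)
        return t;            /* key not present; only the splay happened */
    struct tree *l = t->left, *r = t->right;
    /* free(t) would go here if nodes are heap-allocated */
    return join(l, r);
}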

5. Threaded List: A threaded list is a list in which additional linkage structures, called threads, have been added to provide for traversals in special orders. This permits bounded-workspace, i.e. read-only, traversals along the direction provided by the threads. It does presuppose that the list and any sublists are not recursive and, further, that no sublist is shared.

ii. Dynamic Hashing: The number of buckets is not fixed as with static hashing, but grows or shrinks as needed.
The hash function: keys are mapped to an arbitrarily long pseudorandom bit string, often called a "signature" or "pseudo key".
- Only the first "so many" bits will be used, but we do not know in advance how many.
- A possible hash function: Key % (a prime), where the prime is safely above the maximum file size.
If we add a record:
- either it fits on the page it maps to (this should happen most often),
- or the page overflows and must be split.
The file starts with a single bucket. Once the bucket is full and a new record is to be inserted, the bucket splits into 2 buckets and all values are re-inserted into the appropriate bucket. On the first bucket split, all values whose hash value starts with a 0 are inserted into one bucket, while all values whose hash value starts with a 1 are inserted into the other bucket. At this point a binary tree structure, called a directory, is built, which has 2 types of nodes:
- Internal nodes: these guide the search; each has a left pointer corresponding to a 0 bit and a right pointer corresponding to a 1 bit.
- Leaf nodes (bucket pointers): these hold a bucket address.
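A small, heavily simplified C sketch of the lookup side of these ideas: computing a pseudo key and consuming its bits to walk a directory of internal nodes down to a bucket leaf. All names, the chosen prime, and the directory layout are illustrative assumptions; a real dynamic hashing implementation also handles bucket splits, re-insertion, and storage management:

#include <stdint.h>
#include <stddef.h>

#define PRIME 786433u   /* a prime assumed to be safely above the maximum file size */

/* Pseudo key / signature: map the key to a pseudorandom bit string. */
static uint32_t pseudo_key(uint32_t key)
{
    return key % PRIME;
}

/* Directory node: either an internal node (guides the search by one bit)
 * or a leaf holding a bucket address. */
struct dir_node {
    int is_leaf;
    struct dir_node *zero;   /* internal: next node when the current bit is 0 */
    struct dir_node *one;    /* internal: next node when the current bit is 1 */
    long bucket_address;     /* leaf: where the bucket lives on disk */
};

/* Consume successive bits of the signature until a leaf is reached. */
long find_bucket(const struct dir_node *d, uint32_t key)
{
    uint32_t sig = pseudo_key(key);
    int bit = 0;
    while (!d->is_leaf) {
        d = ((sig >> bit) & 1u) ? d->one : d->zero;
        bit++;
    }
    return d->bucket_address;
}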
