
CS304 - Assignment 04

Reg No : 050293R
Name : Skandhakumar Nimalaprakasan

Q1
R = (A, B, C, D, E)
F = (A → BC, CD → E, B → D, E → A)
A → A (trivial)
A → BC gives A → B, A → C (decomposition)
A → B, B → D gives A → D (transitivity)
A → C, A → D gives A → CD (union)
A → CD, CD → E gives A → E (transitivity)

A → A, A → B, A → C, A → D, A → E
gives A → ABCDE (union rule)

E → A, A → ABCDE gives E → ABCDE (transitivity)

CD → E, E → ABCDE gives CD → ABCDE (transitivity)

B → D gives BC → CD gives BC → ABCDE (transitivity)

Any functional dependency whose left-hand side contains A, BC, CD, or E is in F+.

A, BC, CD and E each functionally determine ABCDE, and no proper subset of any of them does (B+ = BD, C+ = C, D+ = D).

Thus they are the candidate keys of R.
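The closures derived above can be verified mechanically with the standard attribute-closure algorithm; a minimal sketch in Python (the relation and FDs are from the question, the function name is my own):

```python
def closure(attrs, fds):
    """Compute the attribute closure attrs+ under a set of FDs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the left-hand side is already in the closure,
            # the right-hand side must be added as well.
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# F = {A -> BC, CD -> E, B -> D, E -> A}
F = [("A", "BC"), ("CD", "E"), ("B", "D"), ("E", "A")]

for key in ["A", "BC", "CD", "E"]:
    print(key, "->", "".join(sorted(closure(key, F))))  # each prints ... -> ABCDE
```

Each of the four closures comes out as ABCDE, confirming that A, BC, CD and E are keys.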
Q2
R = (A, B, C, D, E)
F = (A → BC, CD → E, B → D, E → A)

A decomposition {R1, R2} is a lossless-join decomposition if R1 ∩ R2 → R1 or R1 ∩ R2 → R2.


Let R1 = (A, B, C)
R2 = (A, D, E)
R1 ∩ R2 = A

Using results from (Q1)


A → A, A → D, A → E gives A → ADE (union)
So A is a candidate key for R2 (A determines every attribute of R2).
Therefore R1 ∩ R2 → R2.
Thus {R1, R2} is a lossless-join decomposition.

F of R1:
A → BC
F+ of R1:
A→B
A→C (decomposition)
A → ABC (union)
A is a superkey for R1, and every nontrivial FD on R1 has A on its left-hand side,
R1 is in BCNF

F of R2:
E→A
Using results from (Q1)
A→D
A→E
F+ of R2:
A → DE (union)
A → ADE (union)
E → D (transitivity: E → A, A → D)
E → E (trivial)
E → ADE (union)
A and E are superkeys for R2, and every nontrivial FD on R2 has A or E on its left-hand side,
R2 is in BCNF

Thus;
{R1, R2} is a lossless-join decomposition into BCNF for R
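The lossless-join test used above (R1 ∩ R2 must functionally determine R1 or R2) can be checked with the same closure routine; a self-contained sketch:

```python
def closure(attrs, fds):
    """Attribute closure of attrs under fds."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

F = [("A", "BC"), ("CD", "E"), ("B", "D"), ("E", "A")]
R1, R2 = set("ABC"), set("ADE")
common = R1 & R2                        # {A}

# Lossless join iff (R1 ∩ R2)+ contains all of R1 or all of R2.
c = closure(common, F)
print("lossless:", R1 <= c or R2 <= c)  # lossless: True
```

Here A+ = ABCDE, so the intersection determines both fragments.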
Q3
● In a primary index, the search key determines the sequential order of the file: the records are stored in the sequential order of the search-key values.
● In a secondary index, the records are not stored in the sequential order of the search-key values.
● In a dense index, there is an index entry for every search-key value, so the order of the records in the actual file does not matter.
● In a sparse index, only some of the search-key values appear; to find a record, we locate the largest indexed key that is smaller than or equal to the wanted key and scan the file sequentially from there.
● Because a primary index is sequentially ordered, it can be either dense or sparse.
● But because the file is not ordered on a secondary index's search key, a sparse index cannot be used: search-key values missing from the index could not be found by scanning forward from a neighbouring entry.
● We would end up scanning the whole file sequentially, gaining no benefit from the index.
● Thus a secondary index must be dense.
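The sparse-index lookup described above only works because the file is ordered on the search key; a small illustration with made-up records and block positions:

```python
import bisect

# Sequential file of (key, record) pairs, ordered on the search key.
records = [(5, "r5"), (9, "r9"), (32, "r32"), (43, "r43"), (53, "r53"), (77, "r77")]

# Sparse index: one entry per block, pointing at the block's first key.
sparse_keys = [5, 32, 53]            # indexed search-key values
block_start = {5: 0, 32: 2, 53: 4}   # indexed key -> position in the file

def sparse_lookup(key):
    # Find the largest indexed key <= the wanted key ...
    i = bisect.bisect_right(sparse_keys, key) - 1
    if i < 0:
        return None
    # ... then scan sequentially from that position.
    for k, rec in records[block_start[sparse_keys[i]]:]:
        if k == key:
            return rec
        if k > key:
            return None
    return None

print(sparse_lookup(43))  # r43
```

If the records were not in search-key order, the sequential scan from the nearest index entry would be meaningless, which is exactly why a secondary index must be dense.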
Q4
Inserting into the B+ Tree
n=4; 9,5,43,32,0,2,77,53,81,97,45;

Step 1:
Insert 9, 5, 43; keep the leaf ordered (5, 9, 43) while inserting;

Step 2:
Insert 32, leaf is full; split into 5,9 and 32,43 and copy 32 into a new root; insert 0 and keep the leaf ordered;

Step 3:
Insert 2, leaf is full; split into 0,2 and 5,9 and copy 5 up into the root; insert 77;

Step 4:
Insert 53, leaf is full; split into 32,43 & 53,77 and copy 53 up into the root; insert 81;
Insert 97, leaf is full; split into 53,77 & 81,97 and copy 81 up into the root; root is now full; split root into 5,32 & 53,81;
Step 5:
Add a new root; 53 cannot be placed in it, as 53 already appears twice in the tree;
add 32 to the new root instead and adjust the pointers; remove 32 from its original place (in an internal-node split the middle key moves up rather than being copied);
Insert 45;

Step 6:
The final B+ Tree after inserting all given values;
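The leaf-split rule used throughout the steps above (split a full leaf in half and copy the first key of the right half up into the parent) can be sketched as follows; `max_keys = 3` reflects n = 4, and the function name is my own:

```python
import bisect

def insert_into_leaf(leaf, key, max_keys=3):
    """Insert key into a sorted B+ tree leaf (n = 4, so at most 3 keys).
    Returns (leaf, None, None) if the key fits, or (left, right, sep)
    after a split, where sep is the key copied up into the parent."""
    bisect.insort(leaf, key)
    if len(leaf) <= max_keys:
        return leaf, None, None
    mid = len(leaf) // 2
    left, right = leaf[:mid], leaf[mid:]
    return left, right, right[0]  # a leaf split *copies* the separator up

# Step 2 above: the leaf [5, 9, 43] is full, so inserting 32 splits it.
print(insert_into_leaf([5, 9, 43], 32))  # ([5, 9], [32, 43], 32)
```

The internal-node split in step 5 differs only in that the middle key moves up instead of being copied.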

Delete 32:
Just delete 32 from its leaf; no other change is needed;
The final B+ Tree after deleting 32;
Delete 0:
Step 1:
Delete 0 from its leaf; the leaf now has only one element, and a leaf cannot have fewer than 2; merge the leaves;
After merging the leaves, the key in the node immediately above changes from 5 to 32;
The node containing 32 now has only one child, and an internal node cannot have fewer than two children; merge the nodes;

Step 2:
After merging the nodes, the root has only one child; there is no need for a root with a single child;

Step 3:
Delete the old root; the (32, 53, 81) node becomes the new root;
The final B+ Tree after deleting 0;
Delete 81:
Step 1:
Delete 81 from its leaf; the leaf is left with the single element 97; merge it with the (53, 77) node;

Step 2:
The separator key 81 is now redundant; remove 81;
The final B+ Tree after deleting 81;
Q5
Step 1: The first three entries go into the single initial bucket.

Step 2: Another entry cannot fit, so double the number of buckets (double the directory) and split the full bucket.

Step 3: The first bucket is full again, so double the number of buckets again; the second bucket does not need to be split.

Step 4: Split the bucket pointed to by 10 and 11, and add another bucket to hold the new entries.

This is the final hash structure.
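Since the figures are not reproduced here, the directory-doubling mechanics can be sketched in code. The bucket size of 3 matches step 1; the key set and the hash function (Python's built-in hash, which is the identity for small integers) are illustrative assumptions, not the values from the question:

```python
class ExtendibleHash:
    """Minimal extendible hashing: a directory of 2**global_depth slots,
    each pointing to a bucket; a full bucket is split on its next hash
    bit, doubling the directory when its local depth equals the global."""

    def __init__(self, bucket_size=3):
        self.bucket_size = bucket_size
        self.global_depth = 0
        self.directory = [{"depth": 0, "items": []}]

    def _index(self, key):
        # Use the low global_depth bits of the hash as the directory index.
        return hash(key) & ((1 << self.global_depth) - 1)

    def insert(self, key):
        bucket = self.directory[self._index(key)]
        if len(bucket["items"]) < self.bucket_size:
            bucket["items"].append(key)
            return
        if bucket["depth"] == self.global_depth:
            # No spare directory bit left: double the directory.
            self.directory = self.directory * 2
            self.global_depth += 1
        # Split the full bucket on its next hash bit.
        bucket["depth"] += 1
        new_bucket = {"depth": bucket["depth"], "items": []}
        bit = 1 << (bucket["depth"] - 1)
        for i, b in enumerate(self.directory):
            if b is bucket and i & bit:
                self.directory[i] = new_bucket
        old_items, bucket["items"] = bucket["items"], []
        for k in old_items + [key]:
            self.insert(k)  # redistribute, possibly splitting again

    def contains(self, key):
        return key in self.directory[self._index(key)]["items"]

h = ExtendibleHash(bucket_size=3)
for k in [9, 5, 43, 32, 0, 2, 77, 53]:
    h.insert(k)
print("global depth:", h.global_depth)  # global depth: 3
```

Note that doubling the directory duplicates pointers rather than buckets, which is why the second bucket in step 3 above does not need to be split.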
