CLO-4: Design and interpret complete discrete/digital systems in both the time and frequency domains
Note: Marks will be given based upon the presentation and viva. The complex engineering problem can also be completed in groups, with a maximum of two students in each group.
The training dataset will be gathered and labelled by each group separately. Remember: the more samples (with more variability) you collect for each class, the more exposure you will give to your classifier during the training phase. Apart from this, a detailed description of both supervised classifiers is given below:
K-Nearest Neighbors:
In pattern recognition and machine learning, k-nearest neighbors (KNN) is a simple algorithm that stores all
available cases and classifies new cases based on a similarity measure (e.g. distance). KNN is a non-parametric
method where the input consists of the k closest training examples in the feature space. The output is a class
membership. An object is classified by a majority vote of its neighbors, with the object being assigned to the class
Bahria University, Islamabad
Subject: Digital Signal Processing (EEN-325) Instructors: Dr. Saleem / Taimur
CEP (CLO-4) Class: BEE-6-A, B, C, D: Spring 2019 Deadline: 17-05-19
most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. The algorithm for the KNN classification model is given below:
Algorithm: KNearestNeighbors
Input: Training data X
Training data labels Y
Sample x to classify
Output: Decision 𝑦𝑝 about sample x
for i ← 1 to m do
Compute the distance between training sample 𝑋𝑖 and the unlabeled sample x, i.e. d(𝑋𝑖, x)
end for
Compute the set I containing the indices of the k smallest distances d(𝑋𝑖, x)
Assign 𝑦𝑝 ← the majority class label among {𝑌𝑖 : i ∈ I}
return 𝑦𝑝
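The pseudocode above can be sketched in Python with NumPy (the dataset here is a hypothetical toy example, not the one you will collect for the CEP):

```python
import numpy as np

def knn_classify(X, Y, x, k=3):
    """Classify sample x by a majority vote among its k nearest
    training samples, following the pseudocode above."""
    # Compute the distance d(X_i, x) for every training sample (Euclidean)
    distances = np.linalg.norm(X - x, axis=1)
    # The set I: indices of the k smallest distances
    I = np.argsort(distances)[:k]
    # Majority vote among the neighbours' labels
    labels, counts = np.unique(Y[I], return_counts=True)
    return labels[np.argmax(counts)]

# Toy two-class dataset (illustrative only)
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
Y = np.array([0, 0, 1, 1])
print(knn_classify(X, Y, np.array([0.05, 0.1]), k=3))  # majority of the 3 nearest neighbours is class 0
```

With k = 1 the function reduces to assigning the class of the single nearest neighbour, as described in the text.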
Naïve Bayes:
Naïve Bayes is a probabilistic non-linear classifier that classifies a candidate test vector based on Bayes' rule, where all the features within the feature vector are assumed to be statistically independent:
𝑃(𝑤𝑖|𝑥) = 𝑃(𝑥|𝑤𝑖) 𝑃(𝑤𝑖) / 𝑃(𝑥)
where 𝑃(𝑤𝑖|𝑥) is the posterior probability, i.e. the conditional probability of the class 𝑤𝑖 given the feature vector 𝑥; 𝑃(𝑥|𝑤𝑖) is the likelihood of the feature vector 𝑥 given the class 𝑤𝑖; 𝑃(𝑤𝑖) is the prior probability of class 𝑤𝑖; and 𝑃(𝑥) is the evidence of the feature vector 𝑥. Naïve Bayes can be used to classify both numerical and categorical test vectors. For classifying numerical test vectors, Naïve Bayes calculates the likelihood probability of each feature for each class label through the Gaussian probability density function. The likelihood probability 𝑃(𝑥|𝑤𝑖) of a feature vector x is then computed by taking the product of the likelihood probabilities of all features. The Gaussian distribution can be computed through:
𝑃(𝑥; 𝜇, 𝜎) = (1 / √(2𝜋𝜎²)) exp(−(𝑥 − 𝜇)² / (2𝜎²))
where 𝜇 is the mean and 𝜎 is the standard deviation of the Gaussian distribution. The algorithm to implement the Bayesian classification model is given below:
Compute the prior probabilities p1 and p2 from the relative frequencies of the two classes in the training labels.
For each class, compute the Gaussian likelihood of every feature of x using that class's feature-wise mean and standard deviation.
Compute posterior probability 'P1' by multiplying all first-class likelihood probabilities for x with each other and then with p1.
Compute posterior probability 'P2' by multiplying all second-class likelihood probabilities for x with each other and then with p2.
If P1 >= P2
𝑦𝑝 = 0
else
𝑦𝑝 = 1
return 𝑦𝑝
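The steps above can be sketched in Python with NumPy for the two-class case (the toy dataset is hypothetical; class means and standard deviations are estimated from the training samples of each class):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Gaussian likelihood of a feature value, as in the formula above
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def naive_bayes_classify(X, Y, x):
    """Return y_p = 0 if P1 >= P2, else 1, following the algorithm above."""
    posteriors = []
    for c in (0, 1):
        Xc = X[Y == c]
        prior = len(Xc) / len(X)          # p1 or p2 from class frequencies
        mu = Xc.mean(axis=0)              # feature-wise class mean
        sigma = Xc.std(axis=0, ddof=1)    # feature-wise class standard deviation
        # Product of per-feature likelihoods (feature independence assumption)
        likelihood = np.prod(gaussian_pdf(x, mu, sigma))
        posteriors.append(likelihood * prior)
    return 0 if posteriors[0] >= posteriors[1] else 1

# Toy two-class dataset (illustrative only)
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
Y = np.array([0, 0, 0, 1, 1, 1])
print(naive_bayes_classify(X, Y, np.array([0.1, 0.1])))  # near the class-0 mean, so class 0
```

Note that the evidence 𝑃(𝑥) is omitted here: it is the same for both classes, so it does not affect which posterior is larger.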
Good Luck 😊