
Bahria University, Islamabad

Subject: Digital Signal Processing (EEN-325) Instructors: Dr. Saleem / Taimur


CEP (CLO-4) Class: BEE-6-A, B, C, D: Spring 2019 Deadline: 17-05-19

CLO-4: Design and interpret a complete discrete/digital system in both the time and frequency domains

Note: Marks will be awarded on the basis of the presentation and viva. The complex engineering problem may also be completed
in groups, with a maximum of two students in each group.

Tools Allowed: MATLAB, Python IDLE

Music Signal Analysis and Recognition:


Music signal processing is a sub-branch of digital signal processing that deals with the analysis, interpretation and
processing of digital musical signals. Musical signals can be more complicated than human vocal sounds, occupying
a wider spectral band. Your task is to develop an automated classification model that can autonomously classify
musical signals, speech signals and songs (signals comprising both musical and human vocal patterns). The input
signal fed into the proposed model can be noisy, so the first step is to preprocess the input signal to remove the
noisy content. Afterwards, you need to extract the most relevant and distinct features, chosen at your own
discretion, for classifying the input signal. The classifiers to be implemented are K-Nearest Neighbors and
probabilistic Naïve Bayes. Use both classifiers separately on the same set of features and report which one
performs better and why. The top-level block diagram of the proposed classification model is shown in Figure 1:

Figure 1: Block Diagram of the Proposed System
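As a starting point, a minimal Python sketch of the first two pipeline stages (preprocessing and feature extraction) is given below. The filter band (a 300 Hz to 8 kHz Butterworth band-pass) and the chosen features (zero-crossing rate, spectral centroid, spectral roll-off) are illustrative assumptions only; you are expected to select and justify your own denoising method and feature set.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess(signal, fs, low=300.0, high=8000.0):
    # Band-pass filter to suppress out-of-band noise.
    # The 300 Hz-8 kHz band is an assumed choice, not a requirement,
    # and 'high' must stay below fs/2.
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def extract_features(signal, fs):
    # Illustrative feature vector: zero-crossing rate,
    # spectral centroid and spectral roll-off.
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    # Spectral centroid: magnitude-weighted mean frequency.
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)

    # Spectral roll-off: frequency below which 85% of the spectral magnitude lies.
    cumulative = np.cumsum(spectrum)
    rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]

    return np.array([zcr, centroid, rolloff])

Each training and test clip would be passed through preprocess() and then extract_features() before being handed to either classifier.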

The training dataset will be gathered by each group separately and labelled by its members. Remember: the more
samples (with more variability) you collect for each class, the more exposure you will give to your classifier during
the training phase. Apart from this, a detailed description of both supervised classifiers is given below:
K-Nearest Neighbors:
In pattern recognition and machine learning, k-nearest neighbors (KNN) is a simple algorithm that stores all
available cases and classifies new cases based on a similarity measure (e.g. distance). KNN is a non-parametric
method where the input consists of the k closest training examples in the feature space. The output is a class
membership. An object is classified by a majority vote of its neighbors, with the object being assigned to the class

most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is
simply assigned to the class of that single nearest neighbor. The algorithm for the KNN classification model is given
below:
Algorithm: KNearestNeighbors
Input: Training data X (m samples)
       Training data labels Y
       Sample x to classify
Output: Decision 𝑦𝑝 about sample x

for i ← 1 to m do
    Compute the distance between training sample 𝑋𝑖 and the unlabeled sample x, i.e. d(𝑋𝑖, x)
end for

Compute the set I containing the indices of the k smallest distances d(𝑋𝑖, x)

Compute the decision class 𝑦𝑝 as the majority label among {𝑌𝑖 : i ∈ I}

return 𝑦𝑝

Figure 2: KNN Classification Model
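For reference, a minimal NumPy sketch of this decision rule is given below. It assumes the training feature matrix X, the label vector Y and the query feature vector x are already available (e.g. produced by the feature-extraction stage), and it uses the Euclidean distance; both the distance measure and the value of k are design choices you should justify.

import numpy as np
from collections import Counter

def knn_classify(X, Y, x, k=3):
    # Distance from x to every training sample X_i (Euclidean distance, an assumed choice).
    distances = np.linalg.norm(X - x, axis=1)

    # Indices I of the k smallest distances.
    nearest = np.argsort(distances)[:k]

    # Decision y_p: the majority label among the k nearest neighbors.
    votes = Counter(Y[i] for i in nearest)
    return votes.most_common(1)[0][0]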

Naïve Bayes:

Naïve Bayes is a probabilistic non-linear classifier that classifies a candidate test vector based on Bayes' rule, where
all the features within the feature vector are assumed to be statistically independent given the class:

𝑃(𝑤𝑖|𝑥) = [𝑃(𝑥|𝑤𝑖) ∗ 𝑃(𝑤𝑖)] / 𝑃(𝑥)

where 𝑃(𝑤𝑖|𝑥) is the posterior probability, i.e. the conditional probability of class 𝑤𝑖 given the feature
vector 𝑥, 𝑃(𝑥|𝑤𝑖) is the likelihood of the feature vector 𝑥 given the class 𝑤𝑖, 𝑃(𝑤𝑖) is the prior probability
of class 𝑤𝑖, and 𝑃(𝑥) is the evidence of feature vector 𝑥. Naïve Bayes can be used to classify both numerical and
categorical test vectors. For classifying numerical test vectors, Naïve Bayes calculates the likelihood of each
feature for each class label through the Gaussian probability density function. The likelihood 𝑃(𝑥|𝑤𝑖) of a
feature vector x is then computed by taking the product of the likelihoods of all its features. The Gaussian
probability density function is:

𝑃(𝑥 | 𝜇, 𝜎) = (1 / √(2𝜋𝜎²)) ∗ 𝑒^(−(𝑥−𝜇)² / (2𝜎²))

where 𝜇 is the mean and 𝜎 is the standard deviation of the Gaussian distribution. The algorithm to implement the
Bayesian classification model is given below:

Algorithm: Naïve Bayes

Input: Training data X
       Training data labels Y
       Sample x to classify
Output: Decision 𝑦𝑝 about sample x

Compute the prior probability p1 of the first class
Compute the prior probability p2 of the second class

Compute the likelihood of each feature of the test sample x against the first-class training samples through the Gaussian distribution
Compute the likelihood of each feature of the test sample x against the second-class training samples through the Gaussian distribution

Compute the posterior probability P1 by multiplying all first-class feature likelihoods for x with each other and then with p1
Compute the posterior probability P2 by multiplying all second-class feature likelihoods for x with each other and then with p2

Compute the evidence by summing P1 and P2

Normalize P1 by dividing it by the evidence
Normalize P2 by dividing it by the evidence

if P1 >= P2 then
    𝑦𝑝 = 0
else
    𝑦𝑝 = 1
return 𝑦𝑝
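A minimal NumPy sketch of this two-class Gaussian Naïve Bayes rule is shown below; the per-feature Gaussian likelihoods, the priors, the evidence and the final decision mirror the steps listed above. The helper names and the class encoding (0 for the first class, 1 for the second) are assumptions for illustration, and extending the rule to the three classes required in this CEP is left to you.

import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Gaussian probability density of x for mean mu and standard deviation sigma.
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / np.sqrt(2.0 * np.pi * sigma ** 2)

def naive_bayes_classify(X, Y, x):
    # X: (m, d) training feature matrix, Y: length-m label array of 0s and 1s,
    # x: length-d test feature vector.
    X0, X1 = X[Y == 0], X[Y == 1]

    # Prior probabilities p1 and p2 from the class frequencies.
    p1, p2 = len(X0) / len(X), len(X1) / len(X)

    # Per-feature Gaussian likelihoods for each class
    # (mean and std estimated per feature; the small constant guards against zero std).
    like1 = gaussian_pdf(x, X0.mean(axis=0), X0.std(axis=0) + 1e-9)
    like2 = gaussian_pdf(x, X1.mean(axis=0), X1.std(axis=0) + 1e-9)

    # Unnormalized posteriors: product of the feature likelihoods times the prior.
    P1 = np.prod(like1) * p1
    P2 = np.prod(like2) * p2

    # Evidence and normalization.
    evidence = P1 + P2
    P1, P2 = P1 / evidence, P2 / evidence

    return 0 if P1 >= P2 else 1

In practice the product of many small likelihoods can underflow to zero; summing log-likelihoods instead of multiplying likelihoods is a common, numerically safer variant of the same rule.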
Good Luck 😊