Wednesday, 01.07.2015
Proposed Method

The proposed method (originally shown as a flowchart) proceeds as follows:

1. Database preprocessing: extraction of visual features (CSD) from all database images.
2. Map the high-dimensional feature space to a lower-dimensional space using kernel PCA; the query image's features are mapped into the same reduced space.
3. Cluster the reduced database using PAM, with the number of clusters taken from the optimum silhouette width plot.
4. Accumulate test samples from the cluster the query image belongs to, removing outliers by SVC (reduced database).
5. Classify with a one-class SVM and select the original CSD feature vectors corresponding to all positive samples; all relevant images are selected automatically.
6. Similarity measure using the L1 norm; display the 36 nearest images.
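The final retrieval step (display the 36 nearest images under the L1 norm) can be sketched as below. The function name and toy data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nearest_images_l1(query_vec, db_feats, k=36):
    """Rank database feature vectors by L1 distance to the query.

    query_vec: (d,) feature vector of the query image
    db_feats:  (n, d) matrix of database feature vectors
    Returns the indices of the k nearest images, closest first.
    """
    dists = np.abs(db_feats - query_vec).sum(axis=1)  # L1 norm per row
    return np.argsort(dists)[:k]

# toy example: 5 "images" with 3-dimensional features
db = np.array([[0., 0, 0], [1, 1, 1], [2, 2, 2], [0, 1, 0], [5, 5, 5]])
q = np.array([0., 0, 0])
print(nearest_images_l1(q, db, k=3))  # indices of the 3 nearest images
```

In the full system the same routine would run over the reduced (outlier-free) database rather than all images.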
The basic idea is to first map the input space into a feature space via a nonlinear map Φ(x), and then compute the principal components in that feature space.
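A minimal self-contained sketch of this idea with a Gaussian kernel; the function name and parameters are assumptions for illustration, not taken from the slides:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with a Gaussian (RBF) kernel.

    X: (n, d) data matrix. Returns (n, n_components) projections
    of the samples onto the leading kernel principal components.
    """
    # pairwise squared Euclidean distances and kernel matrix
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = K.shape[0]
    one = np.ones((n, n)) / n
    # center the kernel matrix (centering in feature space)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:n_components]  # top components
    vals, vecs = vals[idx], vecs[:, idx]
    # normalize eigenvectors so projected components have unit scale
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas
```

Note that only the kernel matrix is needed; the nonlinear map Φ itself is never formed.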
Definition
A reproducing kernel k is a function k : X × X → R, where X is typically a subset of R^N.
Illustration
Using a feature map Φ to map the data from the input space into a higher-dimensional feature space F: Φ : X → F.
Kernel Trick
We would like to compute the dot product in the higher-dimensional space, Φ(x)·Φ(y). To do this we only need to compute k(x, y), since
k(x, y) = Φ(x)·Φ(y).
Note that the feature map is never explicitly computed. We
avoid this, and therefore avoid a burdensome computational task.
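The trick can be verified numerically for the homogeneous polynomial kernel of degree 2 on R², whose explicit feature map is known in closed form (a worked example, not part of the slides):

```python
import numpy as np

def phi(v):
    """Explicit feature map for the degree-2 polynomial kernel on R^2:
    phi(v) = (v1^2, v2^2, sqrt(2)*v1*v2)."""
    return np.array([v[0] ** 2, v[1] ** 2, np.sqrt(2) * v[0] * v[1]])

def k(x, y):
    """Polynomial kernel of degree 2: k(x, y) = (x . y)^2."""
    return float(np.dot(x, y)) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])
print(np.dot(phi(x), phi(y)))  # 121.0 -- dot product in feature space
print(k(x, y))                 # 121.0 -- same value, without computing phi
```

Both expressions agree, but the kernel evaluation never materializes the feature space.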
Example kernels
Gaussian: k(x, y) = exp(−‖x − y‖² / (2σ²))
Polynomial: k(x, y) = (x·y + c)^d, c ≥ 0
Sigmoid: k(x, y) = tanh(κ⟨x, y⟩ + ϑ)
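The three kernels above translate directly into code (parameter defaults are illustrative):

```python
import numpy as np

def gaussian(x, y, sigma=1.0):
    """Gaussian (RBF) kernel: exp(-||x - y||^2 / (2 sigma^2))."""
    d = np.asarray(x) - np.asarray(y)
    return np.exp(-np.dot(d, d) / (2 * sigma ** 2))

def polynomial(x, y, c=1.0, d=2):
    """Polynomial kernel: (x . y + c)^d, with c >= 0."""
    return (np.dot(x, y) + c) ** d

def sigmoid(x, y, kappa=1.0, theta=0.0):
    """Sigmoid kernel: tanh(kappa * <x, y> + theta)."""
    return np.tanh(kappa * np.dot(x, y) + theta)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(gaussian(x, y))    # exp(-1), since ||x - y||^2 = 2
print(polynomial(x, y))  # (0 + 1)^2 = 1.0
print(sigmoid(x, y))     # tanh(0) = 0.0
```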
Nonlinear separation can be achieved.
Nonlinear Separation
Mercer Theory
Input Space to Feature Space
Necessary condition for the kernel (Mercer) trick: the kernel must admit an expansion with nonnegative coefficients,

k(x, y) = Σ_{i=1}^{N_F} λ_i ψ_i(x) ψ_i(y), λ_i ≥ 0,

or, in matrix form, the kernel matrix decomposes as K = Σ_i λ_i u_i u_iᵀ.
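The Mercer condition can be checked numerically: a valid kernel matrix is symmetric positive semidefinite, so all its eigenvalues are nonnegative and the eigendecomposition reassembles it exactly (a sketch assuming a Gaussian kernel with σ = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))

# Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / 2)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2)

# Mercer condition: K = sum_i lambda_i u_i u_i^T with all lambda_i >= 0
lam, U = np.linalg.eigh(K)
print(lam.min() >= -1e-10)        # True: no (meaningfully) negative eigenvalues
K_rebuilt = (U * lam) @ U.T       # sum_i lambda_i u_i u_i^T
print(np.allclose(K, K_rebuilt))  # True: the expansion recovers K
```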
Fixed-point iteration
Parameter
PAM cost for cluster C_i with medoid m_i: Σ_{r∈C_i} d(r, m_i)
Silhouette width of a sample r: s(r) = (q(r) − p(r)) / max(p(r), q(r)), which lies in [−1, 1]
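The silhouette width above can be computed per sample from a precomputed distance matrix; this is a minimal sketch (function and variable names are assumptions, not the authors' code), with p(r) the mean distance to r's own cluster and q(r) the smallest mean distance to any other cluster:

```python
import numpy as np

def silhouette(D, labels):
    """Silhouette width s(r) = (q(r) - p(r)) / max(p(r), q(r)) per sample.

    D: (n, n) pairwise distance matrix; labels: (n,) cluster assignments.
    Assumes every cluster has at least two members.
    """
    n = len(labels)
    s = np.zeros(n)
    for r in range(n):
        own = labels == labels[r]
        # p(r): mean distance to the other members of r's own cluster
        p = D[r, own & (np.arange(n) != r)].mean()
        # q(r): smallest mean distance to the members of any other cluster
        q = min(D[r, labels == c].mean() for c in set(labels) if c != labels[r])
        s[r] = (q - p) / max(p, q)
    return s

# two well-separated 1-D clusters: silhouettes should be close to 1
pts = np.array([0.0, 0.1, 10.0, 10.1])
D = np.abs(pts[:, None] - pts[None, :])
labels = np.array([0, 0, 1, 1])
print(silhouette(D, labels))
```

Averaging s(r) over all samples for each candidate number of clusters yields the silhouette width plot used to pick the number of PAM clusters.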
Σ_{i=1}^{N} α_i (x · x_i) + Σ_{i=1}^{N} Σ_{j=1}^{N} …