Clustering
Similar objects are placed in the same group and dissimilar objects in different groups.
Objects are described and clustered using a set of features and values.
Clustering helps to develop a probabilistic model for a problem and to understand the characteristics of the data.
Generalization
Group objects into clusters and generalize from what we know about some members of the cluster.
Clustering does not require training data and is hence called unsupervised. Classification is supervised and requires a set of labeled training instances for each group. The result of clustering only depends on natural divisions in the data and not on any pre-existing categorization scheme as in classification.
Types of Clustering
Hierarchical
Bottom-up: start with individual objects and group the most similar ones.
Top-down: start with all objects and divide into groups so as to maximize within-group similarity.
Non-hierarchical
Hierarchical Clustering
Bottom-up:
1. Start with a separate cluster for each object.
2. Determine the two most similar clusters and merge them into a new cluster.
3. Repeat on the new clusters that have been formed.
4. Terminate when one large cluster containing all objects has been formed.

Example of a similarity measure (Euclidean distance between objects i and j over K features; smaller distance means greater similarity):

d_{ij} = \sqrt{\sum_{k=1}^{K} (x_{ik} - x_{jk})^2}
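A minimal sketch of the bottom-up procedure in Python (not part of the original slides): it assumes numpy, uses the Euclidean distance above, and picks single-link distance between clusters for concreteness; the function names are illustrative.

    import numpy as np

    def euclidean(x_i, x_j):
        # d_ij = sqrt(sum_k (x_ik - x_jk)^2), the measure given above.
        return np.sqrt(np.sum((x_i - x_j) ** 2))

    def bottom_up_cluster(points):
        # Agglomerative clustering over an (n, K) array; returns the merge history.
        # Step 1: a separate cluster (list of object indices) for each object.
        clusters = [[i] for i in range(len(points))]
        history = []
        # Step 4: terminate when one large cluster contains all objects.
        while len(clusters) > 1:
            # Step 2: find the two most similar (closest) clusters,
            # here measured single-link style (closest pair of members).
            best, best_d = (0, 1), float("inf")
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    d = min(euclidean(points[i], points[j])
                            for i in clusters[a] for j in clusters[b])
                    if d < best_d:
                        best_d, best = d, (a, b)
            # ... and merge them into a new cluster.
            a, b = best
            history.append((clusters[a], clusters[b], best_d))
            merged = clusters[a] + clusters[b]
            # Step 3: repeat on the newly formed set of clusters.
            clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
            clusters.append(merged)
        return history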
Top-down:
1. Start from a single cluster containing all objects.
2. Iteratively determine the cluster that is least coherent and split it.
3. Repeat until every cluster has one object.
The similarity between two clusters can be defined in three ways (sketched in code below):
Single-link: similarity of the two most similar members.
Complete-link: similarity of the two least similar members.
Group-average: average similarity between members.
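The three merge criteria as small Python functions, a sketch assuming a pairwise similarity function sim(x, y) supplied by the caller (the function names are illustrative, not from the slides):

    def pairwise_sims(c1, c2, sim):
        # All pairwise similarities between members of clusters c1 and c2.
        return [sim(x, y) for x in c1 for y in c2]

    def single_link(c1, c2, sim):
        # Similarity of the two most similar members.
        return max(pairwise_sims(c1, c2, sim))

    def complete_link(c1, c2, sim):
        # Similarity of the two least similar members.
        return min(pairwise_sims(c1, c2, sim))

    def group_average(c1, c2, sim):
        # Average similarity between members.
        sims = pairwise_sims(c1, c2, sim)
        return sum(sims) / len(sims)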
Group-Average
Instead of the greatest similarity between elements of two clusters (single-link) or the least similarity (complete-link), the merge criterion is the average similarity. Group-average clustering is a compromise between single-link and complete-link clustering.
An application to language modeling: many rare events do not have enough training data for accurate probabilistic modeling, so clustering is used to improve the language model by way of generalization.
Non-Hierarchical Clustering
Start out with a partition based on randomly selected seeds and then refine this initial partition. Several passes of reallocating objects are needed, whereas hierarchical algorithms need only one pass. Hierarchical clusterings, too, can be improved by such reallocations. Stopping is based on some heuristic measure of goodness or cluster quality, such as the number of clusters or the size of clusters; there is no optimal solution.
K-Means
A hard clustering algorithm: it defines clusters by the center of mass of their members (a code sketch follows the steps).
1. Define the initial cluster centers randomly.
2. Assign each object to the cluster whose center is closest.
3. Recompute the center for each cluster.
4. Stop when the centers do not change.
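A minimal K-means sketch in Python (assumes numpy; the function name, seeding scheme, and empty-cluster handling are illustrative choices, not from the slides):

    import numpy as np

    def k_means(points, k, max_iter=100, seed=0):
        # Hard clustering of an (n, m) array; returns (centers, assignments).
        rng = np.random.default_rng(seed)
        # 1. Define the initial centers randomly (here: k randomly chosen objects).
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(max_iter):
            # 2. Assign each object to the cluster whose center is closest.
            dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            assign = dists.argmin(axis=1)
            # 3. Recompute each center as the mean (center of mass) of its members;
            #    an empty cluster keeps its old center.
            new_centers = np.array([
                points[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
                for j in range(k)
            ])
            # 4. Stop when the centers do not change.
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return centers, assign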
The EM Algorithm
Determine the most likely estimates for the parameters of the distribution. The idea is that the data are generated by several underlying causes.
Z: the unobserved (hidden) data set.
EM (Cont.)
We assume that the data are generated by k Gaussians (k clusters). Each Gaussian with mean \mu_j and covariance matrix \Sigma_j (m is the dimensionality of the data) is given by:

n(x; \mu_j, \Sigma_j) = \frac{1}{\sqrt{(2\pi)^m |\Sigma_j|}} \exp[-(x - \mu_j)^T \Sigma_j^{-1} (x - \mu_j)/2]
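The density above as a short Python function, a sketch assuming numpy (the function name and argument layout are illustrative):

    import numpy as np

    def gaussian_density(x, mu, sigma):
        # n(x; mu_j, Sigma_j) for an m-dimensional Gaussian, as in the formula above.
        m = len(mu)
        diff = x - mu
        norm = np.sqrt((2 * np.pi) ** m * np.linalg.det(sigma))
        return np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff) / norm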
EM (Cont.)
Parameters are found by maximizing the log likelihood, where \theta = (\theta_1, \ldots, \theta_k)^T and each \theta_j = (\mu_j, \Sigma_j, \pi_j):

l(X \mid \theta) = \log \prod_{i=1}^{n} P(x_i) = \log \prod_{i=1}^{n} \sum_{j=1}^{k} \pi_j \, n_j(x_i; \mu_j, \Sigma_j) = \sum_{i=1}^{n} \log \sum_{j=1}^{k} \pi_j \, n_j(x_i; \mu_j, \Sigma_j)
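A direct Python translation of this log likelihood, reusing gaussian_density from the sketch above (the lists-of-per-cluster-parameters layout is an assumption):

    import numpy as np

    def log_likelihood(points, mus, sigmas, pis):
        # l(X | theta): sum over objects of the log of the mixture density at x_i.
        total = 0.0
        for x in points:
            mixture = sum(pi * gaussian_density(x, mu, sigma)
                          for pi, mu, sigma in zip(pis, mus, sigmas))
            total += np.log(mixture)
        return total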
EM (Cont.)
Estimate (E-step): if we knew the value of \theta, we could compute the expected values of the hidden structure of the model.
Maximize (M-step): if we knew the expected values of the hidden structure of the model, we could compute the maximum likelihood value of \theta.
EM (Cont.)

In the M-step, the parameters are re-estimated from the expected assignments h_{ij}, the posterior probability that object x_i was generated by cluster j (a code sketch follows the formulas):

\mu_j = \frac{\sum_{i=1}^{n} h_{ij} x_i}{\sum_{i=1}^{n} h_{ij}}

\Sigma_j = \frac{\sum_{i=1}^{n} h_{ij} (x_i - \mu_j)(x_i - \mu_j)^T}{\sum_{i=1}^{n} h_{ij}}

\pi_j = \frac{\sum_{i=1}^{n} h_{ij}}{n}
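One full EM iteration as a Python sketch (assumes numpy and reuses gaussian_density from above; the function name, in-place updates, and parameter layout are illustrative assumptions, not the slides' notation):

    import numpy as np

    def em_step(points, mus, sigmas, pis):
        # points: (n, m) data; mus: (k, m); sigmas: (k, m, m); pis: (k,) weights.
        n, k = len(points), len(pis)
        # E-step: h_ij = posterior probability that x_i was generated by cluster j.
        h = np.empty((n, k))
        for j in range(k):
            h[:, j] = pis[j] * np.array(
                [gaussian_density(x, mus[j], sigmas[j]) for x in points])
        h /= h.sum(axis=1, keepdims=True)
        # M-step: re-estimate the parameters using the update formulas above.
        for j in range(k):
            w = h[:, j]                 # the weights h_ij for cluster j
            total = w.sum()
            mus[j] = (w[:, None] * points).sum(axis=0) / total          # mu_j
            diff = points - mus[j]
            sigmas[j] = (w[:, None, None] *
                         np.einsum('ni,nj->nij', diff, diff)).sum(axis=0) / total  # Sigma_j
            pis[j] = total / n                                          # pi_j
        return mus, sigmas, pis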
Hierarchical clustering:
Preferable for detailed data analysis.
Provides more information than flat clustering.
No single best algorithm; the choice depends on the application.
Less efficient than flat clustering (for n objects, an n x n similarity matrix is required).
Non-hierarchical (flat) clustering:
Preferable if efficiency is a consideration or data sets are very large.
K-means is the conceptually simplest method.
K-means assumes a simple Euclidean representation space and so cannot be used for many data sets.
In such cases, the EM algorithm is chosen.