
CS 904: Natural Language Processing

Clustering

L. Venkata Subramaniam April 4, 2002

Clustering

Partition a set of objects into groups or clusters.

Similar objects are placed in the same group and dissimilar objects in different groups.

Objects are described and clustered using a set of features and values.

Exploratory Data Analysis (EDA)


Develop a probabilistic model for a problem. Understand the characteristics of the data.

Generalization

Induce bins from the data.


Example: Monday, Tuesday, ..., Sunday. Even if there is no entry for Friday, we can generalize from the other days.

Learn natural relationships in data.

Group objects into clusters and generalize from what we know about some members of the cluster.

Clustering (vs. Classification)

Clustering does not require training data and is hence called unsupervised. Classification is supervised and requires a set of labeled training instances for each group. The result of clustering depends only on natural divisions in the data, not on any pre-existing categorization scheme as in classification.

Types of Clustering

Hierarchical

Bottom up: start with individual objects and iteratively group the most similar ones.

Top down: start with all objects and divide them into groups so as to maximize within-group similarity.

Examples: single-link, complete-link, group-average

Non-hierarchical

Examples: K-means, the EM algorithm

Hard (1:1) vs. soft (1:n, with degrees of membership)

Hierarchical Clustering

Bottom-up:

1. Start with a separate cluster for each object.
2. Determine the two most similar clusters and merge them into a new cluster.
3. Repeat on the new clusters that have been formed; terminate when one large cluster containing all objects has been formed.

Example of a similarity measure (Euclidean distance between objects $i$ and $j$):

$d_{ij} = \sqrt{\sum_{k=1}^{m} (x_{ik} - x_{jk})^2}$
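The bottom-up procedure above can be sketched in a few lines of Python. This is a minimal illustration rather than the lecture's own code: the function names and the use of the smallest pairwise distance (single-link style) as the merge criterion are assumptions made for the example.

```python
# A minimal sketch of bottom-up (agglomerative) clustering on plain lists of
# numeric feature vectors; names and merge criterion are illustrative.
import math

def euclidean(x, y):
    # Distance between two objects, as in the formula above.
    return math.sqrt(sum((xk - yk) ** 2 for xk, yk in zip(x, y)))

def cluster_distance(a, b):
    # Single-link style: distance of the two closest members.
    return min(euclidean(x, y) for x in a for y in b)

def agglomerate(objects):
    # 1. Start with a separate cluster for each object.
    clusters = [[obj] for obj in objects]
    merges = []
    # Terminate when one large cluster containing all objects has been formed.
    while len(clusters) > 1:
        # 2. Determine the two most similar (closest) clusters ...
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: cluster_distance(clusters[ij[0]], clusters[ij[1]]),
        )
        # ... and merge them into a new cluster.
        merged = clusters[i] + clusters[j]
        merges.append(merged)
        # 3. Repeat on the new clusters that have been formed.
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return merges

print(agglomerate([[0.0], [0.1], [5.0], [5.2]]))
```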

Hierarchical Clustering (Cont.)

Top-down:

1. Start from a cluster containing all objects.
2. Iteratively determine the cluster that is least coherent and split it.
3. Repeat until every cluster contains a single object.

Similarity Measures for Hierarchical Clustering

Single-link: similarity of the two most similar members

Complete-link: similarity of the two least similar members

Group-average: average similarity between members

Single-Link

Similarity function focuses on local coherence

Complete-Link

Similarity function focuses on global cluster quality

Group-Average

Instead of the greatest similarity between elements of two clusters (single-link) or the least similarity (complete-link), the merge criterion is the average similarity. A compromise between single-link and complete-link clustering.
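The three merge criteria translate directly into code. The sketch below is illustrative only: sim() stands in for any pairwise similarity function (higher means more similar) and is an assumption, not something defined in the lecture.

```python
# A small sketch contrasting the three merge criteria; sim() is an assumed
# pairwise similarity function, and the toy data are illustrative.
def single_link(a, b, sim):
    # Similarity of the two most similar members (local coherence).
    return max(sim(x, y) for x in a for y in b)

def complete_link(a, b, sim):
    # Similarity of the two least similar members (global cluster quality).
    return min(sim(x, y) for x in a for y in b)

def group_average(a, b, sim):
    # Average similarity between members: a compromise between the two.
    pairs = [(x, y) for x in a for y in b]
    return sum(sim(x, y) for x, y in pairs) / len(pairs)

# Example with a toy similarity: negative absolute difference of numbers.
sim = lambda x, y: -abs(x - y)
a, b = [1.0, 1.2], [4.0, 9.0]
print(single_link(a, b, sim), complete_link(a, b, sim), group_average(a, b, sim))
```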

An Application: Language Model

Many rare events do not have enough training data for accurate probabilistic modeling. Clustering is used to improve the language model by way of generalization.

Predictions for rare events become more accurate.

Non-Hierarchical Clustering

Start out with a partition based on randomly selected seeds and then refine this initial partition. Several passes of reallocating objects are needed, whereas hierarchical algorithms need only one pass; hierarchical clusterings too can be improved by reallocation. Stop based on some measure of goodness or cluster quality. The number of clusters, the size of clusters, and the stopping criterion are chosen heuristically; there is no optimal solution.

K-Means

Hard clustering algorithm. Defines clusters by the center of mass of their members.

1. Define the initial centers of the clusters randomly.
2. Assign each object to the cluster whose center is closest.
3. Recompute the center for each cluster.
4. Stop when the centers do not change.
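A minimal K-means sketch in Python (NumPy) following the four steps above; the choice of k, the iteration cap, the random seed, and the toy data are illustrative assumptions, not part of the lecture.

```python
# A minimal K-means sketch; parameters and data are illustrative.
import numpy as np

def kmeans(X, k, max_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Define the initial centers randomly (here: k randomly chosen objects).
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iters):
        # 2. Assign each object to the cluster whose center is closest.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 3. Recompute the center (mean) for each cluster.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # 4. Stop when the centers do not change.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])
print(kmeans(X, k=2))
```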


The EM Algorithm

Soft version of K-means clustering.

EM (Cont.)

Determine the most likely estimates for the parameters of the distribution. The idea is that the data are generated by several underlying causes.

Z : unobserved data set
$Z = \{\vec{z}_1, \ldots, \vec{z}_n\}$, $\vec{z}_i = (z_{i1}, z_{i2}, \ldots, z_{ik})$, where $z_{ij} = 1$ if object $i$ is a member of cluster $j$ and 0 otherwise.

X : observed data set (the data to be clustered)
$X = \{\vec{x}_1, \ldots, \vec{x}_n\}$, $\vec{x}_i = (x_{i1}, x_{i2}, \ldots, x_{im})$

Estimate the model that generated this data.

EM (Cont.)
We assume that the data are generated by $k$ Gaussians ($k$ clusters). Each Gaussian, with mean $\mu_j$ and covariance matrix $\Sigma_j$, is given by:

$n(x; \mu_j, \Sigma_j) = \frac{1}{\sqrt{(2\pi)^m \, |\Sigma_j|}} \exp\left[-\tfrac{1}{2}(x - \mu_j)^T \Sigma_j^{-1} (x - \mu_j)\right]$
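For concreteness, the density above can be transcribed directly into NumPy. The function name gaussian_density and the toy inputs are illustrative, not from the lecture.

```python
# Direct transcription of the multivariate Gaussian density above.
import numpy as np

def gaussian_density(x, mu, Sigma):
    # n(x; mu_j, Sigma_j) for an m-dimensional point x.
    m = len(x)
    diff = x - mu
    norm = np.sqrt((2 * np.pi) ** m * np.linalg.det(Sigma))
    return np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / norm

# Example: density of a point under a standard 2-D Gaussian.
print(gaussian_density(np.array([1.0, 2.0]), mu=np.zeros(2), Sigma=np.eye(2)))
```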

EM (Cont.)

We find the maximum likelihood model of the form:

$P(x_i) = \sum_{j=1}^{k} \pi_j \, n_j(x_i; \mu_j, \Sigma_j)$

where $\pi_j$ is the weight for each Gaussian.

EM (Cont.)

The parameters of the Gaussian mixture are $\Theta = (\theta_1, \ldots, \theta_k)^T$, where each $\theta_j = (\mu_j, \Sigma_j, \pi_j)$. They are found by maximizing the log likelihood:

$l(X \mid \Theta) = \sum_{i=1}^{n} \log P(x_i) = \sum_{i=1}^{n} \log \sum_{j=1}^{k} \pi_j \, n_j(x_i; \mu_j, \Sigma_j)$

EM (Cont.)

EM algorithm is an iterative solution to the following circular statements:

Estimate: if we knew the value of $\Theta$, we could compute the expected values of the hidden structure of the model. Maximize: if we knew the expected values of the hidden structure of the model, we could compute the maximum likelihood value of $\Theta$.

EM (Cont.)

Expectation step (E-step):


$h_{ij} = E(z_{ij} \mid x_i; \Theta) = \frac{P(x_i \mid n_j; \Theta)}{\sum_{l=1}^{k} P(x_i \mid n_l; \Theta)}$

Maximization step (M-step):


$\mu_j = \frac{\sum_{i=1}^{n} h_{ij} \, x_i}{\sum_{i=1}^{n} h_{ij}}$

$\Sigma_j = \frac{\sum_{i=1}^{n} h_{ij} \, (x_i - \mu_j)(x_i - \mu_j)^T}{\sum_{i=1}^{n} h_{ij}}$

$\pi_j = \frac{\sum_{i=1}^{n} h_{ij}}{n}$
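Putting the E-step and M-step together, one EM iteration for a Gaussian mixture might look like the sketch below. It uses SciPy's multivariate normal density; the E-step here also weights each component by $\pi_j$ (the standard mixture formulation), and the small ridge added to each covariance, the toy data, and the initialisation are assumptions made for numerical stability and illustration, not part of the lecture.

```python
# A compact, self-contained sketch of one EM iteration for a Gaussian mixture.
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, mus, Sigmas, pis):
    n, k = len(X), len(pis)
    # E-step: h_ij proportional to pi_j * n_j(x_i; mu_j, Sigma_j), normalised over j.
    h = np.column_stack([
        pis[j] * multivariate_normal.pdf(X, mean=mus[j], cov=Sigmas[j])
        for j in range(k)
    ])
    h /= h.sum(axis=1, keepdims=True)
    # M-step: re-estimate mu_j, Sigma_j and pi_j from the soft memberships h_ij.
    mus_new, Sigmas_new, pis_new = [], [], []
    for j in range(k):
        w = h[:, j]
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu
        Sigma = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(axis=0) / w.sum()
        Sigma += 1e-6 * np.eye(X.shape[1])  # small ridge for numerical stability (assumption)
        mus_new.append(mu)
        Sigmas_new.append(Sigma)
        pis_new.append(w.sum() / n)
    return mus_new, Sigmas_new, pis_new

# Toy run: six 2-D points, two mixture components.
X = np.array([[0.0, 0.0], [0.3, 0.1], [0.1, 0.4],
              [5.0, 5.0], [5.3, 4.8], [4.9, 5.4]])
mus = [X[0].copy(), X[3].copy()]
Sigmas = [np.eye(2), np.eye(2)]
pis = [0.5, 0.5]
for _ in range(10):
    mus, Sigmas, pis = em_step(X, mus, Sigmas, pis)
print(mus, pis)
```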

Properties of hierarchical and non-hierarchical clustering


Preferable for detailed data analysis Provides more information than flat No single best algorithm (dependent on application) Less efficient than flat ( for n objects, n X n similarity matrix required)

Preferable if efficiency is a consideration or data sets are very large K-means is the conceptually simplest method K-means assumes a simple Euclidean representation space and so cant be used for many data sets In such case, EM algorithm is chosen
