Discovered clusters of stocks with similar price fluctuations (industry group in parentheses):
1. (Technology1-DOWN) Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN
2. (Technology2-DOWN) Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN
3. (Financial-DOWN) Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN
4. (Oil-UP) Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP
Summarization
- Reduce the size of large data sets
- Example: clustering precipitation in Australia
What is not Cluster Analysis?
Simple segmentation
- Dividing students into different registration groups alphabetically, by last name
Results of a query
- Groupings are a result of an external specification
Graph partitioning
- Some mutual relevance and synergy, but the areas are not identical
[Figure: the same set of points grouped into two, four, or six clusters.]
Types of Clusterings
A clustering is a set of clusters.
There is an important distinction between hierarchical and partitional sets of clusters.
Partitional Clustering
- A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
Hierarchical Clustering
- A set of nested clusters organized as a hierarchical tree
Partitional Clustering
[Figure: original points and a partitional clustering of them.]
Hierarchical Clustering
[Figure: points p1-p4 shown as a traditional hierarchical clustering with its traditional dendrogram, and as a non-traditional hierarchical clustering with its non-traditional dendrogram.]
Types of Clusters
- Well-separated clusters
- Center-based clusters
- Contiguous clusters
- Density-based clusters
- Property or conceptual clusters
- Clusters described by an objective function
[Figures: 3 well-separated clusters; 4 center-based clusters; 8 contiguous clusters; 6 density-based clusters; 2 overlapping circles (conceptual clusters).]
A variation of the global objective function approach is to fit the data to a parameterized model.
- Parameters for the model are determined from the data.
- Mixture models assume that the data is a "mixture" of a number of statistical distributions.
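As an illustration (not part of the original slides), a minimal mixture-model fit, assuming scikit-learn's GaussianMixture is available; the synthetic data and the component count are arbitrary choices:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data drawn from two Gaussians (illustrative only).
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, size=(100, 2)),
                  rng.normal(5, 1, size=(100, 2))])

# Fit a 2-component Gaussian mixture; EM estimates the parameters from the data.
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
print(gmm.means_)           # estimated component means
labels = gmm.predict(data)  # each point assigned to its most likely component
```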
Types of Clusters: Objective Function
Map the clustering problem to a different domain and solve a related problem in that domain:
- The proximity matrix defines a weighted graph, where the nodes are the points being clustered and the weighted edges represent the proximities between points.
- Clustering is equivalent to breaking the graph into connected components, one for each cluster.
- We want to minimize the edge weight between clusters and maximize the edge weight within clusters.
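A minimal sketch of this graph-based view, assuming SciPy; the points and the edge threshold eps are illustrative:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import pdist, squareform

# Four illustrative points forming two natural groups.
points = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], dtype=float)

# The proximity matrix (Euclidean distances here) defines a weighted graph.
dist = squareform(pdist(points))

# Keep only edges between nearby points; eps is an arbitrary threshold.
eps = 2.0
graph = csr_matrix((dist <= eps).astype(int))

# Breaking the graph into connected components yields one component per cluster.
n_clusters, labels = connected_components(graph, directed=False)
print(n_clusters, labels)  # 2 clusters: [0 0 1 1]
```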
Characteristics of the Input Data Are Important
Sparseness
- Dictates type of similarity
- Adds to efficiency
Attribute type
- Dictates type of similarity
Type of data
- Dictates type of similarity
- Other characteristics, e.g., autocorrelation
Clustering Algorithms
- K-means and its variants
- Hierarchical clustering
- Density-based clustering
K-means Clustering
- Partitional clustering approach
- Each cluster is associated with a centroid (center point)
- Each point is assigned to the cluster with the closest centroid
- Number of clusters, K, must be specified
- The basic algorithm is very simple (a sketch follows below)
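The basic algorithm referenced above can be sketched as follows; this is a minimal NumPy version, where the random seeding and the convergence test are illustrative choices rather than the textbook's exact pseudocode:

```python
import numpy as np

def kmeans(points, k, max_iters=100, seed=0):
    """Minimal K-means sketch: choose K initial centroids, then alternate
    assignment and update steps until the centroids stop changing."""
    rng = np.random.default_rng(seed)
    # Select K points as the initial centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iters):
        # Assign each point to the cluster with the closest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points
        # (keeping the old centroid if a cluster becomes empty).
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):  # centroids no longer change
            break
        centroids = new_centroids
    return labels, centroids
```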
K-means Clustering: Details
- The centroid is (typically) the mean of the points in the cluster.
- Closeness is measured by Euclidean distance, cosine similarity, correlation, etc.
- K-means will converge for the common similarity measures mentioned above.
- Most of the convergence happens in the first few iterations.
- Often the stopping condition is changed to "until relatively few points change clusters".
- Complexity is O(n * K * I * d), where n = number of points, K = number of clusters, I = number of iterations, and d = number of attributes.
Two Different K-means Clusterings
[Figure: the same original points clustered two ways, giving an optimal clustering and a sub-optimal clustering.]
Importance of Choosing Initial Centroids
[Figure: K-means iterations 1-6 on the same data, showing the centroids and cluster assignments converging.]
Evaluating K-means Clusters
The most common measure is the Sum of Squared Error (SSE). For each point, the error is the distance to the nearest centroid; to get SSE, we square these errors and sum them:
$$\mathrm{SSE} = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}^2(m_i, x)$$
where $x$ is a data point in cluster $C_i$ and $m_i$ is the representative point (centroid) of cluster $C_i$.
- Given two clusters, we can choose the one with the smallest error.
- One easy way to reduce SSE is to increase K, the number of clusters; even so, a good clustering with smaller K can have a lower SSE than a poor clustering with higher K.
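A short sketch of computing this SSE for a given clustering (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def sse(points, labels, centroids):
    """SSE = sum over clusters i of sum over x in C_i of dist^2(m_i, x)."""
    diffs = points - centroids[labels]  # x - m_i for every point x
    return float((diffs ** 2).sum())
```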
Importance of Choosing Initial Centroids
[Figure: K-means iterations 1-5 for a different choice of initial centroids, converging to a different result.]
Problems with Selecting Initial Points
If there are K "real" clusters, the chance of randomly selecting one initial centroid from each cluster is small: with equal-sized clusters it is K!/K^K. For example, if K = 10, then the probability = 10!/10^10 = 0.00036.
Sometimes the initial centroids will readjust themselves in the right way, and sometimes they don't.
Consider an example of five pairs of clusters.
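A quick check of this arithmetic (the K!/K^K model assumes equal-sized clusters):

```python
import math

K = 10
# K! ways to hit each cluster exactly once, out of K^K equally likely assignments.
prob = math.factorial(K) / K**K
print(prob)  # ~0.00036288, matching the 0.00036 quoted above
```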
10 Clusters Example
[Figure: K-means iteration 1.]
Starting with two initial centroids in one cluster of each pair of clusters.
10 Clusters Example
[Figure: K-means iterations 1-4.]
Starting with two initial centroids in one cluster of each pair of clusters.
10 Clusters Example
[Figure: K-means iteration 1.]
Starting with some pairs of clusters having three initial centroids, while others have only one.
10 Clusters Example
[Figure: K-means iterations 1-4.]
Starting with some pairs of clusters having three initial centroids, while others have only one.
Solutions to Initial Centroids Problem
- Sample and use hierarchical clustering to determine initial centroids.
- Select more than K initial centroids and then select among these initial centroids the most widely separated ones.
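A sketch of the oversample-then-separate idea using a simple farthest-first sweep; the function name and the oversampling factor are illustrative assumptions:

```python
import numpy as np

def widely_separated_centroids(points, k, oversample=5, seed=0):
    """Pick k*oversample random candidates, then greedily keep the k
    candidates that are most widely separated (farthest-first)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=min(k * oversample, len(points)),
                     replace=False)
    candidates = points[idx]
    # Start from an arbitrary candidate, then repeatedly add the candidate
    # farthest from all centroids chosen so far.
    chosen = [candidates[0]]
    while len(chosen) < k:
        dists = np.min(
            [np.linalg.norm(candidates - c, axis=1) for c in chosen], axis=0)
        chosen.append(candidates[np.argmax(dists)])
    return np.array(chosen)
```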
Post-processing
- Eliminate small clusters that may represent outliers.
- Split "loose" clusters, i.e., clusters with relatively high SSE.
- Merge clusters that are close and that have relatively low SSE.
These steps can also be used during the clustering process, as in ISODATA.
Limitations of K-means
K-means has problems when clusters are of differing:
- Sizes
- Densities
- Non-globular shapes
Limitations of K-means: Differing Sizes
[Figure: original points vs. K-means (3 clusters).]
Limitations of K-means: Differing Density
[Figure: original points vs. K-means (3 clusters).]
Limitations of K-means: Non-globular Shapes
[Figure: original points vs. K-means (2 clusters).]
Overcoming K-means Limitations
[Figure: original points vs. K-means clusters.]
One solution is to use many clusters: K-means then finds parts of the natural clusters, which need to be put back together.
Overcoming K-means Limitations
[Figures: two more examples of using many clusters; original points vs. K-means clusters.]
Hierarchical Clustering
- Produces a set of nested clusters organized as a hierarchical tree
- Can be visualized as a dendrogram: a tree-like diagram that records the sequences of merges or splits
[Figure: a nested clustering of six points and the corresponding dendrogram with merge heights.]
Hierarchical Clustering
Two main types of hierarchical clustering:
Agglomerative:
- Start with the points as individual clusters
- At each step, merge the closest pair of clusters, until only one cluster (or k clusters) is left
Divisive:
- Start with one, all-inclusive cluster
- At each step, split a cluster, until each cluster contains a single point (or there are k clusters)
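As an illustration, a minimal agglomerative run using SciPy; the points and the choice of single linkage are arbitrary:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0], [9.0, 9.0]])

# Agglomerative clustering: start with each point as its own cluster and
# repeatedly merge the closest pair; 'single' corresponds to MIN linkage.
Z = linkage(points, method='single')

# Cut the resulting tree to obtain k = 2 flat clusters.
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)
```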
Starting Situation
Start with clusters of individual points and a proximity matrix.
[Figure: points p1…p12 and their proximity matrix.]
Intermediate Situation
After some merging steps, we have some clusters
[Figure: clusters C1-C5 and their proximity matrix.]
Intermediate Situation
We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
[Figure: clusters C1-C5 and their proximity matrix, with C2 and C5 highlighted for merging.]
After Merging
The question is: "How do we update the proximity matrix?"
[Figure: the proximity matrix after merging, with the entries for the new cluster C2 U C5 marked "?".]
How to Define Inter-Cluster Similarity
[Figure: two clusters of points p1…p5 and their proximity matrix, repeated for each similarity definition.]
- MIN
- MAX
- Group Average
- Distance Between Centroids
- Other methods driven by an objective function (Ward's Method uses squared error)
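A sketch of how these criteria can be computed directly from pairwise distances (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def inter_cluster_distance(A, B, method="min"):
    """Distance between clusters A and B (arrays of points) under the
    linkage criteria listed above."""
    pairwise = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    if method == "min":            # single link: two closest points
        return pairwise.min()
    if method == "max":            # complete link: two most distant points
        return pairwise.max()
    if method == "group_average":  # mean over all |A|*|B| pairs
        return pairwise.mean()
    if method == "centroid":       # distance between cluster centroids
        return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))
    raise ValueError(method)
```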
Cluster Similarity: MIN or Single Link
Similarity of two clusters is based on the two most similar (closest) points in the different clusters.
[Figure: nested clusters produced by MIN and the corresponding dendrogram.]
Strength of MIN
[Figure: original points vs. the two clusters found by MIN; single link can handle non-elliptical shapes.]
Limitations of MIN
[Figure: original points vs. the two clusters found by MIN; single link is sensitive to noise and outliers.]
Cluster Similarity: MAX or Complete Linkage
Similarity of two clusters is based on the two least similar (most distant) points in the different clusters.
- Determined by all pairs of points in the two clusters
[Figure: nested clusters produced by MAX and the corresponding dendrogram.]
Strength of MAX
[Figure: original points vs. the two clusters found by MAX; complete link is less susceptible to noise and outliers.]
Limitations of MAX
[Figure: original points vs. two clusters.]
- Tends to break large clusters
- Biased towards globular clusters
Cluster Similarity: Group Average
Proximity of two clusters is the average of pairwise proximity between points in the two clusters:
$$\mathrm{proximity}(\mathrm{Cluster}_i, \mathrm{Cluster}_j) = \frac{\sum_{p_i \in \mathrm{Cluster}_i,\; p_j \in \mathrm{Cluster}_j} \mathrm{proximity}(p_i, p_j)}{|\mathrm{Cluster}_i| \cdot |\mathrm{Cluster}_j|}$$
Need to use average connectivity for scalability, since total proximity favors large clusters.
[Figure: nested clusters produced by group average and the corresponding dendrogram.]
Limitations of group average:
- Biased towards globular clusters
Cluster Similarity: Ward's Method
Similarity of two clusters is based on the increase in squared error when the two clusters are merged.
- Less susceptible to noise and outliers
- Biased towards globular clusters
- Hierarchical analogue of K-means; can be used to initialize K-means
Hierarchical Clustering: Comparison
[Figure: nested clusters produced by MIN, MAX, group average, and Ward's method on the same data set.]
Hierarchical Clustering: Problems and Limitations
- Once a decision is made to combine two clusters, it cannot be undone.
- No objective function is directly minimized.
- Different schemes have problems with one or more of the following: sensitivity to noise and outliers; difficulty handling different-sized clusters and convex shapes; breaking large clusters.
DBSCAN Algorithm
- Eliminate noise points
- Perform clustering on the remaining points
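A minimal usage sketch, assuming scikit-learn's DBSCAN, where eps plays the role of Eps and min_samples the role of MinPts (the parameters named later on these slides); the data is synthetic:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense blobs plus a few scattered noise points (illustrative data).
data = np.vstack([rng.normal(0, 0.3, size=(50, 2)),
                  rng.normal(4, 0.3, size=(50, 2)),
                  rng.uniform(-2, 6, size=(10, 2))])

labels = DBSCAN(eps=0.5, min_samples=4).fit_predict(data)

# Noise points get the label -1 ("eliminate noise points"); the remaining
# points are grouped into density-based clusters.
print(set(labels))
```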
DBSCAN: Core, Border, and Noise Points
[Figure: original points and the core, border, and noise points identified by DBSCAN.]
[Figures: original points and the DBSCAN clusters found with (MinPts=4, Eps=9.75) and with (MinPts=4, Eps=9.92).]
Cluster Validity
For supervised classification, we have a variety of measures to evaluate how good our model is: accuracy, precision, recall.
For cluster analysis, the analogous question is: how do we evaluate the "goodness" of the resulting clusters?
But clusters are in the eye of the beholder! Then why do we want to evaluate them?
- To avoid finding patterns in noise
- To compare clustering algorithms
- To compare two sets of clusters
- To compare two clusters
Clusters Found in Random Data
[Figure: the same random points "clustered" by DBSCAN, K-means, and complete link.]
1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Comparing the results of a cluster analysis to externally known results, e.g., externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.
4. Comparing the results of two different sets of cluster analyses to determine which is better.
5. Determining the correct number of clusters.
For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.
Internal Index: used to measure the goodness of a clustering structure without respect to external information.
- Example: Sum of Squared Error (SSE)
- High correlation indicates that points that belong to the same cluster are close to each other.
- Not a good measure for some density- or contiguity-based clusters.
Measuring Cluster Validity Via Correlation
Correlation of incidence and proximity matrices for the K-means clusterings of two data sets:
[Figure: a data set with well-separated clusters, Corr = -0.9235, and a data set of random points, Corr = -0.5810.]
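A sketch of the underlying computation, under the usual definitions (incidence entry = 1 when two points share a cluster; proximity = Euclidean distance here); the function name is illustrative:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def incidence_proximity_corr(points, labels):
    """Correlation between the proximity (distance) matrix and the
    incidence matrix over all point pairs; labels is a NumPy array."""
    n = len(points)
    iu = np.triu_indices(n, k=1)  # each pair counted once
    prox = squareform(pdist(points))[iu]
    same = (labels[:, None] == labels[None, :])[iu].astype(float)
    # With distances as proximities, good clusterings give a strongly
    # negative correlation (same-cluster pairs have small distances).
    return np.corrcoef(prox, same)[0, 1]
```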
Using Similarity Matrix for Cluster Validation
[Figure: similarity matrix sorted by cluster labels; well-separated clusters show up as crisp diagonal blocks.]
Using Similarity Matrix for Cluster Validation
Clusters in random data are not so crisp.
[Figures: sorted similarity matrices for the random-data clusterings produced by DBSCAN, K-means, and complete link.]
[Figure: sorted similarity matrix for a DBSCAN clustering of a larger data set.]
Internal Measures: SSE
- SSE is good for comparing two clusterings or two clusters (average SSE).
- Can also be used to estimate the number of clusters (see the sketch below).
[Figure: a data set with ten clusters and its SSE curve for K = 2 to 30.]
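A sketch of estimating the number of clusters from the SSE curve, assuming scikit-learn (KMeans exposes the clustering's SSE as inertia_); the data is synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three well-separated blobs (illustrative data).
data = np.vstack([rng.normal(c, 0.5, size=(40, 2)) for c in (0, 4, 8)])

# SSE (inertia) for a range of K; a pronounced "knee" in these values
# suggests the natural number of clusters.
for k in range(2, 8):
    sse = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data).inertia_
    print(k, round(sse, 1))
```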
For comparing the results of two different sets of cluster analyses, a framework is less necessary. However, there is the question of whether the difference between two index values is significant.
Statistical Framework for SSE
Example: compare an SSE of 0.005 against three clusters in random data.
The histogram below shows the SSE of three clusters in 500 sets of random data points, each of size 100, distributed over the range 0.2 to 0.8 for x and y values.
[Figure: the clustered data set and a histogram of the 500 random-data SSE values, which range from roughly 0.016 to 0.034.]
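A sketch reproducing this style of experiment, assuming scikit-learn; the trial count and data ranges follow the slide's description:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sses = []
for _ in range(500):
    # 100 random points uniform over [0.2, 0.8] in x and y.
    data = rng.uniform(0.2, 0.8, size=(100, 2))
    sses.append(KMeans(n_clusters=3, n_init=10, random_state=0).fit(data).inertia_)

# If the SSE observed on real data (e.g., 0.005) lies far below this
# distribution, the clustering is unlikely to be an artifact of noise.
print(min(sses), max(sses))
```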
Statistical Framework for Correlation
[Figure: the two data sets from before; Corr = -0.9235 for the well-separated clusters and Corr = -0.5810 for the random points.]
Internal Measures: Cohesion and Separation
- Cluster Cohesion: measures how closely related the objects in a cluster are.
- Cluster Separation: measures how distinct or well-separated a cluster is from other clusters.
Example: Squared Error
Cohesion is measured by the within-cluster sum of squares (SSE):
$$\mathrm{WSS} = \sum_{i} \sum_{x \in C_i} (x - m_i)^2$$
Separation is measured by the between-cluster sum of squares:
$$\mathrm{BSS} = \sum_{i} |C_i| \, (m - m_i)^2$$
where $|C_i|$ is the size of cluster $i$, $m_i$ is its centroid, and $m$ is the overall mean.
[Worked example: the same points evaluated as K = 1 cluster (overall mean m) and as K = 2 clusters (centroids m1 and m2).]
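A worked check of the formulas on a tiny set of points (the values are assumed for illustration); note that BSS + WSS comes out the same for both clusterings:

```python
import numpy as np

points = np.array([1.0, 2.0, 4.0, 5.0])
m = points.mean()  # overall mean = 3

def wss_bss(clusters):
    """WSS and BSS for a list of 1-D clusters, per the formulas above."""
    wss = sum(((c - c.mean()) ** 2).sum() for c in clusters)
    bss = sum(len(c) * (m - c.mean()) ** 2 for c in clusters)
    return wss, bss

# K = 1: one all-inclusive cluster.
print(wss_bss([points]))                  # WSS = 10, BSS = 0 -> total 10
# K = 2: clusters {1, 2} and {4, 5}.
print(wss_bss([points[:2], points[2:]]))  # WSS = 1,  BSS = 9 -> total 10
```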
[Figure: a graph-based view of cohesion (links within a cluster) and separation (links between clusters).]
Final Comment on Cluster Validity
"The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage."
(Algorithms for Clustering Data, Jain and Dubes)