Association Rules
A frequent pattern is a pattern (a set of items, subsequences,
subgraphs, etc.) that occurs frequently in a data set.
Motivation: Finding inherent regularities (associations) in data.
Forms the foundation for many essential data mining tasks:
Association, correlation, and causality analysis
Classification: associative classification
Cluster analysis: frequent pattern-based clustering
First proposed by [AIS93] in the context of frequent itemsets and
association rule mining for market basket analysis.
Association Rules
An itemset (I) is a set of one or more items.
A pattern P′ is a subpattern of P if P′ ⊆ P.
A rule R is A ⇒ B, where A and B are disjoint patterns.
Support(A ⇒ B) = P(A ∪ B)
Confidence(A ⇒ B) = P(B | A), i.e., the posterior probability of B given A.
Association Rules
Framework: find all the rules that satisfy both a minimum support
(min_sup) and a minimum confidence (min_conf) threshold.
Association rule mining:
1. Find all frequent patterns (with support ≥ min_sup).
2. Generate strong rules from the frequent patterns.
The second step is straightforward:
For each frequent pattern p, generate all non-empty subsets.
For every non-empty subset s, output the rule s ⇒ (p − s) if
conf = sup(p)/sup(s) ≥ min_conf (see the sketch below).
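A minimal Python sketch of this rule-generation step (the function and argument names are illustrative, not from the slides). It assumes `freq` maps each frequent itemset (as a frozenset) to its support count; by the Apriori property, the output of a frequent-pattern miner contains every subset of each frequent pattern, so `freq[s]` is always defined:

```python
from itertools import combinations

def strong_rules(freq, min_conf):
    """Generate rules s => (p - s) with conf = sup(p)/sup(s) >= min_conf."""
    rules = []
    for p, sup_p in freq.items():
        if len(p) < 2:
            continue
        for r in range(1, len(p)):
            for s in map(frozenset, combinations(p, r)):
                conf = sup_p / freq[s]          # conf = sup(p) / sup(s)
                if conf >= min_conf:
                    rules.append((set(s), set(p - s), conf))
    return rules
```

It can be called directly on the output of a frequent-itemset miner such as the Apriori sketch given later in these notes.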
Association Rules
Example for market basket data
Items={A,B,C,D,E,F}
Transaction-id   Items bought
10               A, B, D
20               A, C, D
30               A, D, E
40               B, E, F
50               B, C, D, E, F
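For example, from this table the rule A ⇒ D has sup(A ⇒ D) = 3/5 = 60% (transactions 10, 20, and 30 contain both A and D) and conf(A ⇒ D) = sup({A, D}) / sup({A}) = 3/3 = 100%.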
Association Rules
Example for relational data
Rule: Smoke = T ∧ Family history = T ⇒ Lung cancer = T
sup(Smoke = T ∧ Family history = T ⇒ Lung cancer = T) = 60/200 = 30%
Apriori
The Apriori property:
Any subset of a frequent pattern must be frequent.
If {beer, chips, nuts} is frequent, so is {beer, chips}, i.e., every
transaction having {beer, chips, nuts} also contains {beer, chips}.
Apriori pruning principle: if any pattern is infrequent, its
supersets need not be generated or tested!
Method (level-wise search):
Initially, scan the DB once to get the frequent 1-itemsets.
For each level k:
Generate length-(k+1) candidates from the length-k frequent patterns.
Scan the DB and remove the infrequent candidates.
Terminate when no candidate set can be generated (a sketch of this loop follows).
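The following self-contained Python sketch of the level-wise loop is illustrative only (function and variable names are mine). For simplicity it generates candidates by enumerating (k+1)-combinations of the frequent items and then applies the Apriori pruning check; this yields the same candidate set as the prefix self-join described later, just less efficiently:

```python
from itertools import combinations
from collections import defaultdict

def apriori(transactions, min_sup):
    """Level-wise frequent-itemset mining (illustrative sketch)."""
    transactions = [frozenset(t) for t in transactions]

    # 1st scan: frequent 1-itemsets
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[frozenset([item])] += 1
    freq = {s: c for s, c in counts.items() if c >= min_sup}
    all_freq = dict(freq)

    k = 1
    while freq:
        # generate length-(k+1) candidates from the length-k frequent itemsets
        items = sorted({i for s in freq for i in s})
        prev = set(freq)
        candidates = set()
        for combo in combinations(items, k + 1):
            # Apriori pruning: every k-subset must itself be frequent
            if all(frozenset(sub) in prev for sub in combinations(combo, k)):
                candidates.add(frozenset(combo))

        # scan the DB and drop infrequent candidates
        counts = defaultdict(int)
        for t in transactions:
            for c in candidates:
                if c <= t:
                    counts[c] += 1
        freq = {s: n for s, n in counts.items() if n >= min_sup}
        all_freq.update(freq)
        k += 1
    return all_freq   # itemset -> support count

# The toy database used on the next slide (min_sup = 2)
db = [{'A', 'C', 'D'}, {'B', 'C', 'E'}, {'A', 'B', 'C', 'E'}, {'B', 'E'}]
for itemset, sup in sorted(apriori(db, 2).items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(sorted(itemset), sup)
```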
Apriori
min_sup = 2
Database:
Tid   Items
10    A, C, D
20    B, C, E
30    A, B, C, E
40    B, E

1st scan → C1:
Itemset   sup
{A}       2
{B}       3
{C}       3
{D}       1
{E}       3

L1 (frequent 1-itemsets):
Itemset   sup
{A}       2
{B}       3
{C}       3
{E}       3

C2 (generated from L1), 2nd scan:
Itemset   sup
{A, B}    1
{A, C}    2
{A, E}    1
{B, C}    2
{B, E}    3
{C, E}    2

L2 (frequent 2-itemsets):
Itemset   sup
{A, C}    2
{B, C}    2
{B, E}    3
{C, E}    2

C3 (generated from L2), 3rd scan:
Itemset      sup
{B, C, E}    2

L3 (frequent 3-itemsets):
Itemset      sup
{B, C, E}    2
Apriori
Candidate generation: assume we are generating the length-(k+1)
candidates at level k.
Step 1: self-joining: join two frequent k-patterns if they share the same
(k−1)-prefix.
Step 2: pruning: remove a candidate if it contains any infrequent k-pattern.
Example: L3 = {abc, abd, acd, ace, bcd}
Self-joining: L3 * L3
abc and abd → abcd
acd and ace → acde
Pruning:
acde is removed because ade is not in L3
C4 = {abcd} (see the small demonstration below)
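A short self-contained demonstration of this join-and-prune step on the example above (names are mine; itemsets are kept as sorted tuples so the (k−1)-prefix test is a simple slice comparison):

```python
from itertools import combinations

def gen_candidates(freq_k):
    """Self-join frequent k-itemsets that share a (k-1)-prefix, then prune."""
    freq_k = sorted(set(freq_k))
    freq_set = set(freq_k)
    k = len(freq_k[0])
    cands = set()
    for i in range(len(freq_k)):
        for j in range(i + 1, len(freq_k)):
            a, b = freq_k[i], freq_k[j]
            if a[:k - 1] == b[:k - 1]:                      # same (k-1)-prefix
                cand = tuple(sorted(set(a) | set(b)))
                # Apriori pruning: every k-subset must itself be frequent
                if all(sub in freq_set for sub in combinations(cand, k)):
                    cands.add(cand)
    return cands

L3 = [tuple("abc"), tuple("abd"), tuple("acd"), tuple("ace"), tuple("bcd")]
print(gen_candidates(L3))    # {('a', 'b', 'c', 'd')}  -- acde is pruned
```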
Apriori
The bottleneck of Apriori:
Huge candidate sets:
To discover a frequent 100-pattern, e.g., {a1, a2, …, a100}, one
needs to generate on the order of 2^100 ≈ 10^30 candidates!
Multiple scans of the database:
Needs (n + 1) scans, where n is the length of the longest pattern.
FP-growth
The FP-growth algorithm: mining frequent patterns without candidate
generation [Han, Pei & Yin 2000]
Compress a large database into a compact Frequent-Pattern tree (FP-tree) structure
FP-growth
Constructing the FP-tree
TID   Items bought                (ordered) frequent items
100   {f, a, c, d, g, i, m, p}    {f, c, a, m, p}
200   {a, b, c, f, l, m, o}       {f, c, a, b, m}
300   {b, f, h, j, o}             {f, b}
400   {b, c, k, s, p}             {c, b, p}
500   {a, f, c, e, l, p, m, n}    {f, c, a, m, p}
Steps (min_sup = 3):
1. Scan the DB once and find the frequent 1-itemsets.
2. Sort the frequent items in frequency-descending order (the f-list: f, c, a, b, m, p) and order each transaction's frequent items accordingly.
3. Scan the DB again and insert each transaction's ordered frequent items into the FP-tree.
Header Table: f:4, c:4, a:3, b:3, m:3, p:3 (with node-links into the tree)

Resulting FP-tree (indentation shows parent → child):
{} (root)
  f:4
    c:3
      a:3
        m:2
          p:2
        b:1
          m:1
    b:1
  c:1
    b:1
      p:1
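A minimal, self-contained Python sketch of this construction (class and function names are my own). It performs the two scans described above and threads the node-links for the header table; ties in item frequency are broken arbitrarily, so the resulting tree may differ slightly in shape from the figure:

```python
from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}     # item -> child Node
        self.next = None       # node-link to the next node carrying the same item

def build_fp_tree(transactions, min_sup):
    """Build an FP-tree and its header table (item -> first node in the node-link chain)."""
    # 1st scan: count items and keep only the frequent ones
    freq = defaultdict(int)
    for t in transactions:
        for item in t:
            freq[item] += 1
    freq = {i: c for i, c in freq.items() if c >= min_sup}

    # the f-list: frequent items in frequency-descending order
    rank = {item: r for r, item in enumerate(sorted(freq, key=freq.get, reverse=True))}

    root, header = Node(None, None), {}
    # 2nd scan: insert each transaction's ordered frequent items into the tree
    for t in transactions:
        ordered = sorted((i for i in t if i in freq), key=rank.get)
        node = root
        for item in ordered:
            if item not in node.children:
                child = Node(item, node)
                node.children[item] = child
                child.next, header[item] = header.get(item), child
            node = node.children[item]
            node.count += 1
    return root, header

# the five transactions of the example above
db = [set('facdgimp'), set('abcflmo'), set('bfhjo'), set('bcksp'), set('afcelpmn')]
root, header = build_fp_tree(db, min_sup=3)
```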
FP-growth
Method (divide-and-conquer)
For each item (processed from the bottom of the f-list upward), construct its
conditional pattern base and then its conditional FP-tree, and recursively
mine the conditional FP-tree.
FP-growth
Step 1: From FP-tree to Conditional Pattern Base
Starting at the frequent header table in the FP-tree
Traverse the FP-tree by following the link of each frequent item
Accumulate all the transformed prefix paths of that item to form its
conditional pattern base
Header Table: f:4, c:4, a:3, b:3, m:3, p:3

(FP-tree as constructed above)

Conditional pattern bases:
item   conditional pattern base
c      f:3
a      fc:3
b      fca:1, f:1, c:1
m      fca:2, fcab:1
p      fcam:2, cb:1

(A small sketch for collecting these bases follows.)
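Continuing the build_fp_tree sketch above, one illustrative way to collect a conditional pattern base is to follow the header table's node-links and walk each node's parents up to the root:

```python
def conditional_pattern_base(item, header):
    """Follow the node-links of `item` and collect (prefix path, count) pairs.

    Assumes the Node/header structures from the build_fp_tree sketch above."""
    base = []
    node = header.get(item)
    while node is not None:
        path, p = [], node.parent
        while p is not None and p.item is not None:
            path.append(p.item)
            p = p.parent
        if path:
            base.append((path[::-1], node.count))
        node = node.next
    return base

# with the slides' item order this gives [(['f','c','a'], 2), (['f','c','a','b'], 1)] for 'm'
```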
FP-growth
Step 2: Construct Conditional FP-tree
Start from the end of the f-list.
For each pattern base:
Accumulate the count for each item in the base.
Construct an FP-tree over the frequent items of the pattern base.
Example: Here we are mining for pattern m, min_sup=3
m-conditional pattern base: fca:2, fcab:1

m-conditional FP-tree (a single path):
{} (root)
  f:3
    c:3
      a:3
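Accumulating the counts over this base gives f:3, c:3, a:3, b:1, so b is dropped (below min_sup = 3) and the m-conditional FP-tree is the single path f:3 – c:3 – a:3. Mining this path yields all frequent patterns that contain m: m, fm, cm, am, fcm, fam, cam, and fcam, each with support 3.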
FP-growth
FP-growth is faster than Apriori because:
No candidate generation, no candidate test
Uses a compact data structure
Eliminates repeated database scans
The basic operations are counting and FP-tree building (no pattern
matching)
Disadvantage: FP-tree may not fit in main memory!
FP-growth
FP-growth vs. Apriori: Scalability With the Support Threshold
[Figure: run time (sec., 0–100) vs. support threshold (%, 0–2.5) on dataset D1, comparing FP-growth and Apriori runtimes; FP-growth's runtime grows far more slowly than Apriori's as the support threshold decreases.]
Correlation analysis
Association rule mining often generates a huge number of
rules, but a majority of them are either redundant or do not
reflect the true correlation relationships among the data objects.
Some strong association rules (based on support and
confidence) can be misleading.
Correlation analysis can reveal which strong association rules
are actually interesting and useful.
Correlation analysis
play basketball ⇒ eat cereal [support = 40%, confidence = 66.7%] is misleading:
the overall percentage of students eating cereal is 75% > 66.7%.
play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although it has lower support and confidence.

              Basketball    Not basketball   Sum (row)
Cereal        2000 (40%)    1750 (35%)       3750 (75%)
Not cereal    1000 (20%)    250 (5%)         1250 (25%)
Sum (col.)    3000 (60%)    2000 (40%)       5000 (100%)
Correlation analysis
The lift score:
lift(A ⇒ B) = P(A ∪ B) / (P(A) P(B)) = P(B | A) / P(B)

              Basketball    Not basketball   Sum (row)
Cereal        2000 (40%)    1750 (35%)       3750 (75%)
Not cereal    1000 (20%)    250 (5%)         1250 (25%)
Sum (col.)    3000 (60%)    2000 (40%)       5000 (100%)

lift(basketball ⇒ cereal) = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
lift(basketball ⇒ not cereal) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33
Correlation analysis
The χ² test
Lift quantifies the correlation, but it does not tell us whether
the value is statistically significant.
The Pearson chi-square test is the most common test for the significance of the
relationship between categorical variables:
χ² = Σ_r (O(r) − E[r])² / E[r], summed over the cells r of the contingency table.
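As a small self-contained check of both measures on the basketball/cereal table above (plain Python; the variable names are mine, and the values are rounded):

```python
# Observed counts from the basketball / cereal contingency table above
obs = {('basketball', 'cereal'): 2000, ('basketball', 'not cereal'): 1000,
       ('not basketball', 'cereal'): 1750, ('not basketball', 'not cereal'): 250}
n = sum(obs.values())                                  # 5000

p_basket = (2000 + 1000) / n                           # P(basketball) = 0.60
p_cereal = (2000 + 1750) / n                           # P(cereal)     = 0.75
lift = (2000 / n) / (p_basket * p_cereal)              # ~0.89 < 1: negatively correlated

# Pearson chi-square: sum over cells of (O - E)^2 / E
chi2 = 0.0
for (b, c), o in obs.items():
    row = sum(v for (bb, _), v in obs.items() if bb == b)
    col = sum(v for (_, cc), v in obs.items() if cc == c)
    e = row * col / n                                  # expected count under independence
    chi2 += (o - e) ** 2 / e
print(round(lift, 2), round(chi2, 1))                  # 0.89 277.8
```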
Correlation analysis
Disadvantages
Problem: each rule is evaluated individually!
Example: Pr(CHD) = 30%
R2: Family history = yes ∧ Race = Caucasian ⇒ CHD [sup = 20%, conf = 55%]
Judged against the 30% prior alone, R2 looks interesting!
R1: Family history = yes ⇒ CHD [sup = 50%, conf = 60%]
Given the more general rule R1, which has higher confidence, R2 is not interesting!
We should consider the nested structure of the rules!
Constraint-based Mining
Finding all the patterns in a database autonomously? Unrealistic!
The patterns could be too many but not focused!
Data mining should be an interactive process.
Constraint-based Mining
Anti-monotonic constraints are very important because they can
greatly speed up the mining process.
Anti-monotonicity exhibits an Apriori-like property:
when a pattern violates the constraint, so does every superset of it.
sum(S.Price) ≤ v is anti-monotone (with non-negative prices, adding items can only increase the sum, so once violated it stays violated).
sum(S.Price) ≥ v is not anti-monotone.
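A tiny illustrative check (the names and toy prices are mine) of how such a constraint prunes the level-wise search:

```python
def antimonotone_ok(itemset, price, v):
    """sum(S.Price) <= v is anti-monotone: with non-negative prices, adding items
    can only increase the sum, so once S violates it, every superset does too."""
    return sum(price[i] for i in itemset) <= v

price = {'beer': 6, 'chips': 3, 'nuts': 4}                      # toy prices (illustrative)
print(antimonotone_ok({'beer', 'chips'}, price, 10))            # True  -> keep extending
print(antimonotone_ok({'beer', 'chips', 'nuts'}, price, 10))    # False -> prune this branch
```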
Associative classification
Associative classification: build a rule-based classifier from
association rules.
This approach overcomes some limitations of greedy methods (e.g.
decision trees and sequential-covering algorithms), which consider only
one attribute at a time; associative classifiers have been found to be more accurate than C4.5.
Build class association rules:
Association rules in general can have any number of items in the
consequent.
Class association rules set the consequent to be the class label.
Example: Age = youth ∧ Credit = OK ⇒ buys_computer = yes
[sup=20%, conf=90%]
Associative classification
CBA
CBA: Classification-Based Association [Liu et al, 1998]
Use the Apriori algorithm to mine the class association rules.
Classification:
Rank the rules according to their confidence and support.
Classify a new example x using the first rule that x satisfies (a sketch follows).
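A minimal sketch of this classification step (not the exact CBA procedure; the rule representation and names are mine), assuming each rule is a tuple of antecedent item set, class label, confidence, and support:

```python
def cba_classify(rules, x, default_class):
    """Classify x with the first (highest-ranked) rule whose antecedent holds in x."""
    # rank rules by confidence, then support (higher first)
    ranked = sorted(rules, key=lambda r: (r[2], r[3]), reverse=True)
    for antecedent, label, conf, sup in ranked:
        if antecedent <= x:                     # the rule's antecedent is satisfied by x
            return label
    return default_class

rules = [(frozenset({'Age=youth', 'Credit=OK'}), 'buys_computer=yes', 0.9, 0.2),
         (frozenset({'Age=senior'}), 'buys_computer=no', 0.8, 0.3)]
print(cba_classify(rules, {'Age=youth', 'Credit=OK', 'Income=high'}, 'buys_computer=no'))
```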
Associative classification
CMAR
CMAR (Classification based on Multiple Association rules) [Li et al 2001]
Use the FP-growth algorithm to mine the class association rules.
Employs the CR_tree structure (prefix tree for indexing the rules) to
efficiently store and retrieve rules.
Apply rule pruning whenever a rule is inserted in the tree:
If R1 is more general than R2 and conf(R1)>conf(R2): R2 is pruned
All rules for which the antecedent and the class are not positively
correlated (χ² test) are also pruned.
CMAR considers multiple rules when classifying an instance and uses a
weighted χ² measure to find the strongest class.
CMAR is slightly more accurate and more efficient than CBA.
Associative classification
Harmony
Drawback of CBA and CMAR is that the number of rules can be
extremely large.
Harmony [Wang et al, 2005] adopts an instance-centric approach:
Find the highest confidence rule for each training instance.
Build the classification model from the union of these rules.
Use the FP-growth algorithm to mine the rules.
Efficient mining:
Naïve way: mine all frequent patterns and then extract the
highest-confidence rule for each instance.
Harmony instead employs efficient pruning methods to accelerate the
rule discovery; the pruning methods are incorporated within the
FP-growth algorithm. (A small sketch of the instance-centric selection follows.)
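A small illustrative sketch of the instance-centric idea only (this is not Harmony's actual mining or pruning; the rule representation is mine, with frozenset antecedents so rules are hashable):

```python
def instance_centric_rules(rules, training_data):
    """For each training instance, keep its highest-confidence covering rule;
    the classifier is built from the union of these rules."""
    model = set()
    for x, y in training_data:
        covering = [r for r in rules if r[0] <= x and r[1] == y]
        if covering:
            model.add(max(covering, key=lambda r: r[2]))   # highest confidence
    return model

rules = [(frozenset({'Age=youth'}), 'yes', 0.7),
         (frozenset({'Age=youth', 'Credit=OK'}), 'yes', 0.9)]
train = [({'Age=youth', 'Credit=OK'}, 'yes'), ({'Age=youth'}, 'yes')]
print(instance_centric_rules(rules, train))
```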
Frequent patterns can also be used as features: apply a standard classifier (e.g. SVM or C4.5) in the new feature space.
[Figure: after choosing the solid line, the dashed line makes the groups purer (a choice that cannot be made by batch-mode selection).]