
Lecture 2 Agenda

• Chapters 3 and 4 of J&M
  – Chapter 3 up to 3.11, non-advanced parts of Chapter 4
  – Porter stemmer
  – Edit distance
  – N-grams
Important notes

• Make sure you read REGULARLY
• Lecture slides are for reference only
  – Reading the J&M book is MANDATORY
  – The Nugues book gives a very useful alternative view
• Decide on your extra reading by the end of Lecture 4
English Morphology

• Morphology is the study of the ways that words are built up from smaller meaningful units called morphemes
• We can usefully divide morphemes into two classes
  – Stems: the core meaning-bearing units
  – Affixes: bits and pieces that adhere to stems to change their meanings and grammatical functions
English Morphology

• We can also divide morphology into two broad classes
  – Inflectional: the affix doesn’t change the word class (walk, walking)
  – Derivational: meaning change, word-class change (clue, clueless; compute, computerization)
Lightweight Morphology

• Sometimes you just need to know the stem of a word and you don’t care about the structure.
• In fact you may not even care if you get the right stem, as long as you get a consistent string.
• This is stemming; it most often shows up in IR applications.
Stemming for Information Retrieval

• Run a stemmer on the documents to be indexed
• Run a stemmer on users’ queries
• Match
  – This is basically a form of hashing, where you want collisions.
Porter

• No lexicon needed
• Basically a set of staged sets of rewrite rules that strip suffixes
• Handles both inflectional and derivational suffixes
• Doesn’t guarantee that the resulting stem is really a stem (see first bullet)
• The lack of a guarantee doesn’t matter for IR
Porter Example

• Computerization
  – -ization → -ize: computerize
  – -ize → ε: computer
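To illustrate, a minimal sketch using an off-the-shelf Porter implementation (here NLTK's PorterStemmer, assuming the nltk package is installed; the exact output strings depend on the rule stages):

```python
# A minimal stemming sketch using NLTK's Porter implementation
# (assumes the nltk package is installed).
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["computerization", "computerize", "computers", "walking"]:
    print(word, "->", stemmer.stem(word))
# Related forms collapse toward a common stem string, which is all that
# IR-style stemming needs -- the stem does not have to be a real word.
```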
Porter

• The original exposition of the Porter stemmer did not describe it as a transducer, but…
  – Each stage is a separate transducer
  – The stages can be composed to get one big transducer
• Many open-source implementations in C, C++, Java, Perl, Python, etc. on the web
  – http://tartarus.org/martin/PorterStemmer/
Segmentation

• A significant part of the morphological analysis we’ve been doing involves segmentation
  – Sometimes getting the segments is all we want
  – Morpheme boundaries, words, sentences, paragraphs
Segmentation

• Segmenting words in running text
• Segmenting sentences in running text
• Why not just split on periods and white-space?
  – Mr. Sherwood said reaction to Sea Containers’ proposal has been "very positive." In New York Stock Exchange composite trading yesterday, Sea Containers closed at $62.625, up 62.5 cents.
  – “I said, ‘what’re you? Crazy?’” said Sadowsky. “I can’t afford to do that.”
• We would get tokens like:
  – cents.   said,   positive.”   Crazy?
Can’t Just Segment on Punctuation

• Word-internal punctuation
  – m.p.h.
  – Ph.D.
  – AT&T
  – 01/02/06
  – Google.com
  – 555,500.50
• Expanding clitics
  – What’re -> what are
  – I’m -> I am
• Multi-token words
  – New York
  – Rock ‘n’ roll
Sentence Segmentation

• “!” and “?” are relatively unambiguous
• The period “.” is quite ambiguous
  – Sentence boundary
  – Abbreviations like Inc. or Dr.
• General idea:
  – Build a binary classifier that
  – looks at a “.”
  – and decides EndOfSentence/NotEOS
  – Could be hand-written rules or machine learning
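A toy hand-written-rules version of such a classifier might look like the sketch below (the abbreviation list and rules are illustrative only, not a real system):

```python
import re

# Toy abbreviation list; a real classifier would use a much larger one
# (or learn the decision from labeled data).
ABBREVIATIONS = {"mr", "mrs", "dr", "inc", "st", "vs", "e.g", "i.e"}

def is_eos(text, i):
    """Hand-written rules deciding EndOfSentence / NotEOS for the '.' at index i."""
    token = re.findall(r"[\w.]+$", text[:i])            # token ending at the period
    prev = token[0].lower().rstrip(".") if token else ""
    if prev in ABBREVIATIONS:
        return False                                    # "Dr." -> usually not a boundary
    if re.match(r"\s*[a-z0-9]", text[i + 1:]):
        return False                                    # next word is lowercase or a digit
    return True

sent = "Mr. Sherwood said reaction has been very positive. It closed at $62.625, up 62.5 cents."
print([i for i, ch in enumerate(sent) if ch == "." and is_eos(sent, i)])
```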
Word Segmentation in Chinese

• Some languages don’t have spaces between words
  – Chinese, Japanese, Thai, Khmer
• Chinese:
  – Words are composed of characters
  – Characters are generally 1 syllable and 1 morpheme
  – The average word is 2.4 characters long
  – Standard segmentation algorithm: Maximum Matching (also called Greedy)
Vietnamese Word Segmentation Problem

• Space is not the word separator in Vietnamese
• Vietnamese words
  – Single: bóng, bàn, ăn
  – Compound: tạp chí, tổ quốc, công nghiệp hóa
  – Vietlex statistics: word count 40,181 (syllable count 7,729)
Vietnamese Word Segmentation Problem (cont.)

• Word segmentation ambiguities
  – The syllable sequences “nhà cửa”, “sắc đẹp”, “hiệu sách” are words in:
      a. Nhà cửa bề bộn quá.
      b. Cô ấy giữ gìn sắc đẹp.
      c. Ngoài hiệu sách có bán cuốn này.
  – The same sequences are not words in:
      a. Ở nhà cửa ngõ chẳng đóng gì cả.
      b. Bức này màu sắc đẹp hơn.
      c. Ngoài cửa hiệu sách báo bày la liệt.
Maximum Matching Word Segmentation

Given a wordlist of Chinese and a string:

1) Start a pointer at the beginning of the string
2) Find the longest word in the dictionary that matches the string starting at the pointer
3) Move the pointer over the word in the string
4) Go to 2
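A minimal sketch of this greedy algorithm in Python (the dictionary below is a toy stand-in; a real segmenter would use a large wordlist):

```python
def max_match(text, dictionary, max_word_len=4):
    """Greedy maximum matching: always take the longest dictionary match."""
    words, i = [], 0
    while i < len(text):
        # Try the longest candidate first (step 2); fall back to a single character.
        for j in range(min(len(text), i + max_word_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:
                words.append(text[i:j])
                i = j                      # move the pointer over the word (step 3)
                break
    return words

toy_dict = {"他", "特别", "喜欢", "北京", "烤鸭", "北京烤鸭"}
print(max_match("他特别喜欢北京烤鸭", toy_dict))
# -> ['他', '特别', '喜欢', '北京烤鸭']  (the longest match wins over '北京' + '烤鸭')
```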
Spelling Correction and Edit Distance

• Non-word error detection:
  – detecting “graffe”
• Non-word error correction:
  – figuring out that “graffe” should be “giraffe”
• Context-dependent error detection and correction:
  – figuring out that “war and piece” should be “war and peace”
Non-Word Error Detection

• Any word not in a dictionary is a spelling error
• Need a big dictionary!
• What to use?
  – An FST dictionary!
  – See sections 3.5 and 3.6
Isolated Word Error Correction

• How do I fix “graffe”?
  – Search through all words:
      graf, craft, grail, giraffe
  – Pick the one that’s closest to “graffe”
  – But what does “closest” mean?
  – We need a distance metric
  – The simplest one: edit distance
Edit Distance

• The minimum edit distance between two strings is the minimum number of editing operations
  – Insertion
  – Deletion
  – Substitution
  needed to transform one string into the other
Minimum Edit Distance

• If each operation has a cost of 1, the distance between “intention” and “execution” is 5
• If substitutions cost 2 (Levenshtein), the distance between them is 8
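A minimal dynamic-programming sketch of this computation (unit insertion/deletion costs, with the substitution cost as a parameter):

```python
def min_edit_distance(source, target, sub_cost=1):
    """Minimum edit distance computed by filling the standard DP table."""
    n, m = len(source), len(target)
    D = [[0] * (m + 1) for _ in range(n + 1)]    # D[i][j]: distance source[:i] -> target[:j]
    for i in range(1, n + 1):
        D[i][0] = i                              # i deletions
    for j in range(1, m + 1):
        D[0][j] = j                              # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j] + 1,       # deletion
                          D[i][j - 1] + 1,       # insertion
                          D[i - 1][j - 1] + sub) # substitution (or free match)
    return D[n][m]

print(min_edit_distance("intention", "execution"))              # 5
print(min_edit_distance("intention", "execution", sub_cost=2))  # 8 (Levenshtein)
```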
Min Edit Example
Min Edit As Search

• But that generates a huge search space
  – The initial state is the word we’re transforming
  – The operators are insert, delete, substitute
  – The goal state is the word we’re trying to get to
  – The path cost is what we’re trying to minimize
• Navigating that space in a naïve backtracking fashion would be incredibly wasteful
• Why? Lots of paths wind up at the same states, but there’s no need to keep track of them all. We only care about the shortest path to each of those revisited states.
Min Edit Distance
Min Edit Example
Min Edit Example
Summary

• Minimum Edit Distance
• See page 52 of Nugues as well
• A “dynamic programming” algorithm
• We will see a probabilistic version of this called “Viterbi”
Language Modeling

• We want to compute P(w1,w2,w3,w4,w5…wn), the probability of a sequence of words
• Alternatively we want to compute P(w5|w1,w2,w3,w4): the probability of a word given some previous words
• The model that computes P(W) or P(wn|w1,w2…wn-1) is called a language model
Computing P(W)

• How to compute this joint probability:
  – P(“the”, ”other”, ”day”, ”I”, ”was”, ”walking”, ”along”, ”and”, ”saw”, ”a”, ”lizard”)
• Intuition: let’s rely on the Chain Rule of Probability
The Chain Rule

• Recall the definition of conditional probability:
    P(A|B) = P(A ∧ B) / P(B)
• Rewriting:
    P(A ∧ B) = P(A|B) P(B)
• More generally:
    P(A,B,C,D) = P(A) P(B|A) P(C|A,B) P(D|A,B,C)
• In general:
    P(x1,x2,x3,…,xn) = P(x1) P(x2|x1) P(x3|x1,x2) … P(xn|x1,…,xn-1)
The Chain Rule

• P(“the big red dog was”) =
    P(the) × P(big|the) × P(red|the big) × P(dog|the big red) × P(was|the big red dog)
Very Easy Estimate

• How to estimate P(you | the river is so wide that)?

    P(you | the river is so wide that)
      = Count(the river is so wide that you) / Count(the river is so wide that)
Very Easy Estimate

• According to Google those counts are 7 and 204, giving an estimate of 7/204
  – Search for the fixed strings “the river is so wide that” and “the river is so wide that you”
Unfortunately

• There are a lot of possible sentences
• In general, we’ll never be able to get enough data to compute the statistics for those long prefixes
• P(lizard | the, other, day, I, was, walking, along, and, saw, a)
Markov Assumption

• Make the simplifying assumption
  – P(lizard | the, other, day, I, was, walking, along, and, saw, a) ≈ P(lizard | a)
• Or maybe
  – P(lizard | the, other, day, I, was, walking, along, and, saw, a) ≈ P(lizard | saw, a)
• Or maybe… you get the idea.
Markov Assumption

So for each component in the product, replace it with the approximation (for an N-gram model):

    P(wn | w1 … wn-1) ≈ P(wn | wn-N+1 … wn-1)

Bigram version:

    P(wn | w1 … wn-1) ≈ P(wn | wn-1)
Estimating bigram probabilities

• The Maximum Likelihood Estimate:

    P(wi | wi-1) = count(wi-1, wi) / count(wi-1)

  or, in count notation,

    P(wi | wi-1) = c(wi-1, wi) / c(wi-1)
An example

• <s> I am Sam </s>
• <s> Sam I am </s>
• <s> I do not like green eggs and ham </s>
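A minimal sketch of computing these bigram MLEs from the toy corpus above:

```python
from collections import Counter

corpus = [
    "<s> I am Sam </s>",
    "<s> Sam I am </s>",
    "<s> I do not like green eggs and ham </s>",
]

unigrams, bigrams = Counter(), Counter()
for sentence in corpus:
    tokens = sentence.split()
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

def p(w, prev):
    """P(w | prev) = c(prev, w) / c(prev)  (maximum likelihood estimate)."""
    return bigrams[(prev, w)] / unigrams[prev]

print(p("I", "<s>"))     # 2/3
print(p("Sam", "am"))    # 1/2
print(p("</s>", "Sam"))  # 1/2
```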
Maximum Likelihood Estimates

• The maximum likelihood estimate of some parameter of a model M from a training set T
  – is the estimate that maximizes the likelihood of the training set T given the model M
• Suppose the word “Chinese” occurs 400 times in a corpus of a million words (the Brown corpus)
• What is the probability that a random word from some other text from the same distribution will be “Chinese”?
• The MLE estimate is 400/1,000,000 = .0004
  – This may be a bad estimate for some other corpus
• But it is the estimate that makes it most likely that “Chinese” will occur 400 times in a million-word corpus.
Berkeley Restaurant Project Sentences
• can you tell me about any good cantonese
restaurants close by
• mid priced thai food is what i’m looking for
• tell me about chez panisse
• can you give me a listing of the kinds of food that
are available
• i’m looking for a good place to eat breakfast
• when is caffe venezia open during the day

Raw Bigram Counts

• Out of 9222 sentences: Count(col | row)

Raw Bigram Probabilities

• Normalize by unigrams:

• Result:

Bigram Estimates of Sentence Probabilities

• P(<s> I want english food </s>)
    = P(i|<s>) × P(want|i) × P(english|want) × P(food|english) × P(</s>|food)
    = .000031
Kinds of knowledge?

• World knowledge:
  – P(english|want) = .0011
  – P(chinese|want) = .0065
• Syntax:
  – P(to|want) = .66
  – P(eat|to) = .28
  – P(food|to) = 0
  – P(want|spend) = 0
• Discourse:
  – P(i|<s>) = .25
The Shannon Visualization Method

• Generate random sentences:
  – Choose a random bigram (<s>, w) according to its probability
  – Then choose a random bigram (w, x) according to its probability
  – And so on, until we choose </s>
  – Then string the words together
• Example:
    <s> I
        I want
          want to
            to eat
              eat Chinese
                Chinese food
                  food </s>
    → I want to eat Chinese food
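A minimal sketch of this generation procedure, reusing the bigrams counter from the earlier MLE sketch (assumed to be in scope):

```python
import random

def generate(bigrams, start="<s>", end="</s>", max_len=20):
    """Shannon-style generation: repeatedly sample the next word from bigram counts."""
    word, output = start, []
    for _ in range(max_len):
        # Continuations of `word`, weighted by how often each bigram was seen.
        candidates = {w2: c for (w1, w2), c in bigrams.items() if w1 == word}
        word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        if word == end:
            break
        output.append(word)
    return " ".join(output)

print(generate(bigrams))   # e.g. "I am Sam" or "Sam I do not like green eggs and ham"
```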
Shakespeare

Shakespeare as corpus

• N = 884,647 tokens, V = 29,066 types
• Shakespeare produced 300,000 bigram types out of V² = 844 million possible bigrams: so 99.96% of the possible bigrams were never seen (have zero entries in the table)
• Quadrigrams are worse: what’s coming out looks like Shakespeare because it is Shakespeare
The Wall Street Journal is Not Shakespeare
Why?

• Why would anyone want the probability of a sequence of words?
• Typically because of Bayes’ rule:

    P(S | X) = P(X | S) P(S) / P(X)
Unknown words: Open versus closed vocabulary tasks

• If we know all the words in advance
  – Vocabulary V is fixed
  – Closed vocabulary task
• Often we don’t know this
  – Out Of Vocabulary = OOV words
  – Open vocabulary task
• Instead: create an unknown word token <UNK>
  – Training of <UNK> probabilities
    – Create a fixed lexicon L of size V
    – At the text normalization phase, any training word not in L is changed to <UNK>
    – Now we train its probabilities like a normal word
  – At decoding time
    – If text input: use <UNK> probabilities for any word not in training
Evaluation

• We train the parameters of our model on a training set.
• How do we evaluate how well our model works?
• We look at the model’s performance on some new data
• This is what happens in the real world; we want to know how our model performs on data we haven’t seen
• So we use a test set: a dataset which is different from our training set
Evaluating N-gram models

• The best evaluation for an N-gram model:
  – Put model A in a speech recognizer
  – Run recognition, get the word error rate (WER) for A
  – Put model B in the speech recognizer, get the word error rate for B
  – Compare the WER for A and B
  – This is extrinsic evaluation
Difficulty of extrinsic (in-vivo) evaluation of N-gram models

• Extrinsic evaluation
  – is really time-consuming
  – can take days to run an experiment
• So, as a temporary solution, in order to run experiments
  – to evaluate N-grams we often use an intrinsic evaluation, an approximation called perplexity
  – But perplexity is a poor approximation unless the test data looks just like the training data
  – So it is generally only useful in pilot experiments (generally not sufficient to publish)
  – But it is helpful to think about.
Perplexity
• Perplexity is the probability of the test set (assigned by the language model), normalized by the number of words:

    PP(W) = P(w1 w2 … wN)^(-1/N)

• By the chain rule:

    PP(W) = [ ∏i 1 / P(wi | w1 … wi-1) ]^(1/N)

• For bigrams:

    PP(W) = [ ∏i 1 / P(wi | wi-1) ]^(1/N)

• Minimizing perplexity is the same as maximizing probability
  – The best language model is one that best predicts an unseen test set
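A minimal sketch of bigram perplexity on a test sentence, assuming the unsmoothed p(w, prev) function from the earlier MLE sketch (so every test bigram must have been seen in training):

```python
import math

def perplexity(test_sentences, p):
    """PP(W) = exp(-(1/N) * sum of log P(w_i | w_{i-1})) over the test words."""
    log_prob, n_words = 0.0, 0
    for sentence in test_sentences:
        tokens = sentence.split()
        for prev, w in zip(tokens, tokens[1:]):
            log_prob += math.log(p(w, prev))
            n_words += 1
    return math.exp(-log_prob / n_words)

print(perplexity(["<s> I am Sam </s>"], p))   # about 1.73 on the toy model
```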
A Different Perplexity Intuition

• How hard is the task of recognizing the digits ‘0,1,2,3,4,5,6,7,8,9’? Pretty easy (perplexity 10 if they are equally likely)
• How hard is recognizing 30,000 names at Microsoft? Hard: perplexity = 30,000
• Perplexity is the weighted equivalent branching factor provided by your model

Slide from Josh Goodman
Lower perplexity = better model

• Training: 38 million words; test: 1.5 million words; WSJ
Lesson 1: the perils of overfitting

• N-grams only work well for word prediction if the test corpus looks like the training corpus
  – In real life, it often doesn’t
  – We need to train robust models, adapt to the test set, etc.
Lesson 2: zeros or not?

• Zipf’s Law:
  – A small number of events occur with high frequency
  – A large number of events occur with low frequency
  – You can quickly collect statistics on the high-frequency events
  – You might have to wait an arbitrarily long time to get valid statistics on low-frequency events
• Result:
  – Our estimates are sparse! No counts at all for the vast bulk of things we want to estimate!
  – Some of the zeros in the table are really zeros, but others are simply low-frequency events you haven’t seen yet. After all, ANYTHING CAN HAPPEN!
  – How to address this?
• Answer:
  – Estimate the likelihood of unseen N-grams!
Smoothing is like Robin Hood: steal from the rich and give to the poor (in probability mass)
Laplace smoothing

• Also called add-one smoothing
• Just add one to all the counts!
• Very simple

• MLE estimate:
    P(wi | wi-1) = c(wi-1, wi) / c(wi-1)

• Laplace estimate:
    P_Laplace(wi | wi-1) = (c(wi-1, wi) + 1) / (c(wi-1) + V)

• Reconstructed counts:
    c*(wi-1, wi) = (c(wi-1, wi) + 1) · c(wi-1) / (c(wi-1) + V)
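A minimal sketch of the Laplace estimate, reusing the unigrams and bigrams counters from the earlier MLE sketch (assumed in scope):

```python
V = len(unigrams)                        # vocabulary size (word types, incl. <s> and </s>)

def p_laplace(w, prev):
    """P_Laplace(w | prev) = (c(prev, w) + 1) / (c(prev) + V)."""
    return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

print(p_laplace("I", "<s>"))     # seen bigram: discounted below its MLE of 2/3
print(p_laplace("ham", "<s>"))   # unseen bigram: small but no longer zero
```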
Laplace smoothed bigram counts
Laplace-smoothed bigrams

Reconstituted counts

Big Changes to Counts

• C(want to) went from 608 to 238!
• P(to|want) went from .66 to .26!
• Discount d = c*/c
  – d for “chinese food” = .10: a 10× reduction!
  – So in general, Laplace is a blunt instrument
  – Could use a more fine-grained method (add-k)
• Despite its flaws, Laplace (add-k) is still used to smooth other probabilistic models in NLP, especially
  – for pilot studies
  – in domains where the number of zeros isn’t so huge
Better Discounting Methods

• The intuition used by many smoothing algorithms
  – Good-Turing
  – Kneser-Ney
  – Witten-Bell
• is to use the count of things we’ve seen once to help estimate the count of things we’ve never seen
Good-Turing

• Imagine you are fishing
  – There are 8 species: carp, perch, whitefish, trout, salmon, eel, catfish, bass
• You have caught
  – 10 carp, 3 perch, 2 whitefish, 1 trout, 1 salmon, 1 eel
  – = 18 fish (tokens), 6 species (types)
• How likely is it that you’ll next see another trout?
Good-Turing

• Now how likely is it that the next species is new (i.e. catfish or bass)?
  – There were 18 events; 3 of those represent singleton species
  – So: 3/18
Good-Turing

• But that 3/18 isn’t represented in our probability mass, and certainly not in the estimate we used for another trout.
Good-Turing Intuition

• Notation: Nx is the frequency of frequency x
  – So N10 = 1, N1 = 3, etc.
• To estimate the total probability mass of unseen species
  – use the number of species (words) we’ve seen once
  – c0* = c1;  p0 = N1/N = 3/18
• All other estimates are adjusted (down) to give probabilities for the unseen:
    c*(eel) = (c+1) · N(c+1)/Nc = (1+1) × N2/N1 = 2 × 1/3 = 2/3
    so P_GT(eel) = c*(eel)/N = (2/3)/18 = 1/27
Slide from Josh Goodman
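A minimal sketch of these Good-Turing quantities for the fishing example (simple GT only; a real implementation would also smooth the Nc counts):

```python
from collections import Counter

catch = {"carp": 10, "perch": 3, "whitefish": 2, "trout": 1, "salmon": 1, "eel": 1}
N = sum(catch.values())              # 18 tokens
Nc = Counter(catch.values())         # frequency of frequencies: N1 = 3, N2 = 1, ...

def c_star(c):
    """c* = (c + 1) * N_{c+1} / N_c (breaks down when N_{c+1} = 0, e.g. for carp)."""
    return (c + 1) * Nc[c + 1] / Nc[c]

print(Nc[1] / N)                     # p0 = N1/N = 3/18, the mass for unseen species
print(c_star(1), c_star(1) / N)      # c*(eel) = 2/3, P_GT(eel) = 1/27
```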
Bigram frequencies of frequencies and GT re-estimates
Backoff and Interpolation

• Another really useful source of knowledge
• If we are estimating
  – the trigram p(z|x,y)
  – but c(xyz) is zero
• Use info from
  – the bigram p(z|y)
• Or even
  – the unigram p(z)
• How do we combine the trigram/bigram/unigram info?
  – See pages 105-106 (skip the advanced stuff on 107)
Backoff versus interpolation

• Backoff: use the trigram if you have it, otherwise the bigram, otherwise the unigram
• Interpolation: mix all three
Interpolation

• Simple interpolation:
    P_hat(wn | wn-2, wn-1) = λ1 P(wn | wn-2, wn-1) + λ2 P(wn | wn-1) + λ3 P(wn),  with Σ λi = 1
• Lambdas conditional on context: each λ can be a function of the preceding words wn-2, wn-1
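A minimal sketch of simple linear interpolation; the component estimators are assumed to exist (for example the MLE functions from the earlier sketches), and the lambda values below are illustrative rather than tuned on held-out data:

```python
LAMBDAS = (0.1, 0.3, 0.6)   # unigram, bigram, trigram weights; must sum to 1

def p_interpolated(w, prev2, prev1, p_uni, p_bi, p_tri, lambdas=LAMBDAS):
    """P_hat(w | prev2 prev1) = l1*P(w) + l2*P(w | prev1) + l3*P(w | prev2, prev1)."""
    l1, l2, l3 = lambdas
    return l1 * p_uni(w) + l2 * p_bi(w, prev1) + l3 * p_tri(w, prev2, prev1)
```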
How to set the lambdas?

• Use a held-out corpus
• Choose the lambdas which maximize the probability of the held-out data
  – I.e. fix the N-gram probabilities
  – then search for the lambda values
  – that, when plugged into the previous equation,
  – give the largest probability for the held-out set
  – Can use EM to do this search
GT smoothed bigram probs

OOV words: <UNK> word

• Out Of Vocabulary = OOV words
• We don’t use GT smoothing for these
  – because GT assumes we know the number of unseen events
• Instead: create an unknown word token <UNK>
  – Training of <UNK> probabilities
    – Create a fixed lexicon L of size V
    – At the text normalization phase, any training word not in L is changed to <UNK>
    – Now we train its probabilities like a normal word
  – At decoding time
    – If text input: use <UNK> probabilities for any word not in training
Practical Issues

• We do everything in log space
  – to avoid underflow
  – (also, adding is faster than multiplying)
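A small illustration of why log space matters:

```python
import math

probs = [1e-5] * 100
print(math.prod(probs))                    # 0.0 -- the product underflows
print(sum(math.log(p) for p in probs))     # about -1151.3 -- the log probability is fine
```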
Language Modeling Toolkits

• SRILM
• CMU-Cambridge LM Toolkit
• These toolkits are publicly available
• You can use them to build N-gram models
• Lots of parameters (you need to know the theory!)
• Standard N-gram format: the ARPA language model format (see pages 108-109)
Google N-Gram Release

Google N-Gram Release

• serve as the incoming 92
• serve as the incubator 99
• serve as the independent 794
• serve as the index 223
• serve as the indication 72
• serve as the indicator 120
• serve as the indicators 45
• serve as the indispensable 111
• serve as the indispensible 40
• serve as the individual 234
