
Vector quantization

Vector quantization is a classical quantization technique from signal processing which allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms.

The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation.

Vector quantization is based on the competitive learning paradigm, so it is closely related to the self-organizing map model.

Training
A simple training algorithm for vector quantization is:

1. Pick a sample point at random
2. Move the nearest quantization vector centroid towards this sample point, by a small fraction of the distance
3. Repeat
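A minimal sketch of this update rule (Python and NumPy are illustrative choices here; the function name train_vq and the learning-rate parameter lr are not part of any standard):

    import numpy as np

    def train_vq(data, num_centroids, lr=0.05, steps=10000, seed=0):
        """Simple competitive-learning VQ: repeatedly nudge the nearest
        centroid toward a randomly chosen sample point."""
        rng = np.random.default_rng(seed)
        # Initialize the centroids from randomly chosen data points.
        centroids = data[rng.choice(len(data), num_centroids, replace=False)].astype(float)
        for _ in range(steps):
            x = data[rng.integers(len(data))]                    # 1. pick a sample point at random
            nearest = np.argmin(np.linalg.norm(centroids - x, axis=1))
            centroids[nearest] += lr * (x - centroids[nearest])  # 2. move it a small fraction closer
        return centroids                                         # 3. repeat for a fixed number of steps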

A more sophisticated algorithm reduces the bias in the density matching estimation, and
ensures that all points are used, by including an extra sensitivity parameter:

1. Increase each centroid's sensitivity by a small amount
2. Pick a sample point at random
3. Find the quantization vector centroid with the smallest (distance − sensitivity)
   1. Move the chosen centroid toward the sample point by a small fraction of the distance
   2. Set the chosen centroid's sensitivity to zero
4. Repeat
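The sensitivity-based variant can be sketched the same way (again illustrative; eps, the per-step sensitivity increment, is an arbitrary example value):

    import numpy as np

    def train_vq_sensitive(data, num_centroids, lr=0.05, eps=0.01, steps=10000, seed=0):
        """VQ training with a per-centroid sensitivity term, so that no
        centroid is starved of updates."""
        rng = np.random.default_rng(seed)
        centroids = data[rng.choice(len(data), num_centroids, replace=False)].astype(float)
        sensitivity = np.zeros(num_centroids)
        for _ in range(steps):
            sensitivity += eps                              # 1. raise every centroid's sensitivity
            x = data[rng.integers(len(data))]               # 2. pick a sample point at random
            dist = np.linalg.norm(centroids - x, axis=1)
            chosen = np.argmin(dist - sensitivity)          # 3. smallest (distance - sensitivity)
            centroids[chosen] += lr * (x - centroids[chosen])
            sensitivity[chosen] = 0.0                       # reset the winner's sensitivity
        return centroids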

It is desirable to use a cooling schedule to produce convergence: see Simulated annealing.

The algorithm can be iteratively updated with 'live' data, rather than by picking random
points from a data set, but this will introduce some bias if the data is temporally
correlated over many samples.

Applications
Vector quantization is used for lossy data compression, lossy data correction and density
estimation.

Lossy data correction, or prediction, is used to recover data missing from some dimensions. It is done by finding the nearest centroid using the dimensions that are available, then filling in the missing dimensions with the corresponding values of that centroid.
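A hypothetical helper sketching this prediction step (assuming NumPy arrays and a codebook of already-trained centroids):

    import numpy as np

    def predict_missing(x, known_idx, centroids):
        """Fill in the missing dimensions of x by copying them from the
        centroid that is nearest on the dimensions that are known."""
        known_idx = np.asarray(known_idx)
        dists = np.linalg.norm(centroids[:, known_idx] - x[known_idx], axis=1)
        nearest = centroids[np.argmin(dists)]
        filled = x.astype(float).copy()
        missing_idx = np.setdiff1d(np.arange(len(x)), known_idx)
        filled[missing_idx] = nearest[missing_idx]
        return filled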

For density estimation, the area/volume that is closer to a particular centroid than to any
other is inversely proportional to the density (due to the density matching property of the
algorithm).

Use in data compression

Vector quantization, also called "block quantization" or "pattern matching quantization", is often used in lossy data compression. It works by encoding values from a multidimensional vector space into a finite set of values from a discrete subspace of lower dimension. A lower-space vector requires less storage space, so the data is compressed. Due to the density matching property of vector quantization, the compressed data have errors that are inversely proportional to their density.

The transformation is usually done by projection or by using a codebook. In some cases, a codebook can also be used to entropy code the discrete value in the same step, by generating a prefix-coded variable-length encoded value as its output.

The set of discrete amplitude levels is quantized jointly rather than each sample being quantized separately. Consider a K-dimensional vector [x1, x2, ..., xK] of amplitude levels. It is compressed by choosing the nearest matching vector from a set of N candidate vectors [y1, y2, ..., yN], each of the same dimension K.

These N candidate vectors together form the codebook.

Block diagram: a simple vector quantizer (figure omitted).

Only the index of the codeword in the codebook is sent instead of the quantized values.
This conserves space and achieves more compression.
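A minimal encode/decode sketch of this scheme (names are illustrative; blocks and codebook are assumed to be NumPy arrays of shape (M, K) and (N, K) respectively):

    import numpy as np

    def vq_encode(blocks, codebook):
        """Map each K-dimensional input block to the index of its nearest codeword.
        Only these integer indices need to be transmitted or stored."""
        dists = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
        return np.argmin(dists, axis=1)

    def vq_decode(indices, codebook):
        """Reconstruct an approximation of the data by looking up each index."""
        return codebook[indices]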

Twin vector quantization (VQF) is part of the MPEG-4 standard dealing with time
domain weighted interleaved vector quantization.

Video codecs based on vector quantization
• Cinepak

and old versions of its spiritual successors:

• Sorenson codec
• Indeo
• Westwood's VQA format, used in many games

All of these have been superseded by the MPEG family.

Audio codecs based on vector quantization
• CELP
• G.729
• TwinVQ
• Ogg Vorbis [1]
• AMR-WB+
• DTS

Context mixing
Context mixing is a type of data compression algorithm in which the next-symbol predictions of two or more statistical models are combined to yield a prediction
that is often more accurate than any of the individual predictions. For example, one
simple method (not necessarily the best) is to average the probabilities assigned by each
model. The random forest is another method: it outputs the prediction that is the mode of
the predictions output by individual models. Combining models is an active area of
research in machine learning.

The PAQ series of data compression programs use context mixing to assign probabilities
to individual bits of the input.

Application to Data Compression

Suppose that we are given two conditional probabilities, P(X|A) and P(X|B), and we wish
to estimate P(X|A,B), the probability of event X given both conditions A and B. There is
insufficient information for probability theory to give a result. In fact, it is possible to
construct scenarios in which the result could be anything at all. But intuitively, we would
expect the result to be some kind of average of the two.
The problem is important for data compression. In this application, A and B are contexts,
X is the event that the next bit or symbol of the data to be compressed has a particular
value, and P(X|A) and P(X|B) are the probabilities estimated by two independent models.
The compression ratio depends on how closely the estimated probability approaches the
true but unknown probability of event X. It is often the case that contexts A and B have
occurred often enough to accurately estimate P(X|A) and P(X|B) by counting occurrences
of X in each context, but the two contexts either have not occurred together frequently, or
there are insufficient computing resources (time and memory) to collect statistics for the
combined case.

For example, suppose that we are compressing a text file. We wish to predict whether the
next character will be a linefeed, given that the previous character was a period (context
A) and that the last linefeed occurred 72 characters ago (context B). Suppose that a
linefeed previously occurred after 1 of the last 5 periods (P(X|A) = 1/5 = 0.2) and in 5 out
of the last 10 lines at column 72 (P(X|B) = 5/10 = 0.5). How should these predictions be
combined?

Two general approaches have been used, linear and logistic mixing. Linear mixing uses a weighted average of the predictions weighted by evidence. In this example, P(X|B) gets more weight than P(X|A) because P(X|B) is based on a greater number of tests. Older versions of PAQ used this approach [1]. Newer versions use logistic (or neural network) mixing by first transforming the predictions into the logistic domain, log(p/(1 - p)), before averaging [2]. This effectively gives greater weight to predictions near 0 or 1, in this case P(X|A). In both cases, additional weights may be given to each of the input models and adapted to favor the models that have given the most accurate predictions in the past. All but the oldest versions of PAQ use adaptive weighting.

Most context mixing compressors predict one bit of input at a time. The output
probability is simply the probability that the next bit will be a 1.

Linear Mixing

We are given a set of predictions Pi(1) = n1i/ni, where ni = n0i + n1i, and n0i and n1i are the
counts of 0 and 1 bits respectively for the i'th model. The probabilities are computed by
weighted addition of the 0 and 1 counts:

• S0 = Σi wi n0i
• S1 = Σi wi n1i
• S = S0 + S1
• P(0) = S0 / S
• P(1) = S1 / S

The weights wi are initially equal and always sum to 1. Under the initial conditions, each
model is weighted in proportion to evidence. The weights are then adjusted to favor the
more accurate models. Suppose we are given that the actual bit being predicted is y (0 or
1). Then the weight adjustment is:
• ni = n0i + n1i
• error = y – P(1)
• wi ← wi + [(S n1i - S1 ni) / (S0 S1)] error

Compression can be improved by bounding ni so that the model weighting is better balanced. In PAQ6, whenever one of the bit counts is incremented, the part of the other count that exceeds 2 is halved. For example, after the sequence 000000001, the counts would go from (n0, n1) = (8, 0) to (5, 1).
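The formulas above can be collected into a short sketch (Python is an illustrative choice; the PAQ6 count-bounding step is omitted for brevity):

    def linear_mix(counts, weights):
        """counts[i] = (n0_i, n1_i) for model i; returns P(1) by weighted count addition."""
        s0 = sum(w * n0 for w, (n0, n1) in zip(weights, counts))
        s1 = sum(w * n1 for w, (n0, n1) in zip(weights, counts))
        return 0.5 if s0 + s1 == 0 else s1 / (s0 + s1)   # fall back to 0.5 with no evidence

    def update_weights(counts, weights, y):
        """Adjust the weights toward the models that best predicted the actual bit y."""
        s0 = sum(w * n0 for w, (n0, n1) in zip(weights, counts))
        s1 = sum(w * n1 for w, (n0, n1) in zip(weights, counts))
        s = s0 + s1
        if s0 == 0 or s1 == 0:            # avoid division by zero in the update term
            return weights
        error = y - s1 / s
        return [w + ((s * n1 - s1 * (n0 + n1)) / (s0 * s1)) * error
                for w, (n0, n1) in zip(weights, counts)]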

Logistic Mixing

Let Pi(1) be the prediction by the i'th model that the next bit will be a 1. Then the final
prediction P(1) is calculated:

• xi = stretch(Pi(1))
• P(1) = squash(Σi wi xi)

where P(1) is the probability that the next bit will be a 1, Pi(1) is the probability estimated
by the i'th model, and

• stretch(x) = ln(x / (1 - x))
• squash(x) = 1 / (1 + e^(-x)) (the inverse of stretch).

After each prediction, the model is updated by adjusting the weights to minimize coding
cost.

• wi ← wi + η xi (y - P(1))

where η is the learning rate (typically 0.002 to 0.01), y is the actual value of the bit being predicted (0 or 1), and (y - P(1)) is the prediction error.
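A sketch of logistic mixing following the formulas above (function names and the learning-rate value are illustrative). With equal weights, the period/linefeed example earlier (0.2 and 0.5) mixes to P(1) ≈ 0.33, versus 0.4 for evidence-weighted linear mixing, illustrating the pull toward the prediction nearer 0:

    import math

    def stretch(p):
        """Map a probability to the logistic domain: ln(p / (1 - p)). Assumes 0 < p < 1."""
        return math.log(p / (1 - p))

    def squash(x):
        """Inverse of stretch: 1 / (1 + e^(-x))."""
        return 1.0 / (1.0 + math.exp(-x))

    def logistic_mix(probs, weights):
        """Combine the per-model P(1) estimates in the logistic domain."""
        xs = [stretch(p) for p in probs]
        p = squash(sum(w * x for w, x in zip(weights, xs)))
        return p, xs

    def update_weights(weights, xs, p, y, lr=0.005):
        """Adjust weights to reduce coding cost after observing the actual bit y."""
        return [w + lr * x * (y - p) for w, x in zip(weights, xs)]

    # The example from the text: P(X|A) = 0.2, P(X|B) = 0.5, equal weights.
    p, xs = logistic_mix([0.2, 0.5], [0.5, 0.5])   # p is approximately 0.33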

List of Context Mixing Compressors

All versions below use logistic mixing unless otherwise indicated.

• All PAQ versions (Matt Mahoney, Serge Osnach, Alexander Ratushnyak, Przemysław Skibiński, Jan Ondrus, and others) [1]. PAQAR and versions prior to PAQ7 used linear mixing. Later versions used logistic mixing.
• All LPAQ versions (Matt Mahoney, Alexander Ratushnyak) [2].
• ZPAQ (Matt Mahoney) [3].
• WinRK 3.0.3 (Malcolm Taylor) in maximum compression PWCM mode [4].
Version 3.0.2 was based on linear mixing.
• NanoZip (Sami Runsas) in maximum compression mode (option -cc) [5].
• xwrt 3.2 (Przemysław Skibiński) in maximum compression mode (options -i10
through -i14) [6] as a back end to a dictionary encoder.
• cmm1 through cmm4, M1, and M1X2 (Christopher Mattern) use a small number
of contexts for high speed. M1 and M1X2 use a genetic algorithm to select two bit
masked contexts in a separate optimization pass.
• ccm (Christian Martelock).
• bit (Osman Turan) [7].
• pimple, pimple2, tc, and px (Ilia Muraviev) [8].
• enc (Serge Osnach) tries several methods based on PPM and (linear) context
mixing and chooses the best one. [9]
• fpaq2 (Nania Francesco Antonio) uses fixed weight averaging for high speed.
