
Compression vs. Picture Sharpness

The data to create an HD signal runs about 1.5 Gbps (gigabits per second), too much for
affordable VCRs to handle. Some method of compression is necessary to reduce the amount of
data in a picture. As more compression is used, the VCR expense and tape expense both go
down, but so does image quality. Since the whole point of high definition is sharp pictures,
it behooves us to compress the data as little as possible. The method of data compression
commonly used by VCRs is DCT (Discrete Cosine Transform), a system that attempts to throw
away picture detail that your eye wouldn't see anyway. Compression ratios below 5:1 show very
little loss. Compression systems break down a bit when there is a high degree of motion or detail
in the image; there is so much data that the system must throw away even more picture detail to
stay within its design limitations. So already you can see that one can't describe picture sharpness
as a static thing; it changes, depending upon the content of the picture.
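
To put rough numbers on this, here is a minimal Python sketch of the arithmetic. The frame size, bit depth, and frame rate are illustrative assumptions (active picture only, which is why the total comes out a little under the 1.5 Gbps figure quoted above), not values taken from this article.

```python
# Back-of-the-envelope arithmetic with assumed numbers: 1920x1080 active
# picture, 4:2:2 sampling at 10 bits per sample (20 bits per pixel), and
# 30 frames per second.

width, height = 1920, 1080
bits_per_pixel = 20          # 10-bit luma + 10-bit shared chroma (4:2:2)
frames_per_second = 30

uncompressed_bps = width * height * bits_per_pixel * frames_per_second
print(f"Uncompressed: {uncompressed_bps / 1e9:.2f} Gbps")      # ~1.24 Gbps

# What a few compression ratios do to the rate the tape has to absorb.
for ratio in (2, 5, 10):
    print(f"{ratio}:1 compression -> {uncompressed_bps / ratio / 1e6:.0f} Mbps")
```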
DCT can be used for intraframe compression (reducing the data for each video frame
independently of the others) or for interframe compression, where the frames in a group are
compared to one another and redundant data is tossed out. MPEG-2 is an example of the latter.
MPEG-2 can work at different compression levels. At its highest level, it could record one "real"
frame (called an I frame) and manufacture seven other frames from it (called P and B frames),
which contain only data about what changed from frame to frame, not the data that creates the
whole image itself. This high-compression MPEG-2 would be used on home and office VCRs
designed to record DTV broadcasts. For true production work, one must use the low-compression
MPEG-2 scheme that uses only I-frames (only "real" frames). Most VCRs use plain intraframe
DCT compression, while others use MPEG-2, which is itself built on DCT. (Sony uses MPEG-2
on its Betacam-SX and may someday use it for HD, but hasn't announced such a plan yet.)
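
The toy Python sketch below is not real MPEG-2 (no DCT, no motion compensation, no B frames), but it illustrates the interframe idea: keep one "real" I frame whole and describe the seven frames that follow it only by how they differ from the frame before.

```python
import numpy as np

def encode_gop(frames):
    """Return the I frame plus difference frames for one group of pictures."""
    i_frame = frames[0].copy()
    diffs = [frames[k] - frames[k - 1] for k in range(1, len(frames))]
    return i_frame, diffs

def decode_gop(i_frame, diffs):
    """Rebuild every frame by adding the differences back onto the I frame."""
    rebuilt = [i_frame.copy()]
    for d in diffs:
        rebuilt.append(rebuilt[-1] + d)
    return rebuilt

# An eight-frame group: one "real" frame and seven frames described by changes.
# Tiny 16x16 frames stand in for full pictures here.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (16, 16), dtype=np.int16) for _ in range(8)]
i_frame, diffs = encode_gop(frames)
assert all(np.array_equal(a, b) for a, b in zip(frames, decode_gop(i_frame, diffs)))
```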
It is hard to discuss picture resolution when compression is involved. One could sample an image
many times, creating a lot of data to describe the image accurately, then compress the heck out of
it to end up with a fuzzy, artifact-laden image. On the other hand, one could sample the image
less, making an image with less data (which is theoretically fuzzier), then compress the image less,
preserving the data and leaving it technically sharper than the case mentioned above. Thus, picture
sharpness (and indeed cost) is a tradeoff between the number of pixels in an image (i.e., on a
camera's CCD chip), which determines the initial image sharpness, and the amount of compression
necessary to squeeze the data onto the video tape.
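
As a hypothetical illustration of that tradeoff, the sketch below compares two made-up cases: a densely sampled frame squeezed hard versus a coarser frame compressed gently. The pixel counts, bit depth, and ratios are assumptions chosen only to show that the gently compressed, lower-resolution frame can end up keeping more data on tape.

```python
def bits_on_tape(width, height, bits_per_pixel, compression_ratio):
    """Bits actually recorded for one frame after compression."""
    return width * height * bits_per_pixel / compression_ratio

heavy = bits_on_tape(1920, 1080, 20, 10)   # dense sampling, 10:1 compression
gentle = bits_on_tape(1280, 720, 20, 4)    # coarser sampling, 4:1 compression

print(f"Dense sampling, heavy compression : {heavy / 1e6:.2f} Mbit per frame")
print(f"Coarser sampling, light compression: {gentle / 1e6:.2f} Mbit per frame")
# The coarser, gently compressed frame keeps more data in this example.
```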
