
AF Assist Lamp

Some manufacturers fit their cameras with a lamp (normally located beside or above the lens barrel) which
illuminates the subject you are focusing on when shooting in low light conditions. This lamp assists the
camera's focusing system in situations where autofocus would otherwise be likely to fail. These lamps usually only
work over a relatively short range, up to about 4 meters. Some lamps use infrared light instead of visible light,
which is better for "candid" shots where you don't want to startle the subject. Some higher-end external
flash systems feature their own focus assist lamps with far greater range.

The focus assist lamp on this Canon PowerShot S50 is located above the lens and beside the flash. It serves a
double purpose. Firstly, it fires a beam of patterned white light in low light situations which helps the
autofocus system to get a lock. Secondly, when the flash and anti-red-eye are enabled, it remains lit for as
long as you half-press the shutter release to reduce the size of the subject's pupils and thus reduce the chance
of red eye.

Hologram AF, found on some Sony cameras, works by projecting a crossed laser pattern
onto the subject. This bright laser pattern
helps the camera's contrast detect AF system
to lock on to the subject. The system works
well as long as the subject is large enough to
be covered by several laser lines.

AF Servo
Autofocus Servo refers to the camera's ability to continuously focus on a moving subject, a feature normally
only found on digital SLRs. It is generally used by sports or wildlife photographers to keep a moving subject
in focus.
Autofocus Servo is normally engaged by switching focus mode to "AI Servo" (Canon) or "Continuous"
(Nikon) followed by half-pressing the shutter release. The camera will continue to focus based on its own
focus rules (and your settings) while the shutter release is half-pressed or fully depressed (actually taking
shots). It is worth noting that Autofocus Servo normally also puts the camera into "release priority" mode so
that the camera will take a shot when the shutter release is depressed, regardless of the current AF status
(good lock or still searching).

Autofocus
All digital cameras come with autofocus (AF). In autofocus mode the camera automatically focuses on the
subject in the focus area in the center of the LCD/viewfinder. Many prosumer and all professional digital
cameras allow you to select additional autofocus areas which are indicated on the LCD/viewfinder.

Example of a camera with a multi selector button (extreme right) to select the AF area spot. The selected
area spot is indicated on the main LCD by a red bracket.
In "single AF" mode, the camera will focus when the shutter release button is pressed halfway. Some cameras
offer "continuous AF" mode whereby the camera focuses continuously until you press the shutter release
button halfway. This shortens the lag time, but reduces battery life. Normally a focus confirmation light will
stop blinking once the subject is in focus. Autofocus is usually based on detecting contrast and therefore works
best on contrasty subjects and less well in low light conditions, in which case the use of an AF assist lamp is
very useful. Some cameras also feature manual focus.

Buffer
After the sensor is exposed, the image data will be processed in the camera and then written to the storage
card. A buffer inside a digital camera consists of RAM memory which temporarily holds the image
information before it is written out to storage card. This speeds up the "time between shots" and allows burst
(continuous) shooting mode. The very first digital cameras didn't have any buffer, so after you took the shot
you HAD to wait for the image to be written to the storage card before you could take the next shot.
Currently, most digital cameras have relatively large buffers which allow them to operate as quickly as a film
camera while writing data to the storage card in the background (without interrupting your ability to shoot).
The location of the buffer within the camera system is normally not specified, but affects the number of
images that can be shot in burst mode. The buffer memory is located either before or after the image
processing.

After Image Processing Buffer

With this method the images are processed and turned into their final output format before they are placed in
the buffer. As a consequence, the number of shots which can be taken in a burst can be increased by reducing
image file size (e.g. shoot in JPEG, reduce JPEG quality, reduce resolution).
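The effect of file size on burst depth can be sketched in a couple of lines of Python. The buffer and file sizes below are illustrative assumptions, not the specifications of any particular camera:

```python
# Burst depth with an "after image processing" buffer: the number of shots
# that fit is simply buffer capacity divided by the final (processed) file
# size. All sizes here are hypothetical examples.

def burst_depth(buffer_mb: float, file_mb: float) -> int:
    """Number of processed images the buffer can hold."""
    return int(buffer_mb // file_mb)

buffer_mb = 32.0                    # hypothetical 32 MB buffer
print(burst_depth(buffer_mb, 4.0))  # full-quality JPEG -> 8 shots
print(burst_depth(buffer_mb, 2.0))  # reduced JPEG quality -> 16 shots
```

Halving the file size doubles the burst depth, which is exactly why reducing resolution or JPEG quality extends the burst with this buffer design.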

Before Image Processing Buffer

In this method no image processing is carried out and the RAW data from the CCD is placed immediately in
the buffer. In parallel to other camera tasks, the RAW images are processed and written to the storage card. In
cameras with this type of buffer, the number of frames which can be taken in burst mode cannot be increased
by reducing image file size. But the number of frames per second (fps) is independent of the image processing
speed (until the buffer is full).

Smart Buffering

The "smart buffering" mentioned by Phil Askey in his Nikon D70 review, combines elements from the above
two buffering methods. Just like in the "Before Image Processing Buffer" the unprocessed image data are
stored into the buffer (1) allowing for a higher fps. They are then processed (2) and converted into JPEG,
TIFF or RAW. But instead of writing the processed images to the storage card they are stored in the buffer
(3). Therefore, the image processing is not bottlenecked by the writing to the storage card, which happens in
parallel. Moreover, it constantly frees up buffer space for new images since (3) takes up less space than (2),
especially in the case of JPEG. Just like in the "After Image Processing Buffer", the output images are then
written from the buffer to the storage card (4). But an important difference is that here the image processing
happens in parallel with writing to the storage card. So the image processing of new images can continue
while the other images are being written to the storage card. This means that you do not necessarily have to
wait for the entire burst of frames to be written to the CF card before there is enough space to take another full
burst.
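As a rough illustration of stages (1) to (4), here is a toy Python simulation (all sizes and rates are hypothetical, not taken from the D70 or any other camera) in which in-place conversion and card writing both free buffer space each time step:

```python
# Toy model of "smart buffering": unprocessed frames enter the buffer (1),
# are converted in place to smaller JPEGs (2)->(3), and the finished JPEGs
# are written to the card (4). Conversion and writing run in parallel, so
# buffer space is freed by both. All numbers are illustrative assumptions.

RAW_MB, JPEG_MB, BUFFER_MB = 10.0, 2.5, 50.0

def simulate(shots, process_per_tick=1, write_per_tick=1):
    raw, jpeg, free_history = [], [], []
    free = BUFFER_MB
    taken = 0
    while taken < shots or raw or jpeg:
        # (1) capture a frame if there is room for its RAW data
        if taken < shots and free >= RAW_MB:
            raw.append(RAW_MB)
            free -= RAW_MB
            taken += 1
        # (2)-(3) convert RAW to JPEG in place, freeing the size difference
        for _ in range(min(process_per_tick, len(raw))):
            raw.pop()
            jpeg.append(JPEG_MB)
            free += RAW_MB - JPEG_MB
        # (4) write finished JPEGs to the card, freeing their space
        for _ in range(min(write_per_tick, len(jpeg))):
            jpeg.pop()
            free += JPEG_MB
        free_history.append(free)
    return free_history

print(simulate(shots=6))  # free buffer space after each time step
```

Because space is reclaimed continuously at stages (2)-(4), the buffer does not have to drain completely before new frames can be captured.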

Burst (Continuous)
Burst or Continuous Shooting mode is the digital camera's ability to take several shots immediately one after
another, similar to a film SLR camera with a motorwind. The speed (number of frames per second or fps) and
total number of frames differs greatly between camera types and models. The fps is a function of the shutter
release and image processing systems of the camera. The number of frames that can be taken is defined by the
size of the buffer where images are stored before they are processed (in case of a before image processing
buffer) and written to the storage card.
The number of frames per second (fps) and total number of frames that can be shot in burst mode is
continuously improving and is of course higher as you move from consumer and prosumer digital compacts to
prosumer and professional digital SLRs. Digital compacts typically allow 1 to 3 fps with bursts of up to about
10 images while digital SLRs have fps of up to 7 or more and can shoot dozens of frames in JPEG and RAW.

Some even allow an initial burst of higher fps followed by a slower but continuous fps until the storage card is
full.

Color Filter Array


Each "pixel" on a digital camera sensor contains a light sensitive photo diode which measures the brightness
of light. Because photodiodes are monochrome devices, they are unable to tell the difference between
different wavelengths of light. Therefore, a "mosaic" pattern of color filters, a color filter array (CFA), is
positioned on top of the sensor to filter out the red, green, and blue components of light falling onto it. The
GRGB Bayer Pattern shown in this diagram is the most common CFA used.

Mosaic sensors with a GRGB CFA capture only 25% of the red and blue and just 50% of the green
components of light.
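A minimal sketch of the GRGB Bayer layout confirms the 25/50/25 split quoted above:

```python
# Build a GRGB Bayer mosaic for a small sensor and count the fraction of
# pixels devoted to each color channel: 25% red, 50% green, 25% blue.

def bayer_pattern(rows, cols):
    """Return a rows x cols mosaic tiled with the 2x2 block  G R / B G."""
    tile = [['G', 'R'], ['B', 'G']]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mosaic = bayer_pattern(4, 4)
flat = [p for row in mosaic for p in row]
for color in 'RGB':
    print(color, flat.count(color) / len(flat))  # R 0.25, G 0.5, B 0.25
```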

Red channel pixels


(25% of the pixels)

Green channel pixels


(50% of the pixels)

Blue channel pixels


(25% of the pixels)

Combined image

As you can see, the combined image isn't quite what we'd expect but is sufficient to distinguish the colors of
the individual items in the scene. If you squint your eyes or stand away from your monitor your eyes will
combine the individual red, green, and blue intensities to produce a (dim) color image.

Red, Green, and Blue channels after interpolation

Combined image

The missing pixels in each color layer are estimated based on the values of the neighboring pixels and other
color channels via the demosaicing algorithms in the camera. Combining these complete (but partially
estimated) layers will lead to a surprisingly accurate combined image with three color values for each pixel.
Many other types of color filter arrays exist, such as the CYGM array using cyan, yellow, green, and
magenta filters in equal numbers, or the RGBE array found in Sony's DSC-F828.
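The neighbour-averaging idea behind demosaicing can be sketched for a single channel. This is a deliberately naive average of the measured horizontal and vertical neighbours, not the algorithm any real camera uses:

```python
# Naive demosaicing sketch for one (green) channel: at pixel sites without
# a green filter, estimate green as the average of whichever of the four
# horizontal/vertical neighbours were actually measured. In-camera
# algorithms are far more sophisticated, but the principle is the same.

def interpolate_green(green, mask):
    """green: 2-D list of samples (0 where not measured);
    mask: True where a green sample exists."""
    h, w = len(green), len(green[0])
    out = [row[:] for row in green]
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                neigh = [green[rr][cc]
                         for rr, cc in ((r - 1, c), (r + 1, c),
                                        (r, c - 1), (r, c + 1))
                         if 0 <= rr < h and 0 <= cc < w and mask[rr][cc]]
                out[r][c] = sum(neigh) / len(neigh)
    return out

green = [[10, 0, 10],
         [0, 10, 0],
         [10, 0, 10]]
mask = [[(r + c) % 2 == 0 for c in range(3)] for r in range(3)]
print(interpolate_green(green, mask))  # gaps filled with 10.0 on this flat patch
```

On a uniform patch the estimates are exact; it is on edges and fine detail that the estimation errors appear as the demosaicing artifacts discussed above.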

Connectivity
A digital camera's connectivity defines how it can be connected to other devices for the transfer, viewing, or
printing of images, and to use the camera for remote capture.

Image Transfer
Early digital cameras used slow RS232 (serial) connections to transfer images to your computer. Most digital
cameras now feature USB 1.1 connectivity, with higher end models offering USB 2.0 and FireWire (IEEE
1394) connectivity. Manufacturers generally bundle such cameras with cables and driver software.
Note that real transfer rates are always lower than the theoretical transfer rates indicated in the table below.
Practical transfer speeds depend on your computer hardware and software configuration, the type of camera
or reader, the type and quality of the storage card, whether you are reading or writing (reading is faster than
writing), the average file size (a few large files transfer faster than many small ones), etc.
Instead of connecting the camera with a cable to your computer you can also insert the storage card into the
PC Card slot of your notebook or a dedicated card-reader.
Theoretical Transfer Speeds                Transfer Rate
USB 2.0 - Low-Speed = USB 1.1 Minimum      1.5 Mbps
USB 2.0 - Full-Speed = USB 1.1 Maximum     12 Mbps
USB 2.0 - High-Speed                       480 Mbps
FireWire/IEEE 1394                         100-400 Mbps

Practical Transfer Speeds                  Approx. Transfer Rate
Digital Camera USB 1.1                     ~350 KB/s
Digital Camera FireWire                    ~500 KB/s
USB 1.1 Card Reader                        ~900 KB/s (~7 Mbps)
PC/PCMCIA Card Slot on notebook            ~1,300 KB/s (~10 Mbps)
USB 2.0 or FireWire Card Reader            ~3,200 KB/s (~25 Mbps)

A transfer rate of 1 Megabit per second (Mbps) equals 128 Kilobytes per second (KB/s) and is able to transfer
7.5 Megabytes of information per minute, or about four 5 megapixel JPEG images.
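The arithmetic above can be verified directly, using the binary convention (1 Mb = 1,048,576 bits, 1 KB = 1,024 bytes) that the quoted figures imply:

```python
# Convert a transfer rate in Mbps to KB/s and MB/min, following the binary
# convention (1 Mb = 1024 * 1024 bits, 1 KB = 1024 bytes) implied by the
# "1 Mbps = 128 KB/s" figure in the text.

def mbps_to_kb_s(mbps: float) -> float:
    return mbps * 1024 * 1024 / 8 / 1024   # bits/s -> bytes/s -> KB/s

def mb_per_minute(mbps: float) -> float:
    return mbps_to_kb_s(mbps) * 60 / 1024  # KB/s -> MB/min

print(mbps_to_kb_s(1.0))   # -> 128.0 KB/s
print(mb_per_minute(1.0))  # -> 7.5 MB/min
```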

Remote Capture
On some cameras, the connection to transfer images can also be used for remote capture and time lapse
applications.

Video Output
Most digital cameras also provide video (and sometimes audio) output for connection to a TV or VCR. More
flexible cameras allow you to switch output between the PAL and NTSC video standards. Cameras with

infrared remote controls make it easy to do slideshows for friends and family from the comfort of your
armchair.

Print Output
Some digital cameras, e.g. those with PictBridge and USB Direct Print support, allow you to print images
directly from the camera to an enabled printer via a USB cable without the need for a computer. Although
printing directly from a digital camera is convenient, it eliminates one of the key benefits of digital imaging:
the ability to edit and optimize your images.

Effective Pixels
Effective Number of Pixels
A distinction should be made between the number of pixels in a digital image and the number of sensor pixel
measurements that were used to produce that image. In conventional sensors, each pixel has one photodiode
which corresponds with one pixel in the image. A conventional sensor in for instance a 5 megapixel camera
which outputs 2,560 x 1,920 images has an equal number of "effective" pixels, 4.9 million to be precise.
Additional pixels surrounding the effective area are used for demosaicing the edge pixels, to determine "what
black is", etc. Sometimes not even all sensor pixels are used. A classical example was Sony's DSC-F505V
which effectively used only 2.6 megapixel (1,856 x 1,392) out of the 3.34 megapixel available on the sensor.
This was because Sony fitted the then-new 3.34 megapixel sensor into the body of the previous model. As the
sensor was slightly larger, the lens was not able to cover the whole sensor.
So the total number of pixels on the sensor is larger than the effective number of pixels used to create the
output image. Often this higher number is preferred to specify the resolution of the camera for marketing
purposes.
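The effective pixel count follows directly from the output dimensions; for the 2,560 x 1,920 example above:

```python
# Effective pixels = output width x height. Marketing "total pixels" also
# counts the extra sensor pixels around the effective area.

width, height = 2560, 1920
effective = width * height
print(effective)                        # 4,915,200
print(round(effective / 1_000_000, 1))  # ~4.9 megapixels
```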

Interpolated Number of Sensor Pixels


Normally, each pixel in the image is based on the measurement in one pixel location. For instance, a 5
megapixel image is based on 5 million pixel measurements, give or take the use of some pixels surrounding
the effective area. Sometimes a camera with, for instance, a 3 megapixel sensor, is able to create 6 megapixel
images. Here, the camera calculates, or interpolates, 6 million pixels of information based on the
measurement of 3 million effective pixels on sensor. When shooting in JPEG mode, this in-camera
enlargement is of better quality than those performed on your computer because it is done before JPEG
compression is applied. Enlarging JPEG images on your computer also makes the undesirable JPEG
compression artifacts more visible. However, the quality difference is marginal and you are basically dealing
with a slower 3 megapixel camera which fills up your memory cards twice as fast, so it is not a good trade-off. It is
similar to what happens when you use a digital zoom. Interpolation cannot create detail you did not capture.

Fujifilm's Super CCD Sensors


Normally sensor pixels are square. Fujifilm's Super CCD sensors have octagonal pixels, as shown in this
diagram. Therefore, the distance "d2" between the centers of two octagonal pixels is smaller than the distance
"d1" between two conventional square pixels, resulting in larger (better) pixels.

However, the information has to be converted to a digital image with square pixels. From the diagram you can
see that, for a 4 x 4 area of 16 square pixels, only 8 octagonal pixel measurements were used: 2 red pixels, 2
blue pixels, and 4 green pixels (1 full, 4 half, and 4 quarter green pixels). In other words, 6 megapixel Super
CCD images are based on the measurement by only 3 million effective pixels, similar to the above
interpolated example, but with the advantage of larger pixels. In practice the resulting image quality is
equivalent to about 4 megapixel. The drawback is that you have to deal with double the file size (leading to
more storage and slower processing), while enjoying a quality improvement equivalent to only 33% more
pixels.

EXIF
Besides information about the pixels of the image, most cameras store additional information such as the date
and time the image was taken, aperture, shutter speed, ISO, and most other camera settings. These data, also
known as "metadata", are stored in a "header". A common type of header is the EXIF (Exchangeable Image
File) header. EXIF is a standard for storing information created by JEIDA (Japan Electronic Industry
Development Association) to encourage interoperability between imaging devices. EXIF data are very useful
because you do not need to worry about remembering the settings you used when taking the image. Later you
can then analyze on your computer which camera settings created the best results, so you can learn from your
experience.

Example of EXIF 2.2 information extracted with ACDSee 6.0.3


which allows the data preceded by the "pencil" icon to be edited.

Most current image editing and viewing programs are able to display, and even edit the EXIF data. Note that
EXIF data may be lost when saving a file after editing. It's one of the many reasons you should always
preserve your original image and use "Save As" after editing it.

Fill Factor
The fill factor indicates the size of the light sensitive photodiode relative to the surface of the pixel. Because
of the extra electronics required around each pixel the "fill factor" tends to be quite small, especially for
Active Pixel Sensors which have more per pixel circuitry. To overcome this limitation, often an array of
microlenses is placed on top of the sensor.

Lag Time
Lag time is the time between you pressing the shutter release button and the camera actually taking the shot.
This delay varies quite a bit between camera models, and used to be the biggest drawback of digital
photography. The latest digital cameras, especially prosumer and professional SLRs, have virtually no lag
time and react in the same way as conventional film cameras, even in burst mode.
In our reviews we record "Lag Time" and define it as three distinct timings:

Autofocus Lag (Half-press Lag)


(Prime AF/AE)
Many digital camera users prime the autofocus (AF) and autoexposure (AE) systems on their camera by
half-pressing the shutter release. This lag is the amount of time between a half-press of the shutter release and the
camera indicating an autofocus and autoexposure lock on the LCD/viewfinder (ready to shoot). This timing is
normally the most variable as it is affected by the subject matter, current focus position, still or moving
subject, etc.

Shutter Release Lag (Half to Full-press Lag)


(Take shot, AF/AE primed)
The amount of time it takes to take the shot (assuming you have already primed the camera with a half-press)
by pressing the shutter release button all the way down to take the shot.

Total Lag (Full-press Lag)


(Take shot, AF/AE not primed)
The amount of time it takes from a full depression of the shutter release button (without performing a
half-press of the shutter release) to the image being taken. This is more representative of the use of the camera in a
spur of the moment "point and shoot" situation. The Total Lag is not equal to the sum of the Autofocus and
Shutter Release Lags.

LCD

LCD as Viewfinder
Digital compact cameras allow you to use the LCD as a viewfinder by providing a live video feed of the scene
to be captured. The LCDs normally measure between 1.5" and 2.5" diagonally with typical resolutions
between 120,000 and 240,000 pixels. The better LCDs have an anti-reflective coating and/or a reflective sheet
behind the LCD to allow for viewing in bright outdoor daylight. Some LCDs can be flipped out of the body or
angled up or down to make it easier to take low angle or high angle shots. The main LCD is sometimes
supplemented by an electronic viewfinder which uses a smaller 0.5" LCD, simulating the effect of a TTL
optical viewfinder. LCDs on digital SLRs normally do not support live previews and are only used to review
images and change the camera settings.

Digital compact with a twist LCD; fixed LCD on a digital SLR

LCD to Play Back Images


The LCD screen delivers one of the key benefits of digital photography: the ability to play back your images
immediately after shooting. However, since only about 120,000 to 240,000 pixels are used to represent the
several million pixels in the original digital image, further magnification is needed to determine whether
the image is sufficiently sharp or needs reshooting. Not all cameras offer magnification and the
magnification factor differs per model. Some cameras allow basic editing functions such as rotating, resizing
images, trimming video clips, etc. In playback mode you can also select an image from the thumbnail index.
Besides playback, many cameras allow you to "scroll" through the EXIF data, view the histogram, and even
show areas with potential overexposure, as shown in this animation.

LCD Used as Menu


The LCD is also used to change the camera settings via the camera buttons, often allowing you to adjust the
brightness and color settings of the LCD itself. The main LCD is frequently supplemented by one or more
monochrome LCDs (which use less battery power) on top and/or at the rear of the camera showing the most
important camera and exposure settings.

Menu system displayed by the LCD

Example of a monochrome status LCD providing information such as battery and storage card status,
exposure, focus mode, white balance, etc. Often a backlight can be activated via a button.

Manual Focus
Manual focus disables the camera's built-in automatic focus system so you can focus the lens by hand[1].
Manual focus is useful for low light, macro or special effects photography. It is very important when the
autofocus system is unable to get a good focus lock, e.g. in low light situations. Note that some digital
cameras allow you to manually focus only to a few preset distances. Higher-end digital cameras allow
focusing using the normal focus ring on the attached lens, just like in conventional photography.

Technical Footnote
1. In digital cameras, manual focus is often implemented on a fly-by-wire basis, whereby the manual
inputs to focus in or out are relayed to the autofocus system, which effects the change in focus.

Microlenses
To overcome the limitations of a low fill factor, on certain sensors an array of microlenses is placed on top of
the color filter array in order to funnel the photons of a larger area into the smaller area of the light sensitive
photodiode.

Microlens funnels the light of a larger


area into the photodiode (indicated in
red) of the pixel

Pixels

Sensor Pixels
Similar to an array of buckets collecting rain water, digital sensors consist of an array of "pixels" collecting
photons, the minute energy packets of which light consists. The number of photons collected in each pixel is
converted into an electrical charge by the light sensitive photodiode. This charge is then converted into a
voltage, amplified, and converted to a digital value via the analog to digital converter, so that the camera can
process the values into the final digital image.
As explained in the sensor sizes topic, sensors of digital compact cameras are substantially smaller than those
of digital SLRs with a similar pixel count. As a consequence, the pixel size is substantially smaller. This
explains the lower image quality of digital compact cameras, especially in terms of noise and dynamic range.

Typical sensor size of 3, 4, and 5 megapixel digital compact cameras

Typical sensor size of 6 megapixel digital SLRs

Typical pixel size of 4 megapixel compacts and 6 megapixel SLRs

Digital Image Pixels


A digital image is similar to a spreadsheet with rows and columns which stores the pixel values generated by
the sensor. Pixels in a digital image have no size until they are displayed on a monitor or printed. For instance,
on a 4" x 6" print, each pixel in a 5 megapixel image measures only about 0.06mm, while on an 8" x 10" print
it measures about 0.1mm.
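For the 2,560 x 1,920 (5 megapixel) example used earlier, the printed pixel size works out as follows:

```python
# Pixel size on a print: printed edge length divided by the number of
# pixels along that edge, here for a 2,560-pixel-wide image.

MM_PER_INCH = 25.4

def pixel_pitch_mm(print_inches: float, pixels: int) -> float:
    return print_inches * MM_PER_INCH / pixels

print(round(pixel_pitch_mm(6, 2560), 2))   # 4" x 6" print  -> ~0.06 mm
print(round(pixel_pitch_mm(10, 2560), 2))  # 8" x 10" print -> ~0.1 mm
```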

Pixel Density
Pixel Density is a calculation of the number of pixels on a sensor, divided by the imaging area of that sensor.
It can be used to understand how closely packed a sensor is and helps when comparing two cameras with
different sensor sizes or numbers of photosites (pixels). Because the light collecting area and efficiency of
each photosite will vary between technologies and manufacturers, pixel density should not be used as a
predictor for image quality but instead as a parameter to help understand the sensor.

Diagram comparing some common sensor sizes

The APS-C sensors used in most modern DSLRs have an area of approximately 3.5 cm², while the 1/1.7" and
1/2.3" sensors commonly used in compact cameras have areas of 0.43 and 0.29 cm², respectively.
To get some idea of what this means, here is a diagram representing a pixel density of 28 MP/cm² (the pixel
density of the Canon G9). As you can see, this density equates to 12 MP on the G9's 1/1.7" sensor but would
be 91 MP if applied to a sensor as large as the one in a Canon 450D.

Conversely, if we look at the Canon 450D's pixel density of 3.7 MP/cm², we can see that it gives 12 MP on a
Canon APS-C sensor but would give just 1.6 MP on a 1/1.7" sensor like the one in the G9.

The calculation is based on the number of pixels produced at the camera's native resolution (Effective pixels),
so both for conventional Bayer sensors and Foveon type, one photosite is considered equal to one pixel in the
final image. For Fujifilm's Super CCD SR technology, each photosite contains one 's' and one 'r' photodiode
but contributes only one pixel to the final image, so it is classed as a single pixel.
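The pixel density figures quoted above can be reproduced directly. The effective pixel counts and sensor areas below are approximate published values:

```python
# Pixel density = effective megapixels / sensor area in cm^2.
# Approximate values: Canon G9 ~12.1 MP on a 1/1.7" sensor (~0.43 cm^2),
# Canon 450D ~12.2 MP on an APS-C sensor (~3.3 cm^2).

def pixel_density(megapixels: float, area_cm2: float) -> float:
    return megapixels / area_cm2

print(round(pixel_density(12.1, 0.43)))    # G9   -> ~28 MP/cm^2
print(round(pixel_density(12.2, 3.3), 1))  # 450D -> ~3.7 MP/cm^2
```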

Pixel Quality
The marketing race for "more megapixels" would like us to believe that "more is better". Unfortunately, it's
not that simple. The number of pixels is only one of many factors affecting image quality and more pixels is
not always better. The quality of a pixel value can be described in terms of geometrical accuracy, color
accuracy, dynamic range, noise, and artifacts. The quality of a pixel value depends on the number of
photodetectors that were used to determine it, the quality of the lens and sensor combination, the size of the
photodiode(s), the quality of the camera components, the level of sophistication of the in-camera imaging
processing software, the image file format used to store it, etc. Different sensor and camera designs make
different compromises.

Geometrical Accuracy
Geometrical or spatial accuracy is related to the number of pixel locations on the sensor and the ability of the
lens to match the sensor resolution. The resolution topic explains how this is measured at this site.
Interpolation will not improve geometrical accuracy as it cannot create what was not captured.

Color Accuracy
Conventional sensors using a color filter array have only one photodiode per pixel location and will display
some color inaccuracies around the edges because the missing pixels in each color channel are estimated
based on demosaicing algorithms. Increasing the number of pixel locations on the sensor will reduce the
visibility of these artifacts. Foveon sensors have three photodetectors per pixel location and therefore achieve
higher color accuracy by eliminating demosaicing artifacts. Unfortunately their sensitivities are currently
lower than conventional sensors and the technology is only available in a few cameras.

Dynamic Range
The size of the pixel location and the fill factor determine the size of the photodiode and this has a big impact
on the dynamic range. Higher quality sensors are more accurate and will be able to output a larger dynamic
range which can be preserved when storing the pixel values into a RAW image file. A variant of the Fujifilm
Super CCD, the Super CCD SR uses two photodiodes per pixel location with the objective to increase the

dynamic range. A more sensitive photodiode measures the shadows, while a less sensitive photodiode
measures the highlights.

Noise
The pixel value consists of two components:
1. what you want to see (the actual measurement of the value in the scene)
2. what you do not want to see (noise).
The higher (1), and the lower (2), the better the quality of the pixel. The quality of the sensor and the size of
its pixel locations have a great impact on noise and how it changes with increasing sensitivity.

Artifacts
Besides noise, there are many other types of artifacts that determine pixel quality.

Conclusion
Unfortunately there is no single standard objective quality number to compare image quality across different
types of sensors and cameras. For instance, a 3 megapixel Foveon type sensor uses 9 million photodetectors in
3 million pixel locations. The resulting quality is higher than a 3 megapixel but lower than a 9 megapixel
conventional image and it also depends on the ISO level you compare it at. Likewise, a 6 megapixel Fujifilm
Super CCD image is based on measurements in 3 million pixel locations. The quality is higher than a 3
megapixel image but lower than a 6 megapixel image. A 6 megapixel digital compact image will be of lower
quality than a 6 megapixel digital SLR image with larger pixels. To determine an "equivalent" resolution is
tricky at best.
At the end of the day, the most important thing is that you are happy with the quality level that comes out of your
camera for the purpose that you need it for (e.g. website, viewing on computer, printing, enlargements,
publishing, etc.). I strongly recommend that you look beyond megapixels when purchasing a digital
camera.

Sensors
The New Foveon Sensors
The cone-shaped cells inside our eyes are sensitive to red, green, and blue, the "primary colors". We
perceive all other colors as combinations of these primary colors. In conventional photography, the red, green,
and blue components of light expose the corresponding chemical layers of color film. The new Foveon
sensors are based on the same principle, and have three sensor layers that measure the primary colors, as
shown in this diagram. Combining these color layers results in a digital image, basically a mosaic of square
tiles or "pixels" of uniform color which are so tiny that the image appears uniform and smooth. As a relatively
new technology, Foveon sensors are currently only available in the Sigma SD9 and SD10 digital SLRs and
have drawbacks such as relatively low light sensitivity.

The Current Color Filter Array Sensors


All other digital camera sensors only measure the brightness of each pixel. As shown in this diagram, a "color
filter array" is positioned on top of the sensor to capture the red, green, and blue components of light falling
onto it. As a result, each pixel measures only one primary color, while the other two colors are "estimated"
based on the surrounding pixels via software. These approximations reduce image sharpness, which is not the
case with Foveon sensors. However, as the number of pixels in current sensors increases, the sharpness
reduction becomes less visible. Also, the technology is in a more mature stage and many refinements have
been made to increase image quality.

Active Pixel Sensors (CMOS, JFET LBCAST) versus CCD Sensors


Similar to an array of buckets collecting rain water, digital camera sensors consist of an array of "pixels"
collecting photons, the minute energy packets of which light consists. The number of photons collected in
each pixel is converted into an electrical charge by the photodiode. This charge is then converted into a

voltage, amplified, and converted to a digital value via the analog to digital converter, so that the camera can
process the values into the final digital image.
In CCD (Charge-Coupled Device) sensors, the pixel measurements are processed sequentially by circuitry
surrounding the sensor, while in APS (Active Pixel Sensors) the pixel measurements are processed
simultaneously by circuitry within the sensor pixels and on the sensor itself. Capturing images with CCD and
APS sensors is similar to image generation on CRT and LCD monitors respectively.
The most common type of APS is the CMOS (Complementary Metal Oxide Semiconductor) sensor. CMOS
sensors were initially used in low-end cameras but recent improvements have made them more and more
popular in high-end cameras such as the Canon EOS D60 and 10D. Moreover, CMOS sensors are faster,
smaller, and cheaper because they are more integrated (which also makes them more power-efficient), and are
manufactured in existing computer chip plants. The earlier mentioned Foveon sensors are also based on
CMOS technology. Nikon's new JFET LBCAST sensor is an APS using JFET (Junction Field Effect
Transistor) instead of CMOS transistors.

Sensor Linearity
Sensors are linear devices. If you double the amount of light, the sensor output will double, as long as the
pixels are not full[1]. Once a pixel reaches full capacity, it will give a constant or "clipped" output. Human
vision is non-linear, as explained in the dynamic range topic. A doubling of the light in low light conditions
has a much larger effect than in bright conditions. Our vision amplifies the shadows and compresses the
highlights.

Sensors respond in a linear way[1] to light while human vision responds in a non-linear
way which is often approximated by a power curve of about 0.45. So what the sensor
measures as "127" is perceived by the human vision as about "186".
If we expose this sensor until the pixels are full, then the brightest pixels will output a value of 254 (255
would be clipped). If we halve the amount of light, the brightest pixels will output a value of 127. This
implies that the brightest stop uses up half of the 255 available tones and this is where human vision is least
sensitive. There are only a few tones left to describe the darkest stops, where human vision is more sensitive.
This creates a very dark linear RAW image with a histogram skewed to the left.

Sensors (red curve) respond in a linear way to light while human vision (green curve)
responds in a non-linear way which is often very roughly approximated by a gamma
curve of 1/2.2. So what the sensor measures as "127" is perceived by the human vision
as about "186" (or "2047" is perceived as "2988", as indicated in these 12 bit graphs).
The blue curve is a typical tonal curve applied to the linear data to compensate for the
human vision and to compress the dynamic range into the smaller dynamic range of the
monitor or printer in such a way that it is pleasing to the human eye.
Therefore digital cameras apply a tonal curve to the linear raw data so that images viewed on a monitor or
printed images are more pleasing to the eye. Applying a gamma correction of 1/2.2=0.45 will allocate more
tones to the shadow areas and fewer tones to the highlight areas in line with the characteristics of our vision.
When working in a gamma 2.2 color space like sRGB or Adobe RGB the images will appear perceptually
uniform on a monitor or print, avoiding posterization (banding).
In reality, cameras and raw converters go beyond a simple gamma correction and apply an S-shaped curve
(on a logarithmic scale) to the data in order to "compress" the larger dynamic range so it can be represented
on a monitor or print in a way that is pleasing to the human eye.
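As a sketch of this correction, the following Python snippet applies a plain 1/2.2 gamma curve to a linear sensor value (real camera tonal curves, as noted above, are more S-shaped):

```python
def gamma_encode(linear_value, bits=8, gamma=2.2):
    """Map a linear sensor value to a perceptually spaced tonal value
    by applying a 1/gamma power curve."""
    max_level = 2**bits - 1
    return round((linear_value / max_level) ** (1 / gamma) * max_level)

print(gamma_encode(127))            # linear 127 -> about 186
print(gamma_encode(2047, bits=12))  # linear 2047 -> about 2988 (12-bit data)
```

This reproduces the "127 is perceived as about 186" example from the figure caption.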

Technical Footnote
1. In practice, there are some non-linearities in the darkest shadows and brightest highlights. Also,
some cameras, e.g. the Nikon D2X, preprocess the sensor data before the ADC.

Sensor Sizes

Typical sensor size of digital compact cameras

Typical sensor size of digital SLRs

This diagram shows the typical sensor sizes compared to 35mm film. The sensor sizes of digital SLRs are
typically 40% to 100% of the surface of 35mm film. Digital compact cameras have substantially smaller
sensors offering a similar number of pixels. As a consequence, the pixels are much smaller, which is a key
reason for the image quality difference, especially in terms of noise and dynamic range.

Sensor Type Designation


Sensors are often referred to with a "type" designation using imperial fractions such as 1/1.8" or 2/3" which
are larger than the actual sensor diagonals. The type designation harks back to a set of standard sizes given to
TV camera tubes in the 1950s. These sizes were typically 1/2", 2/3", etc. The size designation does not define
the diagonal of the sensor area but rather the outer diameter of the long glass envelope of the tube. Engineers
soon discovered that for various reasons the usable area of this imaging plane was approximately two thirds of
the designated size. The designation has clearly stuck (although it should have been thrown out long ago).
There is no exact mathematical relationship between the diameter of the imaging circle and the sensor size,
although the ratio is always roughly two thirds.
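The conversion from type designation to tube diameter is simple arithmetic; the helper below is a hypothetical illustration (the usable sensor diagonal is then only roughly two thirds of the returned value):

```python
def tube_diameter_mm(type_designation):
    """Convert a sensor "type" such as '1/1.8"' or '2/3"' to the diameter in
    millimeters of the TV camera tube it historically refers to (1" = 25.4 mm)."""
    s = type_designation.rstrip('"')
    if "/" in s:
        numerator, denominator = s.split("/")
        value = float(numerator) / float(denominator)
    else:
        value = float(s)
    return 25.4 * value

print(round(tube_diameter_mm('1/1.8"'), 3))  # 14.111 mm
print(round(tube_diameter_mm('2/3"'), 3))    # 16.933 mm
```

These values match the "Dia. (mm)" column in the table below.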

Common Image Sensor Sizes


In the table below, "Type" refers to the commonly used type designation for sensors, "Aspect Ratio" refers to
the ratio of width to height, "Dia." refers to the diameter of the tube size (simply the Type converted to
millimeters), and "Diagonal / Width / Height" are the dimensions of the sensor's image-producing area.

Type         Aspect Ratio   Dia. (mm)   Diagonal (mm)   Width (mm)   Height (mm)
1/3.6"       4:3            7.056       5.000           4.000        3.000
1/3.2"       4:3            7.938       5.680           4.536        3.416
1/3"         4:3            8.467       6.000           4.800        3.600
1/2.7"       4:3            9.407       6.721           5.371        4.035
1/2.5"       4:3            10.160      7.182           5.760        4.290
1/2.3"       4:3            11.044      7.700           6.160        4.620
1/2"         4:3            12.700      8.000           6.400        4.800
1/1.8"       4:3            14.111      8.933           7.176        5.319
1/1.7"       4:3            14.941      9.500           7.600        5.700
2/3"         4:3            16.933      11.000          8.800        6.600
1"           4:3            25.400      16.000          12.800       9.600
4/3"         4:3            33.867      22.500          18.000       13.500
1.8"[1]      3:2            45.720      28.400          23.700       15.700
35 mm film   3:2            n/a         43.300          36.000       24.000

Implementation Examples
Below is a list of a few digital cameras (as examples) and their sensor size.
Camera                      Sensor Type   Pixel count    Sensor size
Konica Minolta DiMAGE Xg    1/2.7" CCD    3.3 million    5.3 x 4.0 mm
Canon PowerShot S500        1/1.8" CCD    5.0 million    7.2 x 5.3 mm
Nikon Coolpix 8800          2/3" CCD      8.0 million    8.8 x 6.6 mm
Olympus C-8080 Wide Zoom    2/3" CCD      8.0 million    8.8 x 6.6 mm
Sony DSC-F828               2/3" CCD      8.0 million    8.8 x 6.6 mm
Konica Minolta DiMAGE A2    2/3" CCD      8.0 million    8.8 x 6.6 mm
Nikon D70s                  CCD           6.1 million    23.7 x 15.7 mm
Nikon D2X                   CMOS          12.2 million   23.7 x 15.7 mm
Kodak DCS-14n               CMOS          13.8 million   36 x 24 mm
Canon EOS-1Ds Mark II       CMOS          16.6 million   36 x 24 mm

Technical Footnote
1. Also called "APS-C". Many variants exist. APS-C film measures 25.1 x 16.7 mm, Sony's APS-C
measures 21.5 x 14.4 mm, Nikon "DX" sensors measure 23.7 x 15.7 mm, while Canon has several
(smaller and larger) variants, e.g. 22.2 x 14.8 mm and 28.7 x 19.1 mm.

Storage Card
Storage cards are to digital cameras what film is to conventional cameras. They are removable devices
which hold the images taken with the camera. Storage cards are keeping up with the rapidly changing digital
camera market and are trending in the following directions:

larger capacities (several GB) and faster write speeds to accommodate higher resolution images and
shooting in RAW
lower prices per MB or GB of storage
smaller form factors for smaller digital cameras

The only downside of all this good news is a proliferation of storage card formats, making it more difficult to
use cards across different cameras, card readers, and other devices (such as PDAs, MP3 players, etc). The
image and table below give you an idea of how the sizes of typical formats compare:

Card Type                      Dimensions (mm)     Volume (mm³)
CompactFlash II / Microdrive   42.8 x 36.4 x 5.0   7,790
CompactFlash I                 42.8 x 36.4 x 3.3   5,141
Memory Stick                   50.0 x 21.5 x 2.8   3,010
Secure Digital                 32.0 x 24.0 x 2.1   1,613
SmartMedia                     45.0 x 37.0 x 0.8   1,332
MultiMediaCard                 32.0 x 24.0 x 1.4   1,075
Memory Stick Duo               31.0 x 20.0 x 1.6   992
xD Picture Card                25.0 x 20.0 x 1.7   850
Reduced Size MultiMediaCard    18.0 x 24.0 x 1.4   605
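The volume column is simply the product of the three dimensions; a quick cross-check in Python for a few rows of the table:

```python
# Card dimensions in mm (width, height, thickness), taken from the table above.
cards = {
    "CompactFlash II / Microdrive": (42.8, 36.4, 5.0),
    "Memory Stick": (50.0, 21.5, 2.8),
    "Secure Digital": (32.0, 24.0, 2.1),
    "xD Picture Card": (25.0, 20.0, 1.7),
}

for name, (w, h, t) in cards.items():
    print(f"{name}: {round(w * h * t):,} mm^3")
```

The rounded products match the listed volumes, e.g. 42.8 x 36.4 x 5.0 = 7,790 mm³.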

CompactFlash
CompactFlash is a proven and reliable format compatible with many devices and generally ahead of other
formats in terms of storage capacity. Capacities above 2.2 GB require that your camera supports "FAT32".
CompactFlash comes in Type I and II which only differ in thickness (3.3mm and 5.0mm) with Type I being
the most popular for flash memory, while Type II is used by microdrives.

Microdrives
Pioneered by IBM, microdrives are miniature hard disks that come in the CompactFlash Type II format and
typically offer larger storage capacities at a lower cost per megabyte. However, CompactFlash has been
catching up with higher capacity cards. Microdrives use more battery power, create more heat (which can
result in more noise), and have a higher risk of failure because they contain moving parts.

SmartMedia
Larger in surface area than CompactFlash but much thinner, SmartMedia cards are more fragile and known to
be less reliable. The format is gradually being phased out of the market, with virtually no new cameras being
announced that support it.

Sony Memory Stick


Yet another standard, set by Sony but now also manufactured by others such as Lexar Media. The main
drawback is that there are fewer cameras using this type of memory, although their number is gradually
increasing. So if you buy another brand of camera later on, you may not be able to use your memory sticks.
Memory sticks are more expensive per megabyte because there is less competition in the market. Although
their capacity continues to increase, they tend to lag behind CompactFlash in terms of maximum capacity.
Several variants exist such as Sony Memory Stick with Select Function, Sony Memory Stick Pro, Sony
Memory Stick Duo, and Sony MagicGate.

Secure Digital (SD)


Supported by the SD Card Association (SDA), this compact type of memory card allows for fast data transfer
and has built-in security functions to facilitate the secure exchange of content, including copyright (music)
protection. This makes SD cards more expensive than the similar MultiMediaCards discussed next.
SD cards have a small write-protection switch on the side, similar to the one on floppy disks.

MultiMediaCard/SecureMultiMediaCard/Reduced Size MultiMediaCard


(MMC/SecureMMC/RS-MMC)
Supported by the MultiMediaCard Association (MMCA), MultiMediaCards have the same surface area as SD
cards but are 0.7mm thinner and have two fewer pins. Hardware-wise, MMC cards fit in SD card slots, and
many, but not all, SD devices and cameras will accept MMC cards as well. Check the specs before you
buy. Two variants are SecureMMC, similar to SD, and Reduced Size MMC.

xD Picture Card
Another format aimed at very small digital cameras, developed by Olympus, Fujifilm, and Toshiba.

Other Formats
Older formats include floppy disks and PCMCIA cards. A few models support writing to 3-inch CD-R/RW
discs. Some low-end cameras don't have removable storage cards but instead have built-in flash
memory.

Thumbnail Index
When in playback mode, most digital cameras allow you to access the images and video clips on the storage
card via a thumbnail index, an interactive contact sheet. Usually a 2 x 2 or 3 x 3 grid of images is used, and
sometimes the grid size can be specified by the user. Buttons on the camera allow you to navigate through the
thumbnails or select them and, depending on the camera, perform basic operations such as hiding or deleting
images, organizing them into folders, viewing them as a slideshow, printing directly from the camera, etc.
Selecting a thumbnail will display a larger version of the image that fills the whole LCD. Read more in the
LCD topic of this glossary.

Typical 3 x 3 thumbnail index on a digital camera.

Menu that allows you to choose what you want to do with the selected image(s).

Viewfinder
The viewfinder is the "window" you look through to compose the scene. We will discuss the four types of
viewfinder commonly found on digital cameras.

Optical Viewfinder on a Digital Compact Camera


The optical viewfinder on a digital compact camera consists of a simple optical system that zooms at the same
time as the main lens and has an optical path that runs parallel to the camera's main lens. These viewfinders
are small and their biggest problem is framing inaccuracy. Since the viewfinder is positioned above the actual
lens (often there is also a horizontal offset), what you see through the optical viewfinder is different from
what the lens projects onto the sensor. This "parallax error" is most obvious at relatively small subject
distances. In many instances the optical viewfinder only allows you to see a percentage (80 to 90%) of what
the sensor will capture. For more accurate framing, it is recommended to use the LCD instead. For those who
wear corrective glasses it's worth checking to see if the viewfinder has any diopter adjustment.

Because the optical path of the viewfinder runs parallel to the camera's main lens, what you see is different
from what the lens projects onto the sensor.

Sometimes optical viewfinders have parallax error lines on them to indicate what the sensor will see at
relatively small subject distances (e.g. below 1.5 meter or 5 feet).

LCD on a Digital Compact Camera (TTL)


The LCD on a digital compact camera shows in real time what is projected onto the sensor by the lens and
therefore avoids the above parallax errors. This is also called "TTL" or "Through-The-Lens" viewing. Using
the LCD for framing will shorten battery life and it may be difficult to frame accurately in very bright sunlight
conditions, in which case you will have to resort to the optical or electronic viewfinder (see below). The
LCDs on virtually all digital SLRs will only show the image after it is taken and give no live previews.

Example of a digital compact with a twist LCD

Optical Viewfinder on a Digital SLR Camera (TTL)


The optical viewfinder of a digital SLR shows what the lens will project on the sensor via a mirror and a
prism and has therefore no parallax error. When you depress the shutter button, the mirror flips up so the lens
can expose the sensor. As a consequence, and due to sensor limitations, the LCD on most digital SLRs will
only show the image after it is taken and give no live previews. In some models this is resolved by replacing
the mirror by a prism (at the expense of incoming light). The optical viewfinder normally also features an
LCD "status bar" along the bottom of the viewfinder relaying exposure and camera setting information.

The optical TTL viewfinder allows you to look "through the lens".

Optical TTL viewfinder on an SLR with diopter adjustment (slider on the right side).

Electronic Viewfinder (EVF) on a Digital Compact Camera (TTL)


An electronic viewfinder (EVF) functions like the LCD on a digital compact camera and shows in real time
what is projected onto the sensor by the lens. It is basically a small LCD (typically measuring 0.5" diagonally,
with around 235,000 pixels) with a lens in front of it, which allows you to frame more accurately, especially
in bright sunlight. It electronically simulates the effect of the (superior) optical TTL viewfinders found on
digital SLRs and doesn't suffer from parallax errors. Cameras with an EVF have an LCD as well, but no true
optical viewfinder.

Example of an electronic viewfinder

Aliasing
Aliasing refers to the jagged appearance of diagonal lines, edges of circles, etc. due to the square nature of
pixels, the building blocks of digital images.

Aliased: steps or "jaggies" are visible, especially when magnifying the image (compare the normal 1X view
with the enlarged 4X view).

Antialiased: anti-aliasing makes the edges look much smoother at normal magnifications.

Anti-aliasing
Anti-aliasing makes the edges appear much smoother by averaging out the pixels around the edge. In this
example some blue is added to the yellow edge pixels and some yellow is added to the blue edge pixels,
thereby making the transition between the yellow circle and the blue background more gradual and smooth.
Most image editing software packages have "anti-aliasing" options for typing fonts, drawing lines and shapes,
making selections, etc. Anti-aliasing also occurs naturally in digital camera images and smooths out the
"jaggies". Read here about "anti-alias" filters.

Artifacts
Artifacts refer to a range of undesirable changes to a digital image caused by the sensor, optics, and internal
image processing algorithms of the camera. The table below lists some of the common digital imaging
artifacts and links to the corresponding glossary items.

Blooming
Chromatic Aberrations
Jaggies
JPEG Compression
Maze Artifacts
Moiré
Noise
Sharpening Halos

Bits
In computer terms, a "bit" (binary digit) is the smallest piece of information and has a value of either "0" or
"1" which actually corresponds to one of the millions of "switches" inside the computer being "ON" or
"OFF".
In a 1-bit image we can assign the binary value "0" to black and "1" to white.
A 2-bit image can have 2^2 = 4 tones: 00 (black), 01 (dark gray), 10 (light gray), and 11 (white).
An 8-bit image can have 2^8 = 256 tones ranging from 00000000 (0) to 11111111 (255).
JPEG images are often referred to as 24 bit images because they can store up to 8 bits in each of the 3 color
channels and therefore allow for 256 x 256 x 256 = 16.7 million colors.
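The tone counts above follow directly from the powers of two:

```python
# Number of distinct tones for a given bit depth:
for bits in (1, 2, 8):
    print(f"{bits}-bit image: {2**bits} tones")

# A 24-bit JPEG stores 8 bits in each of the three color channels:
print(f"24-bit color: {(2**8)**3:,} colors")  # 16,777,216, i.e. "16.7 million"
```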

32 bit Floating Point Format (for advanced users)


In the topic about sensor linearity we saw that the upper half of the tones is used to describe the brightest stop.
As a consequence, even a 16 bit INTEGER image only has 16 tones to describe the darkest stop in a 12 stop
image, while for the brightest stop, 32,768 tones are available. This is the opposite of human vision which is
more sensitive to shadow detail than to highlight detail. A 32 bit INTEGER image provides more tones but

has the same limitation of having a disproportionate amount of tones for the highlights. 32 bit FLOATING
POINT images address this issue by making more efficient use of the 32 bits. Instead of using 32 bits to
describe 4,294,967,296 integer numbers, 23 bits are allocated to a fraction, 8 bits to an exponent, and 1 bit to
a sign, as follows:
V = (-1)^S * 1.F * 2 ^ (E-127), whereby:
S= Sign, uses 1 bit and can have 2 possible values
F= Fraction, uses 23 bits and can have 8,388,608 possible values
E= Exponent, uses 8 bits and can have 256 possible values
Practically speaking, this allows for an almost infinite number of tones between level "0" and "1", more than 8
million tones between level "1" and "2", and 256 tones between level "65,534" and "65,535", much more in
line with our human vision than a 32 bit integer image[1]. Because of the infinitesimally small numbers that
can be stored, the 32 bit floating point format can hold a virtually unlimited dynamic range. In other
words, 32 bit floating point images can store a virtually unlimited dynamic range in a relatively compact way,
with more detail in the shadows than in the highlights, and take up only twice the size of 16 bits per channel
images, saving memory and processing power. A higher accuracy format allows for smoother dynamic and
tonal range compression. This format is important in the computer graphics industry (gaming and animation)
and is supported by Adobe Photoshop CS2.
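Python's standard struct module can pull the S, E, and F fields out of an actual 32 bit float, matching the formula above:

```python
import struct

def float32_fields(x):
    """Unpack a 32-bit float into its sign, exponent, and fraction fields,
    the S, E, and F of V = (-1)^S * 1.F * 2^(E-127)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    s = bits >> 31            # 1 sign bit
    e = (bits >> 23) & 0xFF   # 8 exponent bits
    f = bits & 0x7FFFFF       # 23 fraction bits
    return s, e, f

print(float32_fields(1.0))   # 1.0 = (-1)^0 * 1.0 * 2^(127-127)
print(float32_fields(-2.0))  # -2.0 = (-1)^1 * 1.0 * 2^(128-127)
```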

Blooming
A pixel on a digital camera sensor collects photons which are converted into an electrical charge by its
photodiode. As explained in the dynamic range topic, once the "bucket" is full, the charge caused by
additional photons will overflow and have no effect on the pixel value, resulting in a clipped or overexposed
pixel value. Blooming occurs when this charge flows over to surrounding pixels, brightening or overexposing
them in the process. In the example below, the charge overflow of the overexposed pixels in the sky causes
the dark pixels at the edges of the leaves and branches to be brightened and overexposed as well. As a result
detail is lost. Blooming can also increase the visibility of purple fringing.

Some sensors come with "anti-blooming gates" which drain away the overflowing charge so it does not affect
the surrounding pixels, except for extreme exposures (very bright edge against a virtually black edge).

Color Spaces
The Additive RGB Colors
The cone-shaped cells inside our eyes are sensitive to red, green, and blue. We perceive all other colors as
combinations of these three colors. Computer monitors emit a mix of red, green, and blue light to generate
various colors. For instance, combining the red and green "additive primaries" will generate yellow. The
animation below shows that if adjacent red and green lines (or dots) on a monitor are small enough, their
combination will be perceived as yellow. Combining all additive primaries will generate white.

The Additive RGB Color Space
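Additive mixing is easy to model with RGB triples (0 to 255 per channel); this sketch simply adds the light contributions channel by channel:

```python
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def add_light(*colors):
    """Add light sources channel by channel, clipping at full intensity."""
    return tuple(min(sum(channel), 255) for channel in zip(*colors))

print(add_light(RED, GREEN))        # yellow
print(add_light(RED, GREEN, BLUE))  # white
```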

The Subtractive CMYk Colors


A print emits light indirectly by reflecting light that falls upon it. For instance, a page printed in yellow
absorbs (subtracts) the blue component of white light and reflects the remaining red and green components
thereby creating a similar effect as a monitor emitting red and green light. Printers mix Cyan, Magenta, and
Yellow ink to create all other colors. Combining these subtractive primaries will generate black, but in
practice black ink is used, hence the term "CMYk" color space, with k standing for the last character of black.

The Subtractive CMYk Color Space

The LAB and Adobe RGB (1998) Color Spaces


Due to technical limitations, monitors and printers are unable to reproduce all the colors we can see with our
eyes, also called the "LAB" color space, symbolized by the horseshoe shape in the diagram below. The group
of colors an average computer monitor can replicate is called the (additive) sRGB color space. The group of
colors a printer can generate is called the (subtractive) CMYk color space. There are many types of CMYk,
depending on the device. From the diagram you can see that certain colors are not visible on an average
computer monitor but printable by a printer and vice versa. Higher-end digital cameras allow you to shoot in
Adobe RGB (1998), which is larger than sRGB and CMYk. This will allow for prints with a wider range of
colors. However, most monitors are only able to display colors within sRGB.

Compression

Image files can be compressed in two ways: lossless and lossy.

Lossless Compression
Lossless compression is similar to what WinZip does. For instance, if you compress a document into a ZIP
file and later extract and open the document, the content will of course be identical to the original. No
information is lost in the process. Only some processing time was required to compress and decompress the
document. TIFF is an image format that can be compressed in a lossless way.

Lossy Compression
Lossy compression reduces the image size by discarding information and is similar to summarizing a
document. For example, you can summarize a 10 page document into a 9 page or 1 page document that
represents the original, but you cannot create the original out of the summary as information was discarded
during summarization. JPEG is an image format that is based on lossy compression.

A Numerical Example
The table below shows how, on average, a five megapixel image (2,560 x 1,920 pixels) is compressed using
the various image formats which are discussed in this glossary. Please note that in reality, the compressed file
sizes will vary significantly with the amount of detail in the image. For example, the table shows 1.3 MB as
file size for an 80% Quality JPEG five megapixel image. However, if the image has a lot of uniform surfaces
(e.g. blue skies), it could be only 0.8 MB at 80% JPEG quality, and if it has a lot of fine detail, it could be 1.7
MB. The purpose of this table is to give a ballpark estimate.
Image Format              Typical File Size (MB)   Comment
Uncompressed TIFF         14.1                     3 channels of 8 bits
Uncompressed 12-bit RAW   7.7                      1 channel of 12 bits
Compressed TIFF           6.0                      Lossless compression
Compressed 12-bit RAW     4.3                      Nearly lossless compression
100% Quality JPEG         2.3                      Hard to distinguish from uncompressed
80% Quality JPEG          1.3                      Sufficient quality for 4" x 6" prints
60% Quality JPEG          0.7                      Sufficient quality for websites[1]
20% Quality JPEG          0.2                      Very low image quality
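The uncompressed TIFF figure can be derived directly from the pixel count; note that the RAW figure in the table is somewhat larger than the raw pixel data alone, presumably because of headers, metadata, and an embedded preview:

```python
# Sizes for a 2,560 x 1,920 pixel ("5 megapixel") image:
pixels = 2560 * 1920                 # 4,915,200 pixels
tiff_bytes = pixels * 3              # 3 channels x 8 bits = 3 bytes per pixel
raw_bytes = pixels * 12 // 8         # 1 channel x 12 bits = 1.5 bytes per pixel

print(round(tiff_bytes / 2**20, 1))  # about 14.1 MB, as in the table
print(round(raw_bytes / 2**20, 1))   # about 7.0 MB of pixel data before overhead
```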

The JPEG topic in this glossary shows an example as to how image quality is affected by JPEG compression.

Footnote
1. For the web you would of course downsample the image to a lower resolution.

Digital Zoom
Optical zoom is the number of times the maximum focal length of a zoom lens is larger than the minimum
focal length. Consumer and prosumer cameras often also come with a digital zoom, which we will discuss
based on the example of a 5 megapixel prosumer camera.

A. Scene shot with a 31mm lens

B. Scene shot with a 50mm lens

Changing the focal length from 31mm to 50mm (50/31=1.6X optical zoom)
reduces the field of view. In image B, the sensor captures the red zone indicated in
image A. In both cases the camera will store 5 megapixel of information into a 5
megapixel image.

C. 1.6X Digital Zoom: cropped and saved at a lower resolution

D. 1.6X Digital Zoom: cropped and upsampled to full resolution

A 1.6X digital zoom will only use the information of a 1,600 x 1,200 crop and
discard the rest (2,560/1.6=1,600 and 1,920/1.6=1,200). In image C, the camera has
captured the same field of view as in image B but only uses 2 megapixel out of the
5 megapixel resolution! If the digital camera has the option to output 1,600 x 1,200
images, the crop will be saved as a 2 megapixel image. In most cases, the 1,600 x
1,200 crop will be upsampled to the full resolution of the camera as indicated in
image D. No additional information is created in the process and the quality of
image D is clearly lower than image B.

To Use Or Not to Use Digital Zoom


So what is the best thing to do? If your purpose is to capture the information shown in image B, using a lens
with focal length of 50mm is of course the best option. If you only have a 31mm lens available (or in general,
if you reached the maximum optical zoom and need to zoom in more) there are three things you can do:
1. The recommended approach is to shoot image A with digital zoom OFF and crop it later the way you
want it.
2. If the 5 megapixel camera has the option to output 2 megapixel images, then shoot with 1.6X digital
zoom ON. The 1,600 x 1,200 crop will be saved without resampling and 2 megapixel of info is
efficiently stored onto a 2 megapixel image. You save card space compared to image A, but lose the
ability to change the way you cropped. This is recommended if card space is critical and is equivalent
to cropping in the camera.
3. It is generally not recommended to shoot with 1.6X digital zoom ON and output it as a 5 megapixel
image because you are combining the disadvantages of 1. (more card space) and 2. (lose cropping
flexibility) without major benefits[1]. You are saving 2 megapixel of information (crop C) into a 5

megapixel upsampled image (D). Upsampling cannot create detail that was not captured by the lens.
Image B (optical zoom) has more detail than image D (digital zoom).
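The crop arithmetic from the example above, worked through in Python:

```python
# A 1.6X digital zoom on a 2,560 x 1,920 (5 megapixel) sensor uses only a
# central crop of the frame:
width, height, zoom = 2560, 1920, 1.6

crop_w = round(width / zoom)   # 1,600
crop_h = round(height / zoom)  # 1,200

print(crop_w, crop_h)
print(round(crop_w * crop_h / 1e6, 2), "megapixels actually used")
```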

Footnote
1. If for some reason your intention is to upsample and you are shooting in JPEG, one benefit of
digital zoom is that the upsampling in the camera is done before JPEG compression. If you shoot A,
crop the 1,600 x 1,200 area, and then upsample to 2,560 x 1,920 on your computer, you will magnify
the JPEG compression artifacts and the upsampled image will not look as good as image D. Because
not all digital zooms are created equal, you may want to verify the quality differences with your
particular digital camera before using digital zoom for this purpose.

Dynamic Range
Dynamic Range of a Sensor
The dynamic range of a sensor is defined as the largest possible signal divided by the smallest possible signal
it can generate. The largest possible signal is directly proportional to the full well capacity of the pixel. The
smallest signal is the noise level when the sensor is not exposed to any light, also called the "noise floor".

Practically, cameras with a large dynamic range are able to capture shadow detail and highlight detail at the
same time. Dynamic range should not be confused with tonal range.
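Expressed in stops (factors of two), this ratio is commonly computed as a base-2 logarithm. The full-well and noise-floor numbers below are illustrative only, not the specs of any real sensor:

```python
import math

def dynamic_range_stops(full_well, noise_floor):
    """Dynamic range as the largest signal divided by the smallest,
    expressed in stops (each stop is a doubling of light)."""
    return math.log2(full_well / noise_floor)

print(round(dynamic_range_stops(40_000, 10), 1))  # a large-pixel DSLR-like sensor
print(round(dynamic_range_stops(5_000, 5), 1))    # a small-pixel compact-like sensor
```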

Dynamic Range of an Image


When shooting in JPEG, the rather contrasty tonal curves applied by the camera may clip shadow and
highlight detail which was present in the RAW data. RAW images preserve the dynamic range of the sensor
and allow you to compress the dynamic range and tonal range by applying a proper tonal curve so that the
whole dynamic range is represented on a monitor or print in a way that is pleasing to the eye. This is similar
to the more extreme example in the tonal range topic which shows how the larger dynamic range and tonal
range of a 32 bit floating point image were compressed.

Pixel Size and Dynamic Range


We learned earlier that a digital camera sensor has millions of pixels collecting photons during the exposure
of the sensor. You could compare this process to millions of tiny buckets collecting rain water. The brighter
the captured area, the more photons are collected. After the exposure, the level of each bucket is assigned a
discrete value as is explained in the analog to digital conversion topic. Empty and full buckets are assigned
values of "0" and "255" respectively, and represent pure black and pure white, as perceived by the sensor. The
conceptual sensor below has only 16 pixels. Those pixels which capture the bright parts of the scene get filled
up very quickly.

Once they are full, they overflow (this can also cause blooming). What flows over gets lost, as indicated in
red, and the values of these buckets all become 255, while they actually should have been different. In other
words, detail is lost. This causes "clipped highlights" as explained in the histogram section. On the other hand,
if you reduce the exposure time to prevent further highlight clipping, as we did in the above example, then
many of the pixels which correspond to the darker areas of the scene may not have had enough time to
capture any photons and might still have value zero (hence the term "clipped shadows" as all the values are
zero, while in reality there might be minor differences).
One of the reasons that digital SLRs have a larger dynamic range is that their sensors have larger pixels. All
things equal (in particular fill factor, "bucket" depth, and exposure time), pixels with a larger exposed surface
can collect more photons in the shadow areas than small pixels during the exposure time that is needed to
prevent the bright pixels from overflowing.

Some Dynamic Range Examples

The dynamic range of the camera was able to capture the dynamic range of the scene. The
histogram indicates that both shadow and highlight detail is captured.

Here the dynamic range of the camera was smaller than the dynamic range of the scene.
The histogram indicates that some shadow and highlight detail is lost.

The limited dynamic range of this camera was used to capture highlight detail at the
expense of shadow detail. The short exposure needed to prevent the highlight buckets
from overflowing gave some of the shadow buckets insufficient time to capture any
photons.

The limited dynamic range of this camera was used to capture shadow detail at the
expense of highlight detail. The long exposure needed by the shadow buckets to collect
sufficient photons resulted in overflowing of some of the highlight buckets.

Here the dynamic range of the scene is smaller than the dynamic range of the camera,
typical when shooting images from an airplane. The histogram can be stretched to cover
the whole tonal range with a more contrasty image as a result, but posterization can occur.

Gamma
Each pixel in a digital image has a certain level of brightness ranging from black (0) to white (1). These pixel
values serve as the input for your computer monitor. Due to technical limitations, CRT monitors output these
values in a nonlinear way:
Output = Input ^ Gamma
When unadjusted, most CRT monitors have a "gamma" of 2.5, which means that pixels with a brightness of
0.5 will be displayed with a brightness of only 0.18 (0.5^2.5) in non-color-managed applications[1]. LCDs, in
particular those on notebooks, tend to have rather irregularly shaped output curves. Calibration via software
and/or hardware ensures that the monitor outputs the image based on a predetermined gamma curve, typically
2.2 for Windows, which is approximately the inverse of the response of human vision. The sRGB and
Adobe RGB color spaces are also based on a gamma of 2.2.
A monitor with a gamma equal to 1.0 would respond in a linear way (Output = Input), and images created on a
system with a gamma of 2.2 would appear "flat" and overly bright in non-color-managed applications[1].
Linear Gamma 1.0: Input 0.5 -> Output 0.50. Image looks too bright and "flat".

Nonlinear Gamma 2.2: Input 0.5 -> Output 0.22. Image looks contrasty and pleasing to the eye.

Nonlinear Gamma 2.5: Input 0.5 -> Output 0.18. Image looks too dark (exaggerated example).
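The monitor response formula can be checked with a few lines of Python (the function name is illustrative, not from any particular library):

```python
def displayed_brightness(pixel_value, gamma):
    """Monitor response: Output = Input ** Gamma, with values in 0..1."""
    return pixel_value ** gamma

# A mid-gray pixel (0.5) on monitors with different gamma values:
for g in (1.0, 2.2, 2.5):
    print(f"gamma {g}: input 0.5 -> output {displayed_brightness(0.5, g):.2f}")
```

Running this reproduces the three outputs quoted above: 0.50, 0.22, and 0.18.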

Technical Footnote for Advanced Users


1. A color-managed application like Adobe Photoshop would still display an sRGB image correctly
when working in sRGB, regardless of the gamma of the (profiled) monitor. However, the image
would be displayed with "banding" because of the 8 bit limitation of most video cards.

Histogram
Histograms are the key to understanding digital images. This 10x4 mosaic contains 40 tiles which we could
sort by color and then stack up accordingly. The higher the pile, the more tiles of that color in the mosaic. The
resulting "histogram" would represent the color distribution of the mosaic.

In the sensor topic we learned that a digital image is basically a mosaic of square tiles or "pixels" of uniform
color which are so tiny that the image appears continuous and smooth. Instead of sorting them by color, we
could sort these pixels into 256 levels of brightness from black (value 0) to white (value 255) with 254 gray
levels in between. Just as we did manually for the mosaic, imaging software automatically sorts the pixels of
the image below into 256 groups (levels) of brightness and stacks them up accordingly. The height of each
"stack" or vertical "bar" tells you how many pixels there are at that particular brightness. "0" and "255" are
the darkest and brightest values, corresponding to black and white respectively.

On this histogram each "stack" or "bar" is one pixel wide. Unlike the mosaic histograms, the 256 bars are
stacked side by side without any space between them, which is why for educational purposes, the vertical bars
are shown in alternating shades of gray, allowing you to distinguish the individual bars. There are no blank
spaces between bars to avoid confusion with blank spaces caused by missing tones in the image. Normally all
bars will be black as indicated in the second histogram.
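The sorting described above is easy to mimic in code. This sketch counts a tiny hypothetical list of 8-bit pixel values into brightness levels:

```python
from collections import Counter

# Hypothetical image as a flat list of 8-bit brightness values (0-255).
pixels = [0, 0, 64, 64, 128, 128, 128, 200, 255, 255]

histogram = Counter(pixels)  # brightness level -> number of pixels

# Print one "bar" per occupied level, tallest bar = most pixels.
for level in sorted(histogram):
    print(f"level {level:3d}: {'#' * histogram[level]} ({histogram[level]} pixels)")
```

A real histogram does exactly this over millions of pixels, then draws the 256 counts side by side.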

Typical Histogram Examples

Correctly exposed image

This is an example of a correctly exposed image with a "good" histogram. The smooth curve downwards
ending in 255 shows that the subtle highlight detail in the clouds and waves is preserved. Likewise, the
shadow area starts at 0 and builds up gradually.

Underexposed image

The histogram indicates there are a lot of pixels with value 0 or close to 0, which is an indication of "clipped
shadows". Some shadow detail is lost forever, as explained in the dynamic range topic. Unless there is a lot of
pure black in the image, there should not be that many pure black pixels. There are also very few pixels in the
highlight area.

Overexposed image

The histogram indicates there are a lot of pixels with value 255 or close to 255, which is an indication of
"clipped highlights". Subtle highlight detail in the clouds and waves is lost. There are also very few pixels in
the shadow area.

Image with too much contrast

This image has both clipped shadows and highlights. The dynamic range of the scene is larger than the
dynamic range of the camera.

Image with too little contrast

This image only contains midtones and lacks contrast, resulting in a hazy image.

Image with modified contrast

When "stretching" the above histogram via a Levels or Curves adjustment, the contrast of the image
improves, but since the tones are redistributed over a wider tonal range, some tones are missing, as indicated
in this "combed" histogram. Too much combing can lead to posterization.

Keeping an Eye on the Histograms when Taking Pictures

Example of camera histogram review with overexposure warning

Most prosumer cameras and all professional cameras allow you to view the histogram on the camera's LCD so
you can adjust the exposure and take the shot again if necessary. Some cameras come with an overexposure
warning, whereby the overexposed areas blink, as indicated in this animation. Usually the blinking areas
indicate that at least one of the channels is clipped.
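A warning of this kind can be approximated in software. This hypothetical check flags clipping when too large a fraction of pixels sits at the extreme levels; the 1% threshold is an arbitrary illustrative choice, not a value any camera is known to use:

```python
def clipping_report(histogram, threshold=0.01):
    """Flag clipped shadows/highlights when more than `threshold`
    (as a fraction of all pixels) sits at level 0 or 255."""
    total = sum(histogram.values())
    return {
        "clipped_shadows": histogram.get(0, 0) / total > threshold,
        "clipped_highlights": histogram.get(255, 0) / total > threshold,
    }

# An "overexposed" histogram: 10% of pixels at pure white.
example = {255: 100, 128: 900}
print(clipping_report(example))  # highlights flagged, shadows fine
```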

Keeping an Eye on the Histograms when Editing


When editing images, it is important to keep an eye on the histogram to avoid the above-mentioned shadow
and highlight clipping and posterization. Adobe Photoshop CS and later versions come with a live histogram
palette, as stated in my Photoshop CS review.

Summary
It is essential to keep an eye on the histogram when taking pictures and when editing them to ensure proper
exposure and avoid losing shadow and highlight detail.

Interpolation
Interpolation (sometimes called resampling) is an imaging method for increasing (or decreasing) the number of
pixels in a digital image. Some digital cameras use interpolation to produce a larger image than the sensor
captured, or to create digital zoom. Virtually all image editing software supports one or more methods of
interpolation. How smoothly images are enlarged without introducing jaggies depends on the sophistication of
the algorithm.
The examples below are all 450% increases in size of this 106 x 40 crop from an image.

Nearest Neighbor Interpolation


Nearest neighbor interpolation is the simplest method and basically makes the pixels bigger. The color of a
pixel in the new image is the color of the nearest pixel of the original image. If you enlarge 200%, one pixel
will be enlarged to a 2 x 2 area of 4 pixels with the same color as the original pixel. Most image viewing and
editing software use this type of interpolation to enlarge a digital image for the purpose of closer examination
because it does not change the color information of the image and does not introduce any anti-aliasing. For
the same reason, it is not suitable to enlarge photographic images because it increases the visibility of jaggies.
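The 200% case described above can be sketched in a few lines: each source pixel simply becomes a 2 x 2 block (grayscale values, illustrative function name):

```python
def nearest_neighbor_scale(image, factor):
    """Enlarge a 2-D grid of pixel values by an integer factor:
    every source pixel is repeated as a factor x factor block."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in image
        for _ in range(factor)  # repeat each source row `factor` times
    ]

tiny = [[1, 2],
        [3, 4]]
print(nearest_neighbor_scale(tiny, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

No new colors are introduced, which is exactly why the blocky jaggies stay fully visible.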

Nearest Neighbor Interpolation

Bilinear Interpolation
Bilinear Interpolation determines the value of a new pixel based on a weighted average of the 4 pixels in the
nearest 2 x 2 neighborhood of the pixel in the original image. The averaging has an anti-aliasing effect and
therefore produces relatively smooth edges with hardly any jaggies.
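A minimal sketch of that weighted 2 x 2 average, sampling a grayscale grid at a fractional position (the function name is illustrative):

```python
def bilinear_sample(image, x, y):
    """Sample a 2-D grid at fractional (x, y) as a weighted average
    of the 4 pixels in the nearest 2 x 2 neighborhood."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(image[0]) - 1)
    y1 = min(y0 + 1, len(image) - 1)
    fx, fy = x - x0, y - y0
    # Interpolate along x on the top and bottom rows, then along y.
    top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
    bottom = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

print(bilinear_sample([[0, 10], [20, 30]], 0.5, 0.5))  # 15.0
```

The averaging is what produces the new in-between tones that smooth the edges.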

Bilinear Interpolation

Bicubic interpolation
Bicubic interpolation is more sophisticated and produces smoother edges than bilinear interpolation. Notice
for instance the smoother eyelashes in the example below. Here, a new pixel is a bicubic function using 16
pixels in the nearest 4 x 4 neighborhood of the pixel in the original image. This is the method most commonly
used by image editing software, printer drivers and many digital cameras for resampling images. As
mentioned in my review, Adobe Photoshop CS offers two variants of the bicubic interpolation method:
bicubic smoother and bicubic sharper.

Bicubic Interpolation

Bicubic Smoother

Bicubic

Bicubic Sharper

Fractal interpolation
Fractal interpolation is mainly useful for extreme enlargements (for large prints) as it retains the shape of
things more accurately, with cleaner, sharper edges and fewer halos and less blurring around the edges than
bicubic interpolation. An example is Genuine Fractals Pro from The Altamira Group.

Fractal Interpolation
There are of course many other methods of interpolation but they're seldom seen outside of more
sophisticated image manipulation packages.

Jaggies
Hardly a technical term, jaggies refers to the visible "steps" of diagonal lines or edges in a digital image. Also
referred to as "aliasing", these steps are simply a consequence of the regular, square layout of pixels.

Increasing Resolution Reduces the Visibility of Jaggies


Jaggies become less visible as the sensor or image resolution increases. The crops below are from pictures of
a flower against a blue sky taken with digital cameras with different resolutions[1]. The low resolution
cameras show very visible jaggies. As we increase the camera resolution from A to D, the steps become
almost invisible in crop D. But they are still present when the image is enlarged, as shown in crop E.

A. 76,800 pixels
B. 307,200 pixels
C. 1.2 megapixels
D. 5 megapixels
E. Red zone in D, 8X enlarged

Anti-aliasing Reduces the Visibility of Jaggies


Digital camera images undergo natural anti-aliasing because the pixels that measure the edges receive
information from both sides of the edge. In this example the pixels that measure the yellow edge of the flower
will also measure some blue sky resulting in values that are somewhere between yellow and blue. This makes
the edges softer than in theoretical example F which has no anti-aliasing.

E. Red zone in D, 8X enlarged

F. No anti-aliasing

If the sensor has a color filter array, the interpolation of the missing information (demosaicing) uses
information of surrounding pixels and will therefore cause additional anti-aliasing.

Sharpening Increases the Visibility of Jaggies


Sharpening increases edge contrast (reduces anti-aliasing) and makes jaggies more visible, as shown in the
sharpening topic. For the same reason, the jaggies in this rooftop against a bright sky are clearly visible: the
high contrast of the scene makes the edges sharper.

JPEG
The most commonly used digital image format is JPEG (Joint Photographic Experts Group). Universally
compatible with browsers, viewers, and image editing software, it allows photographic images to be
compressed by a factor of 10 to 20 compared to the uncompressed original, with very little visible loss in
image quality.

The Theory in a Nutshell


In a nutshell, JPEG rearranges the image information into color and detail information, compressing color
more than detail because our eyes are more sensitive to detail than to color, making the compression less
visible to the naked eye. Secondly, it sorts the detail information into fine and coarse detail and discards the
fine detail first because our eyes are more sensitive to coarse detail than to fine detail. This is achieved by

combining several mathematical and compression methods which are beyond the scope of this glossary but
explained in detail in 123di.
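The full pipeline (color-space conversion, quantization, entropy coding) is indeed beyond the scope of a glossary, but the step that separates coarse from fine detail, the 2-D discrete cosine transform JPEG applies to each 8 x 8 block, can be sketched naively:

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II of an 8x8 block of pixel values. Low-frequency
    coefficients (top-left of the result) hold coarse detail; the
    high-frequency ones hold the fine detail JPEG discards first."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n)
                for y in range(n)
            )
    return out

# A perfectly flat block has all its energy in the DC (0,0) coefficient,
# so almost nothing is lost when its other coefficients are discarded.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct2_8x8(flat)
print(round(coeffs[0][0]))  # 1024
```

Real encoders then quantize these coefficients, zeroing small high-frequency ones, which is where the "hair" and block artifacts originate.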

A Practical Example
JPEG allows you to make a trade-off between image file size and image quality. JPEG compression divides
the image into squares of 8 x 8 pixels which are compressed independently. Initially these squares manifest
themselves through "hair" artifacts around the edges. Then, as you increase the compression, the squares
themselves become visible, as shown in the examples below, which are magnified by a factor of 2.
100% Quality JPEG is very hard to
distinguish from the uncompressed
original which would typically take up 6
times more storage space.
80% Quality JPEG still looks very good,
especially when bearing in mind that this
crop is enlarged 2 times and that the file
size is typically 10 times smaller than the
uncompressed original. Notice some
deterioration along the edges of the
yellow crayon. Most digital cameras will
use a quality level higher than 80% as
their highest quality JPEG setting.
60% Quality JPEG. If you look carefully,
you will notice some of the JPEG squares
and "hair" artifacts around the edges.
However, the unmagnified crop shows
that the quality is sufficient for websites.
It is a great trade-off because the file size
is typically 20 times smaller than the
uncompressed original.
10% Quality JPEG shows serious image
degradation with very visible 8 x 8 JPEG
squares. The only benefit of this low
quality level is that it illustrates what
JPEG is doing in a more subtle way at
higher quality levels. It is unlikely you
will ever compress this aggressively. The
example also shows that compression is
most visible around the edges.

Practical Tips

When editing an image in several sessions, it is recommended to save the intermediate image in an
uncompressed format such as TIFF or the editing program's native format (e.g. PSD for Adobe
Photoshop or PSP for Paint Shop Pro). If, for instance, you save an image in JPEG, close it, open it
again and save it again in JPEG with the same quality setting, the file size will not reduce further, but
the quality will have degraded further. So only compress after all editing is done.
Cameras usually have different JPEG quality settings, such as FINE, NORMAL, BASIC, etc. Unless
you shoot in RAW or TIFF, it is recommended to shoot at the highest available JPEG quality setting.
Note however that some cameras will compress more than others, even at their highest JPEG quality
setting.

The compression article shows some numerical examples of file sizes.

Moiré
If a scene contains areas with repetitive detail which exceeds the resolution of the camera[1], a wavy moiré
pattern[2] can appear, as shown in crop A. There is no moiré in crop B of an image of the same scene taken
with a camera with a higher resolution. Anti-alias[3] filters reduce or eliminate moiré but also reduce image
sharpness.

A. Example of moiré waves.

B. No moiré in this crop taken with a higher resolution camera.

Maze Artifacts
Sometimes, moiré can cause the camera's internal image processing to generate "maze" artifacts.

Example of maze artifacts

Technical Footnotes for Advanced Users


1. When projected onto the sensor.
2. In technical terms this means that the spatial frequency of the subject is higher than the resolution
of the camera, which we defined by the Nyquist frequency. This high frequency detail causes lower
harmonics to appear (frequency aliasing) in the form of moiré waves.
3. They are named anti-alias filters because they reduce the "frequency aliasing" mentioned in the above
footnote. Anti-alias filters tend to soften images and create an "image anti-aliasing" effect.

Noise
The Cause: Sensor Noise
Each pixel in a camera sensor contains one or more light sensitive photodiodes which convert the incoming
light (photons) into an electrical signal which is processed into the color value of the pixel in the final image.

If the same pixel were exposed several times to the same amount of light, the resulting color values
would not be identical but would show small statistical variations, called "noise". Even without incoming
light, the electrical activity of the sensor itself will generate some signal, the equivalent of the background
hiss of audio equipment which is switched on without playing any music. This additional signal is "noisy"
because it varies per pixel (and over time), increases with temperature, and adds to the overall image noise.
It is called the "noise floor". The output of a pixel has to be larger than the noise floor in order to be
significant (i.e. to be distinguishable from noise).

The Effect: Image Noise


Noise in digital images is most visible in uniform surfaces (such as blue skies and shadows) as
monochromatic grain, similar to film grain (luminance noise) and/or as colored waves (color noise). As
mentioned earlier, noise increases with temperature. It also increases with sensitivity, especially the color
noise in digital compact cameras (example D below). Noise also increases as pixel size decreases, which is
why digital compact cameras generate much noisier images than digital SLRs. Professional grade cameras
with higher quality components and more powerful processors that allow for more advanced noise removal
algorithms display virtually no noise, especially at lower sensitivities. Noise is typically more visible in the
red and blue channels than in the green channel. This is why the unmagnified red channel crops in the
examples below are better at illustrating the differences in noise levels.
Blue sky crops (each shown as RGB and as the red channel alone):

Crop  Camera Grade   Camera Type  Pixel Size  ISO  Red Ch. St. Dev.
A     Professional   SLR          Large       100  1.8
B     Prosumer       SLR          Large       200  2.5
C     Prosumer       Compact      Small       100  5.6
D     Prosumer       Compact      Small       800  22.6
E     Crop C after 123di noise reduction           1.4

The standard deviation measured in a uniform area of an image (in the above examples measured in the red
channel) is a good way to quantify image noise as it is an indication of how much the pixels in that area differ
from the average pixel value in that area. The standard deviation in the noisy examples C and D is much
larger than A, B, and E. Crop E shows that noise reduction can go a long way.
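Measuring that figure is straightforward. A sketch over a flat list of values sampled from a uniform region:

```python
import math

def region_std_dev(pixels):
    """Standard deviation of pixel values in a (supposedly uniform)
    image region -- a simple numerical measure of noise."""
    mean = sum(pixels) / len(pixels)
    variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return math.sqrt(variance)

print(region_std_dev([118, 122, 118, 122]))  # 2.0 -- some noise
print(region_std_dev([120, 120, 120, 120]))  # 0.0 -- perfectly clean
```

A perfectly noise-free patch of sky would have a standard deviation of 0; the higher the value, the grainier the patch looks.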

Long Exposure "Stuck Pixels" Noise

Another type of noise, often referred to as "stuck pixels" or "hot pixels" noise, occurs with long exposures (1-2 seconds or more) and appears as a pattern of colored dots (slightly larger than a single pixel). As explained
in the noise reduction topic, long exposure noise is much less visible in the latest digital cameras.

Posterization
Posterization or Banding

Smooth tonal gradations in the sky.

This exaggerated example shows "posterization" or "banding" due to a lack of tones. The histogram is
combed with wide gaps between adjacent tones.

When performing image manipulations such as tonal conversions in low bit environments (e.g. 8 bits/channel
mode), only a limited number of tones may be available to describe a certain area of the image, as shown in
this exaggerated example. This causes visible "banding" or "posterization".
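The effect can be reproduced by quantizing 8-bit values down to a handful of evenly spaced tones (a hypothetical helper, not taken from any editing package):

```python
def posterize(pixels, levels):
    """Quantize 8-bit values to `levels` evenly spaced tones; with few
    levels, the gaps between adjacent tones become visible as banding."""
    step = 255 / (levels - 1)
    return [round(round(p / step) * step) for p in pixels]

smooth_gradient = [0, 32, 64, 96, 128, 160, 192, 224, 255]
print(posterize(smooth_gradient, 4))
# [0, 0, 85, 85, 170, 170, 170, 255, 255]
```

Nine distinct input tones collapse into four, which in a sky gradient would show up as visible bands.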

RAW
Unlike JPEG and TIFF, RAW is not an abbreviation but literally means "raw" as in "unprocessed". A RAW
file contains the original image information as it comes off the sensor before in-camera processing so you can
do that processing afterwards on your PC with special software.

The RAW Storage and Information Advantages


In the Color Filter Array topic, we explained that each pixel in a
conventional sensor only captures one color. This data is typically
10 or 12 bits per pixel, with 12 bits per pixel currently being most
common. This data can be stored as a RAW file. Alternatively, the
camera's internal image processing engine can interpolate the raw
data to determine the three color channels to output a 24 bit JPEG
or TIFF image.
RAW (10 or 12 bit) -> Red Channel (8 bit) + Green Channel (8 bit) + Blue Channel (8 bit) -> JPEG or TIFF (24 bit)

Even though the TIFF file only retains 8 bits/channel of information, it will take up twice the storage space
because it has three 8 bit color channels versus one 12 bit RAW channel. JPEG addresses this issue by
compression, at the cost of image quality. So RAW offers the best of both worlds as it preserves the original
color bit depth and image quality and saves storage space compared to TIFF. Some cameras offer nearly
lossless compressed RAW.
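The storage arithmetic above can be checked with a quick sketch (an 8-megapixel sensor is assumed purely for illustration):

```python
pixels = 8_000_000              # hypothetical 8-megapixel sensor
raw_bits_per_pixel = 12         # one 12-bit raw value per sensor pixel
tiff_bits_per_pixel = 3 * 8     # three 8-bit channels after demosaicing

raw_mb = pixels * raw_bits_per_pixel / 8 / 1_000_000
tiff_mb = pixels * tiff_bits_per_pixel / 8 / 1_000_000
print(f"RAW: {raw_mb:.0f} MB, TIFF: {tiff_mb:.0f} MB")  # TIFF is twice the size
```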

The Flexibility of RAW


In addition, many of the camera settings which were applied to the raw data can be undone when using the
RAW processing software. For instance, sharpening, white balance, levels and color adjustments can be
undone and recalculated based on the raw data. Also, because RAW has 12 bits of available data, you are able
to extract shadow and highlight detail which would have been lost in the 8 bits/channel JPEG or TIFF format.

Disadvantages of RAW
The main drawback is that RAW formats differ between camera manufacturers, and even between cameras, so
dedicated software provided by the manufacturer has to be used. Furthermore, opening and processing RAW
files is much slower than opening JPEG or TIFF files. To address this issue, some cameras offer the option to
shoot in RAW and JPEG at the same time. As cameras become faster and memory cards cheaper, this option
no longer has performance or storage issues. It allows you to organize and edit your images faster with
regular software using the JPEGs, while retaining the option to process in RAW those critical images or
images with problems (e.g. white balance or lost shadow and highlight detail). Another trend is that third
party image editing and viewing software packages are becoming RAW compatible with most popular camera
brands and models. An example is Adobe Photoshop CS. However, as stated in my Photoshop CS review, the
way Photoshop processes RAW files can be different from the way the camera manufacturer's software does
it, and not all settings may be recognized.

Resolution
Sensor Resolution
The number of effective non-interpolated pixels on a sensor is discussed in the topic about pixels.

Image Resolution
The resolution of a digital image is defined as the number of pixels it contains. A 5 megapixel image is
typically 2,560 pixels wide and 1,920 pixels high and has a resolution of 4,915,200 pixels, rounded off to 5
million pixels. It is recommended to shoot at a resolution which corresponds with the camera's effective pixel
count. As explained in the pixels topic, shooting at higher (interpolated) resolutions (if available as an option)
creates only marginal benefits but takes up more card space. Shooting at lower resolutions only makes sense
if you are running out of card space and/or image quality is not important.

Resolution Charts at dpreview.com: Horizontal and Vertical LPH


We measure resolution using the widely accepted PIMA/ISO 12233 camera resolution test chart. This chart is
excellent, not only for measuring pure horizontal and vertical resolution, but also to test the performance of
the sensor with frequencies at various angles. It also offers a good reference point for comparison of
resolution between cameras. The chart is available for every camera which comes through our test labs, both
in the camera reviews and our extensive products database.

Resolution test chart from the Nikon Coolpix 8700 review. The areas indicated in red are shown as crops
below.

Crop A. The black and white lines can be distinguished from one another until position "16", so the
Horizontal LPH is 1,600, as explained below.

Crop B. The black and white lines can be distinguished from one another until position "15", so the
Vertical LPH is 1,500.

Horizontal LPH refers to the number of (vertical) lines measured along the horizontal (x) axis or width of
the image. Crop A shows a test pattern consisting of 9 black lines with 8 white lines in between. From
the crop you can see that below label "16" the 17 lines start to merge and become hard to distinguish from one
another. The crop shows that at label "16", the 17 lines cover a horizontal distance of 26 pixels. Since this
sample image is 2,448 pixels high, the horizontal number of lines per picture height is therefore 2,448/26*17
or 1,600 LPH. So in general a value of "16" on the resolution chart equates to 1,600 lines per picture height
(LPH).
Likewise, the Vertical LPH refers to the number of (horizontal) lines measured along the vertical (y) axis or
height of the image. Crop B shows that in this example the vertical LPH is around 1,500 LPH.
Because the resolution is "normalized" to the picture height, the results of cameras with different aspect ratios
can be compared easily.
Since we normalize on picture height, the absolute number of (horizontal) lines the camera is able to resolve
along the vertical axis (image height) is equal to the vertical LPH. The absolute number of (vertical) lines the
camera is able to resolve along the horizontal axis (image width) is equal to the horizontal LPH multiplied by
the aspect ratio. In this example, this would work out to be 1,600 x 1.333 = 2,133 since the camera has an
aspect ratio of 4:3.
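The calculations above fit in a couple of lines:

```python
def lines_per_picture_height(image_height_px, pattern_px, line_count):
    """LPH: `line_count` chart lines spanning `pattern_px` pixels,
    normalized to an image `image_height_px` pixels high."""
    return image_height_px / pattern_px * line_count

# The chart example from the text: 17 lines over 26 pixels,
# image height 2,448 pixels.
horizontal_lph = lines_per_picture_height(2448, 26, 17)   # about 1,600

# Absolute vertical line count across the width of a 4:3 image:
absolute_horizontal_lines = 1600 * 4 / 3                  # about 2,133
```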
You will immediately notice that 2,133 x 1,500 or 3,200,000 is much lower than the image resolution of
8,000,000 (3,264 x 2,448). This is because the data collected by a sensor with a color filter array has to be
interpolated, and because many cameras have an anti-alias filter. In Foveon sensors, the image resolution is
close to the sensor resolution, as shown in the Sigma SD10 review on this site. Limitations of the optics
required to create an incredibly sharp image on the small sensor area can also affect the image resolution.

Resolution Charts at dpreview.com: 5 Diagonal Lines LPH


Our reviews also state the 5 Diagonal Lines LPH, measured in crop C in this example. Since the chart only
goes up to 1,000 LPH for this camera, the review states 1,000+ as LPH.

Crop C. The black and white 5 diagonal lines can be distinguished from one another until position "10",
the maximum of the chart, so the 5 Diagonal Lines LPH is 1,000+ for this camera.

Resolution Charts at dpreview.com: Absolute and Extinct LPH


The above explanations refer to "Absolute LPH" which is an LPH with clearly defined detail[1]. Our reviews
also state the "Extinct LPH". This is the LPH at which the lines become solid gray. The detail at that level is
beyond the camera's definition. Between the Absolute and Extinct LPHs only some detail can be captured.

Crop D. Around label "18", the black and white lines merge into solid gray, so the Vertical Extinct LPH is
1,800 LPH.

Sensitivity (ISO)
Conventional film comes in different sensitivities (ASA ratings) for different purposes. The lower the
sensitivity, the finer the grain, but the more light is needed. This is excellent for outdoor photography, but for
low-light conditions or action photography (where fast shutter speeds are needed), more sensitive or "fast"
film is used, which is more "grainy".
Likewise, digital cameras have an ISO rating indicating their level of sensitivity to light. ISO 100 is the
"normal" setting for most cameras, although some go as low as ISO 50. The sensitivity can be increased to
200, 400, 800, or even 3,200 on high-end digital SLRs. When increasing the sensitivity, the output of the
sensor is amplified, so less light is needed. Unfortunately, that also amplifies the undesired noise. This
creates grainier pictures, just like in conventional photography, but for different reasons.
It is similar to turning up the volume of a radio with poor reception. Doing so will not only amplify the
(desired) music but also the (undesired) hiss and crackle or "noise". Improvements in sensor technology are
steadily reducing the noise levels at higher ISOs, especially on higher-end cameras. And unlike conventional
film cameras which require a change of film roll or the use of multiple bodies, digital cameras allow you to
instantly and conveniently change the sensitivity depending on the circumstances.

ISO 100

ISO 800

ISO 100 - Red Channel

ISO 800 - Red Channel

The above unmagnified crops of prosumer digital camera images show high levels of color noise at higher
sensitivities. Noise is usually most visible in the red and blue channels.

Sharpening
There are two types of sharpness and it is important not to mix them up. Optical sharpness is defined by the
quality of the lens and the sensor. Software sharpening creates an "optical illusion" of sharpness by making
the edges more contrasty. Software sharpening is of course unable to create detail beyond the camera's
resolution; it can only help to bring out captured detail.

Original / Magnified crop (2X) / Comment

Soft edges before sharpening

Sharper edges after sharpening

Over sharpening results in halos

This simple example shows that normal sharpening creates cleaner edges than the original. Over sharpening
makes the circle look artificially sharp. This is achieved by creating a white external halo (making the light
gray of the background brighter around the circle's edge) and an internal black halo (making the darker gray
of the circle darker around the circle's edge). Because the difference between the white and black halos is
larger than between the gray of the circle and the background, the edge contrast has been increased, creating
the illusion of enhanced sharpness. But the halos are undesirable in photographic images and are extremely
hard to undo, unless you shoot in RAW (see below).
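A one-dimensional sketch of the underlying idea, a crude unsharp mask that adds back the difference between each pixel and a blurred copy (the 3-pixel box blur and the `amount` parameter are illustrative choices, not any camera's actual algorithm):

```python
def sharpen_1d(row, amount=1.0):
    """Boost edge contrast by adding back the difference between each
    pixel and a 3-pixel box blur; a large `amount` creates the halos."""
    blurred = [
        (row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) / 3
        for i in range(len(row))
    ]
    return [p + amount * (p - b) for p, b in zip(row, blurred)]

edge = [10, 10, 10, 50, 50, 50]  # a gray-to-gray edge
print([round(v, 1) for v in sharpen_1d(edge)])
# The pixel just before the edge dips below 10 (dark halo) and the
# pixel just after it overshoots 50 (bright halo).
```

The overshoot on each side of the edge is exactly the halo pair described above; increasing `amount` makes it worse.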

In-camera Sharpening
Digital cameras will, as part of their default image processing, apply some level of sharpening to counteract
the effects of the interpolation of colors during the color filter array decoding process (which softens detail
slightly). Note however that too much in-camera sharpening will create sharpening halos and increase the
visibility of jaggies, noise, and other image artifacts. Prosumer digital cameras and digital SLRs allow users to
control the amount of sharpening applied to an image, or even disable it completely.

Sharpening with Software


If the camera allows you to shoot in RAW, the in-camera sharpening can be undone afterwards via software
on your computer. You can then decide the level of sharpening to apply, depending on the purpose and
avoiding the sharpening halos described above. For instance, for web or monitor viewing you may want to
apply some sharpening to "pull out" fine details of downsampled images. For printing, sharpening should be
applied with caution to avoid the image looking fake and over-processed.
If you shoot in JPEG, it is recommended to apply some in-camera sharpening (e.g. "Low" or "Normal")
because with regular software it is not easy to achieve the same quality as in-camera sharpening. One of the
reasons is that in-camera sharpening is applied before JPEG compression, while sharpening on your computer
is done after JPEG compression, thereby making the edges of the JPEG compression squares more visible. If
the in-camera sharpening was insufficient, you can still apply some additional sharpening with software. This
is much easier than undoing the effects of over sharpening.

Getting Started
If you are new to digital photography and this website, this page will bring you up to speed.

Which Digital Camera Should You buy?


The "best" digital camera depends on your personal needs such as size, budget, your style of photography,
and what you intend to do with the images (e.g. share via email, post on websites, view on a monitor, create
prints, publish them, etc). The "best" pocket size camera for family snapshots shared via e-mail is obviously
different from the "best" camera for professional wildlife photography. The table below will give you an idea
of what type of camera you should be considering, with some examples in each category that link to reviews
on this site.
Description of your needs / Suggested camera type / Some cameras to consider

I want an easy to use camera that is as small as possible, preferably something which looks cool and
trendy, to take snapshots on the go.
Ultra Compact ($): Canon PowerShot S500, Sony DSC-W1, Canon PowerShot S410, Sony DSC-T11,
Sony DSC-T1

I want an affordable and easy to use camera that is compact enough to fit in my handbag or briefcase. It
should have sufficient quality for posting images on a website, viewing them on a monitor or creating
4"x6" prints.
Compact ($): Canon PowerShot A80, Canon PowerShot S60, Canon PowerShot A75, Nikon Coolpix 5200

I want to create high quality 8" x 10" images, have manual control over photographic settings, have a good
zoom range, preferably extendable via converters, and ideally be able to use an external flash. However, I
don't want to carry around a digital SLR with lenses. I can live with some shutter lag and do not intend to
shoot in RAW all the time.
Prosumer Compact or Prosumer SLR-Like ($$): Canon PowerShot Pro1, Canon PowerShot S1 IS,
Panasonic DMC-FZ10, Canon PowerShot G5, Nikon Coolpix 8700, Sony DSC-F828, Olympus C-8080
Wide Zoom, Fujifilm FinePix S7000

I want a digital camera with the benefits of a conventional 35mm film SLR camera: no shutter lag,
interchangeable lenses, an optical viewfinder, full manual control, external flash, etc. My images will be
viewed on high quality monitors, and I need high quality 8" x 10" prints. Price is an important factor in
my decision.
Prosumer SLR ($$ + Lenses): Canon EOS 300D, Nikon D70, Canon EOS 10D

I want a digital camera with the benefits of a conventional 35mm film SLR camera: no shutter lag,
interchangeable lenses, an optical viewfinder, full manual control, external flash, etc. I need the highest
possible resolution and image quality and/or top performance in terms of startup time, shutter lag, and
frames per second. I intend to use the camera for professional purposes.
Professional SLR ($$$ + Lenses): Canon EOS-1Ds, Canon EOS-1D Mark II, Nikon D2H

Should You Buy a Digital Compact or a Digital SLR?


The above table should give you an idea as to what type of camera is best for you. Prices of prosumer
compact cameras and prosumer SLRs are currently very similar; for the SLR you will only have to invest more
in the interchangeable lenses. Price aside, the prosumer compacts have the key benefit of being easier to
carry around than an SLR system with lenses. They also offer live preview on the LCD, which is often
twistable, allowing you to shoot from unusual angles.
Digital SLRs have faster startup times, shorter lag times, and can shoot more frames per second, so you can
"capture the moment". They also allow for longer focal lengths, important for wildlife, sports, and action
photography. Unlike the LCDs on compact cameras, the optical viewfinder allows for more accurate framing
and is unaffected by bright sunlight. The larger sensors on digital SLRs and their ability to shoot in
RAW lead to better image quality, even at high sensitivities. Prosumer compacts which allow shooting in
RAW mode tend to be too slow for this to be practical in most circumstances.

How Many Megapixels Do You Need?


Here is a rough guide to the resolution you should be looking for:

2 megapixels are sufficient for website publishing
3 megapixels are sufficient for 4" x 6" prints and monitor viewing
5 megapixels are sufficient for 8" x 10" prints
higher resolutions are only needed for professional purposes

Note however that it is not just the number of pixels that matters but also the quality of the pixels themselves.

TIFF
TIFF (Tagged Image File Format) is a universal image format that is compatible with most image editing and
viewing programs. It can be compressed in a lossless way, internally with LZW or Zip compression, or
externally with programs like WinZip. While JPEG only supports 8 bits/channel single layer RGB images,
TIFF also supports 16 bits/channel multi-layer CMYK images in PC and Macintosh format. TIFF is therefore
widely used as a final format in the printing and publishing industry.
Many digital cameras offer TIFF output as an uncompressed alternative to compressed JPEG. Due to space
and processing constraints only the 8 bits/channel version is used in digital cameras. Higher-end scanners
offer a 16 bits/channel TIFF option. If available, RAW is a much better alternative for digital cameras than
TIFF.

Tonal Range
The tonal range of a digital image is the number of tones it has to describe the dynamic range. These
conceptual examples show that an image with a wide dynamic range can have a narrow tonal range, and an
image with a low dynamic range can have a wide tonal range.
Conceptual examples: wide and narrow tonal range, each combined with high and low dynamic range

Dynamic Range and Tonal Range of the Sensor


The dynamic range and tonal range of a sensor are related. If a sensor has a dynamic range of, say, 1000:1
and an ADC of at least 10 bits, it automatically has a wide tonal range. Conversely, if a sensor with a 10 bit
ADC is able to output about 1,000 different tones, it must have a dynamic range of at least 1000:1. This is
because the sensor is linear and an ADC samples in equal steps.

Dynamic Range and Tonal Range of the Image


Once you apply a tonal curve to the linear sensor data, the dynamic range and tonal range of the image can
vary independently, depending on what tonal curve you apply. The tonal curve can compress the dynamic
range, the tonal range, or both[1].
When shooting in JPEG, the rather contrasty tonal curves applied by the camera may clip shadow and
highlight detail which was present in the RAW data. RAW images preserve the dynamic range captured by
the sensor and allow you to compress the dynamic range and tonal range by applying a proper tonal curve so
that the whole dynamic range is represented on a monitor or print in a way that is pleasing to the eye. This is
similar to the more extreme example below which shows how the larger dynamic range and tonal range of a
32 bit floating point HDR image were compressed.
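As a rough illustration of applying a tonal curve, the sketch below maps linear 12-bit sensor values to an 8-bit output with a simple gamma curve. The bit depths and the 1/2.2 exponent are illustrative assumptions, not values taken from any particular camera or RAW converter:

```python
def apply_tonal_curve(linear_value, max_in=4095, gamma=1 / 2.2):
    """Map a linear sensor value (0..max_in) to an 8-bit tone (0..255),
    lifting shadows and compressing highlights."""
    normalized = linear_value / max_in
    return round(normalized ** gamma * 255)

# Deep shadows receive many more output tones than a linear mapping would give:
for v in (16, 256, 1024, 4095):
    print(v, "->", apply_tonal_curve(v))
```

Because the curve is steep near black, shadow values are spread over many output tones, which is exactly the "detail is preserved where it is most noticeable" behavior described above.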

Dynamic Range and Tonal Range of a Monitor or Printer - Compression


Monitors and printers have a limited dynamic range. Therefore a tonal curve is applied to the linear raw data
to compress the dynamic range so that it fits in the dynamic range of the monitor or printer. The tonal curve is
chosen so that detail is preserved where it is most noticeable. As a result, the image looks pleasing to the
human eye and the perceived dynamic range is higher, even on a limited dynamic range print.

A. Shadow detail captured

B. Highlight detail captured

C. Shadow and highlight detail combined, but h:s is reduced

In this scene, the shadows were about 2,000 times darker than the highlights (11 stops). As explained in the
dynamic range topic, a digital camera could capture either image A or image B. In image A, the highlights are
clipped because the long exposure needed for the shadow detail to be captured, caused the highlight pixels to
overflow. In image B, to prevent the highlights from being clipped, exposure had to be so short that the sensor
did not receive enough photons to capture shadow detail, resulting in clipped shadows.
In Adobe Photoshop CS2 you can combine several exposure-bracketed images into a high dynamic range
image. How do we represent such a high dynamic range on a limited dynamic range monitor or printer? The
answer is "compression".
In the histogram of images A and B, the red and blue areas show the unclipped shadow and highlight detail
respectively. Compressing both areas by describing them with fewer tones allows both areas to be combined
into a single image that looks pleasing on a monitor or print. The catch is that in the actual scene the values of
the highlights were about 2000 times stronger than the shadows, while on the monitor, and especially on the
print, h:s will be much lower. When viewing Image C on a monitor, it has a high perceived dynamic range
because it looks as if the whole dynamic range of the scene was captured in a single exposure by a camera with
a very high dynamic range. Tonal compression is better done in high bit environments as it avoids
posterization.

White Balance
Color Temperature
Most light sources are not 100% pure white but have a certain "color temperature", expressed in Kelvin. For
instance, the midday sunlight will be much closer to white than the more yellow early morning or late
afternoon sunlight. This table gives rough averages for some typical light sources.

Type of Light              Color Temperature in K
Candle Flame               1,500
Incandescent               3,000
Sunrise, Sunset            3,500
Midday Sun, Flash          5,500
Bright Sun, Clear Sky      6,000
Cloudy Sky, Shade          7,000
Blue Sky                   9,000

White Balance
Normally our eyes compensate for lighting conditions with different color temperatures. A digital camera
instead needs to find a reference point which represents white; it then calculates all the other colors relative
to this white point. For instance, if a halogen light illuminates a white wall, the wall will have a yellow cast,
while in fact it should be white. If the camera knows the wall is supposed to be white, it can compensate all
the other colors in the scene accordingly.
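The white-point idea can be sketched in a few lines of Python. The RGB values are hypothetical, and real cameras apply these gains to the raw sensor data rather than to 8-bit RGB:

```python
def white_balance(pixel, white_ref):
    """Scale an (R, G, B) pixel by per-channel gains chosen so that the
    measured white reference becomes neutral (equal R, G and B)."""
    target = max(white_ref)
    gains = [target / c for c in white_ref]
    return tuple(min(255, round(p * g)) for p, g in zip(pixel, gains))

# A white wall under halogen light might read yellowish, e.g. (250, 230, 180);
# using it as the white reference neutralizes the cast:
print(white_balance((250, 230, 180), white_ref=(250, 230, 180)))  # (250, 250, 250)
```

Every other pixel in the scene is scaled by the same three gains, which is why a wrong white reference shifts all colors at once.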
Most digital cameras feature automatic white balance whereby the camera looks at the overall color of the
image and calculates the best-fit white balance. However, these systems are often fooled, especially if the scene
is dominated by one color, say green, or if there is no natural white present in the scene, as shown in this
example.

The auto white balance was unable to find a white reference, resulting in dull and artificial colors.

The auto white balance got it right this time in a very similar scene because it could use the clouds as its white reference.

Most digital cameras also allow you to choose a white balance manually, typically sunlight, cloudy,
fluorescent, incandescent etc. Prosumer and SLR digital cameras allow you to define your own white balance
reference. Before making the actual shot, you can focus on an area in the scene which should be white or
neutral gray, or at a white or gray target card. The camera will then use this reference when making the actual
shot.

AE Lock
Automatic Exposure lock is the ability to lock exposure settings (aperture and shutterspeed) calculated by the
camera over a series of images. This setting is useful when shooting images which will be stitched together
into a panorama because stitching is much easier if each image has the same exposure.

Aperture
Aperture refers to the size of the opening in the lens that determines the amount of light falling onto the film
or sensor. The size of the opening is controlled by an adjustable diaphragm of overlapping blades similar to
the pupils of our eyes. Aperture affects exposure and depth of field.
Just like successive shutterspeeds, successive apertures halve the amount of incoming light. To achieve this,
the diaphragm reduces the aperture diameter by a factor of 1.4 (the square root of 2) so that the aperture area
is halved at each successive step, as shown on this diagram.

Because of basic optical principles, the absolute aperture sizes and diameters depend on the focal length. For
instance, a 25mm aperture diameter on a 100mm lens has the same effect as a 50mm aperture diameter on a
200mm lens. If you divide the aperture diameter by the focal length, you will arrive at 1/4 in both cases,
independent of the focal length. Expressing apertures as fractions of the focal length is more practical for
photographers than using absolute aperture sizes. These "relative apertures" are called f-numbers or f-stops.
On the lens barrel, the above 1/4 is written as f/4 or F4 or 1:4.
We just learned that the next aperture will have a diameter which is 1.4 times smaller, so the f-stop after f/4
is f/(4 x 1.4), or f/5.6. "Stopping down" the lens from f/4 to f/5.6 will halve the amount of incoming light,
regardless of the focal length. You now understand the meaning of the f-numbers found on lenses:

Because f-numbers are fractions of the focal length, "higher" f-numbers represent smaller apertures.
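As a quick check of this arithmetic, the full-stop series can be generated by repeatedly multiplying by the square root of 2 (a minimal sketch; `f_stop_series` is just an illustrative name):

```python
import math

def f_stop_series(n, start=1.0):
    """Return n successive full f-stops, each sqrt(2) times the previous."""
    return [round(start * math.sqrt(2) ** i, 1) for i in range(n)]

print(f_stop_series(9))
# -> [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]
# (lens barrels round 5.7 and 11.3 to the conventional markings f/5.6 and f/11)
```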

Maximum Aperture or Lens Speed


The "maximum aperture" of a lens is also called its "lens speed". Aperture and shutterspeed are interrelated
via exposure. A lens with a large maximum aperture (e.g. f/2) is called a "fast" lens because the large aperture
allows you to use high (fast) shutterspeeds and still receive sufficient exposure. Such lenses are ideal to shoot
moving subjects in low light conditions.
Zoom lenses specify the maximum aperture at both the wide angle and tele ends, e.g. 28-100mm f/3.5-5.6. A
specification like 28-100mm f/2.8 implies that the maximum aperture is f/2.8 throughout the zoom range.
Such zoom lenses are more expensive and heavier.

Aperture Priority
In "Aperture Priority" mode, the camera allows you to select the aperture over the available range and have
the camera calculate the best shutter speed to expose the image correctly. This is important if you want to
control depth of field or for special effects. Note that because of their high focal length multiplier, a shallow
depth of field is often very hard to achieve with digital compact cameras, even at the largest aperture.

Auto Bracketing
Bracketing is a technique used to take a series of images of the same scene at a variety of different exposures
that "bracket" the metered exposure (or manual exposure). "Auto" simply means the camera will
automatically take these exposures as a burst of 2, 3 or 5 frames with exposure steps of anywhere between
0.3 and 2.0 EV. This can be useful if you're not sure exactly how the shot will turn out or are
worried that the scene has a dynamic range which is wider than the camera can capture. On a digital camera
this can also be used to combine under- and overexposed images together to produce an image that could only
have been taken if the camera had the same dynamic range as the scene, as shown in the example below.
More about this in the tonal range topic.
When setting up for bracketing you can usually select the number of frames to be taken (typically 2, 3 or 5),
the exposure setting and the order in which to take the shots (eg. 0,-,+ or -,0,+ etc.). It is important to note that
the values are exposure compensation values.
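The selectable frame count, step size and order can be sketched as a list of compensation values (`bracket_sequence` is an illustrative helper, not a real camera API):

```python
def bracket_sequence(frames, step_ev):
    """EV compensation values for an auto-bracketing burst of an odd
    number of frames, in -,0,+ order."""
    half = frames // 2
    return [round(i * step_ev, 1) for i in range(-half, half + 1)]

print(bracket_sequence(3, 0.3))  # [-0.3, 0.0, 0.3]
print(bracket_sequence(5, 1.0))  # [-2.0, -1.0, 0.0, 1.0, 2.0]
```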
The extreme example below was taken with auto bracketing of 5 frames at 1.0 EV steps in the -,0,+ order.
Without bracketing, the camera would simply have shot the single frame with an aperture of f/4.0 and a
shutterspeed of 1/160s. The +2.0 EV image was not used in the combination image.

f/7.1, 1/306s, -2.0 EV

f/5.6, 1/224s, -1.0 EV

f/4.0, 1/160s, 0 EV

f/3.1, 1/71s, +1.0 EV

f/2.8, 1/39s, +2.0 EV

Combination of -2, -1, 0, +1 EV

Some digital cameras also allow white balance auto bracketing.

Exposure
The exposure is the amount of light received by the film or sensor and is determined by how wide you open
the lens diaphragm (aperture) and by how long you keep the film or sensor exposed (shutterspeed). The effect
an exposure has depends on the sensitivity of the film or sensor.

The exposure generated by an aperture, shutterspeed, and sensitivity combination can be represented by its
exposure value "EV". Zero EV is defined by the combination of an aperture of f/1 and a shutterspeed of 1s at
ISO 100[1]. Each time you halve the amount of light collected by the sensor (e.g. by doubling shutterspeed or
by halving the aperture area), the EV will increase by 1. For instance, 6 EV represents half the amount of light as 5
EV. High EVs will be used in bright conditions which require a low amount of light to be collected by the
film or sensor to avoid overexposure.
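This definition can be captured in a few lines of Python. It follows the article's convention of folding sensitivity into the exposure value (see the technical footnotes); the function name is illustrative:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """EV relative to f/1 at 1s and ISO 100; +1 EV = half the light."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

print(round(exposure_value(1, 1)))        # 0 EV by definition
print(round(exposure_value(8, 1 / 125)))  # bright conditions: 13 EV
```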
To get a feel for how the three variables interact, consider the effect of changing each one by a single step:

Aperture: opening up one stop doubles the aperture area (less depth of field); stopping down one stop halves it (more depth of field).
Shutterspeed: one step slower doubles the exposure time; one step faster halves it.
Sensitivity: doubling the ISO doubles the effect of the incoming light (more noise); halving the ISO halves the effect (less noise).

Any change that doubles the amount of light[3] collected by the sensor lowers the exposure value[1] by 1; any change that halves it raises the exposure value by 1. For instance, f/8 at 1/125s and ISO 100 corresponds to 13 EV.

From the above it is clear that a certain exposure value can be achieved by a variety of combinations of
aperture, shutterspeed and sensitivity. For instance if you are shooting at ISO 100 with an aperture of f/8 and a
shutterspeed of 1/125s, doubling the shutterspeed to 1/250s (halving the exposure time) and reducing the f-number one stop to f/5.6 (doubling the aperture area) will lead to the same exposure of 13 EV. Or if you double the
shutterspeed to 1/250s (halve the exposure time) while keeping the aperture unchanged at f/8, you could
double the effect of the incoming light by doubling the sensitivity to ISO 200, thereby keeping the EV
constant at 13 EV. Note that doing so will increase noise levels in digital cameras and film grain in
conventional cameras.
In automatic mode, the camera determines the optimal combination of aperture, shutterspeed, and
sensitivity[4] based on the exposure value determined by the light metering system. A high EV indicates
bright conditions, hence the need for high shutterspeeds, high f-numbers, and/or low sensitivities, to avoid
overexposure. When you change the aperture in aperture priority mode, the camera will adjust the
shutterspeed to keep the EV constant. In shutter priority mode, the camera will adjust the aperture to keep the
EV constant.

Technical Footnotes (for the purists)


1. Strictly speaking, the term "exposure value" is used to represent shutterspeed and aperture
combinations. An exposure value which takes into account the ISO sensitivity is called "Light Value"
or LV and represents the luminance of the scene[2]. For the sake of simplicity, as is the case in this
article, Light Value is often referred to as "exposure value", grouping aperture, shutterspeed and
sensitivity in one familiar variable. This is because in a digital camera it is as easy to change
sensitivity as it is to change aperture and shutterspeed. Many digital cameras even offer an auto-ISO
mode. Although sensitivity will not change the amount of light entering the camera, it changes the
effect of it and is therefore a third variable that can be adjusted to achieve an exposure that matches
what is measured by the camera's light meter. As stated in the article, changes in the sensitivity will
affect the noise levels in the image.
Given the automatic metering systems in current cameras, the absolute EV value is less important
than in the days when people were working with exposure tables. What is more important is to
understand the effect of aperture, shutterspeed, and sensitivity on the exposure (and quality) of the
image.
2. There is also a variant, called "Brightness Value" or BV, used in the APEX system.

3. When sensitivity is adjusted, it is not the amount of light but the effect of it that is adjusted.
4. In case the "auto-ISO" option is selected.

Exposure Compensation
The camera's metering system will sometimes determine the wrong exposure value needed to correctly expose
the image. This can be corrected by the "EV Compensation" feature found in prosumer and professional
cameras. Typically the EV compensation ranges from -2.0 EV to +2.0 EV with adjustments in steps of 0.5 or
0.3 EV. Some digital SLRs have wider EV compensation ranges, e.g. from -5.0 EV to +5.0 EV.
It is important to understand that increasing the EV compensation by 1 is equivalent to reducing EV by 1
and will therefore double the amount of light. For instance if the camera's automatic mode determined you
should be using an aperture of f/8 and a shutterspeed of 1/125s at ISO 100 (13 EV) and the resulting image
appears underexposed (e.g. by looking at the histogram), applying a +1.0 EV exposure compensation will
cause the camera to use a shutterspeed of 1/60s or an aperture of f/5.6 to allow for more light (12 EV).
Of course, as you become more familiar with your camera's metering system, you can already apply an EV
compensation before the shooting. For instance if your camera tends to clip highlights and you are shooting a
scene with bright clouds, you may want to set the EV compensation to -0.3 or -0.7 EV.
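The worked example above can be verified numerically (a sketch with an illustrative helper name, not a camera API):

```python
def compensated_shutter(shutter_s, ev_compensation):
    """New shutterspeed after EV compensation: +1 EV doubles the light,
    which the camera can deliver by doubling the exposure time."""
    return shutter_s * 2 ** ev_compensation

new = compensated_shutter(1 / 125, +1.0)
print(round(1 / new, 1))  # 62.5, i.e. ~1/62s; cameras round to the nominal 1/60s
```

Equivalently, the camera could keep the shutterspeed and open the aperture from f/8 to f/5.6 to collect the same extra stop of light.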

Flash Output Compensation


Flash output compensation is similar to exposure compensation and allows you to preset an adjustment value
for the flash output power. Some digital cameras allow you to set this value using the familiar EV range (+/-2
EV), others simply have "high, normal, and low" settings. This feature is useful to compensate when the
camera's flash metering was not perfect and caused under- or overexposure.

Manual
In "Full Manual" mode, you can set both the aperture and the shutterspeed. This gives you ultimate control
over the exposure. This can be useful to ensure that the same exposure is used for a sequence of shots or when
shooting in special circumstances, e.g. shooting in direct sunlight. Higher-end prosumer digital cameras and
all digital SLRs feature full manual exposure. When in full manual exposure mode, the camera will often
display a simulated exposure meter which will indicate how far over- or underexposed the image is compared
to the exposure value calculated by the camera's metering system. Prosumer digital cameras with live LCD
preview will often simulate the effects of the exposure on the live preview.

Metering
The metering system in a digital camera measures the amount of light in the scene and calculates the best-fit
exposure value based on the metering mode explained below. Automatic exposure is a standard feature in all
digital cameras. All you have to do is select the metering mode, point the camera and press the shutter release.
Most of the time, this will result in a correct exposure.
The metering method defines which information of the scene is used to calculate the exposure value and how
it is determined. Metering modes depend on the camera and the brand, but are mostly variations of the
following three types:

Matrix or Evaluative Metering


This is probably the most complex metering mode, offering the best exposure in most circumstances.
Essentially, the scene is split up into a matrix of metering zones which are evaluated individually. The overall
exposure is based on an algorithm specific to that camera, the details of which are closely guarded by the
manufacturer. Often they are based on comparing the measurements to the exposure of typical scenes.

Center-weighted Average Metering


Probably the most common metering method implemented in nearly every digital camera and the default for
those digital cameras which don't offer metering mode selection. This method averages the exposure of the
entire frame but gives extra weight to the center and is ideal for portraits.

Spot (Partial) Metering


Spot metering allows you to meter the subject in the center of the frame (or on some cameras at the selected
AF point). Only a small area of the whole frame is metered and the exposure of the rest of the frame is
ignored. This type of metering is useful for brightly backlit, macro, and moon shots.

Remote Capture
Remote capture software allows a computer to remotely fire a digital camera connected to it. Two key
benefits are that images can be stored directly onto the computer's hard disk and that images can be
immediately previewed on the computer monitor instead of on the small LCD of the camera.

Shutterspeed
The shutterspeed determines how long the film or sensor is exposed to light. Normally this is achieved by a
mechanical shutter between the lens and the film or sensor which opens and closes for a time period
determined by the shutterspeed. For instance, a shutter speed of 1/125s will expose the sensor for 1/125th of a
second. Electronic shutters act in a similar way by switching on the light sensitive photodiodes of the sensor
for as long as is required by the shutterspeed. Some digital cameras feature both electronic and mechanical
shutters.
Shutterspeeds are expressed in fractions of seconds, typically as (approximate) multiples of 1/2, so that each
higher shutterspeed halves the exposure by halving the exposure time: 1/2s, 1/4s, 1/8s, 1/15s, 1/30s, 1/60s,
1/125s, 1/250s, 1/500s, 1/1000s, 1/2000s, 1/4000s, 1/8000s, etc. Long exposure shutterspeeds are expressed in
seconds, e.g. 8s, 4s, 2s, 1s.
The optimal shutterspeed depends on the situation. A useful rule of thumb is to shoot with a shutterspeed
faster than 1/(35mm-equivalent focal length) to avoid blurring due to camera shake. Below that speed a tripod or image stabilization
is needed. If you want to "freeze" action, e.g. in sports photography, you will typically need shutterspeeds of
1/250s or more. But not all action shots need high shutterspeeds. For instance, keeping a moving car in the
center of the viewfinder by panning your camera at the same speed of the car allows for lower shutterspeeds
and has the benefit of creating a background with a motion blur.
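The rule of thumb can be sketched as follows. For digital cameras, the 35mm-equivalent focal length (actual focal length times the focal length multiplier) is what matters; the function name and values are illustrative:

```python
def min_handheld_shutter(focal_length_mm, multiplier=1.0):
    """Slowest shutterspeed (in seconds) for blur-free handheld shots,
    per the 1/(35mm-equivalent focal length) rule of thumb."""
    return 1 / (focal_length_mm * multiplier)

# A 200mm lens on a camera with a 1.6x focal length multiplier:
print(min_handheld_shutter(200, multiplier=1.6))  # 0.003125 s, i.e. 1/320s
```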

This image was shot at 1/500s, freezing the splashing of the waves.

Motion blur created by tracking the car with the camera and shooting at 1/125s. The motion blur and speed effects were further enhanced using techniques described in my interactive e-book.
Prosumer and professional cameras provide shutter priority exposure mode, allowing you to vary the
shutterspeed while keeping exposure constant.

Shutter Priority
In "Shutter Priority" mode, you can select the shutterspeed over the available range and have the camera
calculate the best aperture to expose the image correctly. Shutter speed priority is often used to create special
effects such as blurred water on a river/waterfall or to freeze action in action scenes as illustrated in the
shutterspeed topic of this glossary.

Time Lapse
Cameras with a time lapse feature can be programmed to automatically shoot a number of frames over a
period of time or with a certain time interval between each frame. For instance, a camera on a tripod in time
lapse mode could be set up to shoot frames of a flower opening or a bird building a nest. Some cameras
feature a built-in time lapse mode; others allow you to set up time lapse as part of a Remote Capture
application. This requires the camera to be connected to a computer.

Anti-shake

Another approach to image stabilization is to move the CCD itself so that it compensates for the camera
movement, as implemented in the Konica Minolta DiMAGE A2. The sensor is mounted on a platform which
moves in the opposite direction to the movement of the camera, as determined by motion detectors.
According to Konica Minolta, this "anti-shake" system gives you an additional 3 stops. For example, if you
would normally require a shutterspeed of 1/1000s to shoot a particular scene, you should be able to shoot at
1/125s (8 times slower) with anti-shake enabled. This is very useful when shooting moving subjects in low
light conditions by panning and/or when using long focal lengths.

Anti-shake system implemented in the Konica Minolta DiMAGE A2


Aspect Ratio
The "aspect ratio" of an image is its width divided by its height, usually expressed as a ratio of two integers,
e.g. width/height = 1.5 is expressed as width:height = 3:2.
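Reducing pixel dimensions to this integer form is a simple greatest-common-divisor calculation (a minimal sketch; the resolutions are just typical examples):

```python
from math import gcd

def aspect_ratio(width, height):
    """Express width:height in lowest integer terms."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(3000, 2000))  # 3:2, the shape of 35mm film
print(aspect_ratio(2592, 1944))  # 4:3, typical of 5 megapixel compacts
```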

3:2 aspect ratio of 35mm film, 6"x4" prints, and most digital SLRs

4:3 aspect ratio of most computer monitors and digital compact cameras

Barrel Distortion
Barrel distortion is a lens effect which causes images to be spherised or "inflated". Barrel distortion is
associated with wide angle lenses and typically occurs at the wide end of a zoom lens. The use of converters
often amplifies the effect. It is most visible in images with perfectly straight lines, especially when they are
close to the edge of the image frame. See also the opposite effect, pincushion distortion.

Barrel distortion inflates the square

Example of Barrel Distortion

Barrel Distortion at dpreview.com


We measure barrel distortion in our reviews as the amount a reference line is bent as a percentage of picture
height. For most consumer digital cameras this figure is normally around 1%.

Correcting Barrel Distortion


Barrel distorted images from virtually any digital camera can be corrected easily using software, e.g. Adobe
Photoshop CS2.

Chromatic Aberration
Chromatic Aberration in a Single Lens
Chromatic aberration or "color fringing" is caused by the camera lens not focusing different wavelengths of
light onto the exact same focal plane (the focal length for different wavelengths is different) and/or by the lens
magnifying different wavelengths differently. These types of chromatic aberration are referred to as
"Longitudinal Chromatic Aberration" and "Lateral Chromatic Aberration" respectively and can occur
concurrently. The amount of chromatic aberration depends on the dispersion of the glass.

Longitudinal or Axial Chromatic Aberration: focal length varies with color wavelength

Lateral or Transverse Chromatic Aberration: magnification varies with color wavelength
Chromatic aberration is visible as color fringing around contrasty edges and occurs more frequently around
the edges of the image frame in wide angle shots.

Example of cyan and red fringing

Achromatic / Apochromatic Doublets

Special lens systems (achromatic or apochromatic doublets) using two or more pieces of glass with different
refractive indices can reduce or eliminate this problem. However, even these lens systems are not completely
perfect and can still lead to visible chromatic aberrations, especially at full wide angle.

"Purple Fringing" and Microlenses


Although the above chromatic aberrations can be purple in color under certain circumstances, "Purple
Fringing" usually refers to a typical digital camera phenomenon that is caused by the microlenses. In
simplified terms purple fringing is "chromatic aberration at microlens level". As a consequence, purple
fringing is visible throughout the image frame, unlike normal chromatic aberration. Edges of contrasty
subjects suffer most, especially if the light comes from behind them, as shown in the example below.
Blooming tends to increase the visibility of purple fringing.

Example of purple fringing

Circle of Confusion

This term usually brings up "circles of confusion" around people's eyes, but it does not need to: the concept
is actually rather simple. Depth of field defines the distance range within which things have an acceptable
level of sharpness. Although sharpness is very subjective, it is in general based on an 8" x 10" print viewed
from a distance of one foot. You can, for instance, define that an 8" x 10" print is sharp as long as you can
distinguish 4 lines per mm. That would represent dots of 0.25mm each, or roughly 100 dots per inch (DPI), a
fair approximation. Other areas of the image would of course be sharper. In other words, 0.25mm (250
microns) is the cut-off point where we decide things are no longer sharp and is called the Maximum
Permissible Circle of Confusion. An 8" x 10" print measures 203mm x 254mm and has a diagonal of 325mm,
while 35mm film measures 36mm x 24mm and has a diagonal of 43.27mm, 7.5 times smaller. Since 35mm
film needs to be enlarged 7.5 times to obtain an 8" x 10" print from it, the diameter of the Maximum
Permissible Circle of Confusion must be 7.5 times smaller: 0.25/7.5 = 0.033mm. If you use 8" x 10" large
format film, the CoC remains at 0.25mm as the information on the negative does not need to be enlarged to
create an 8" x 10" print.
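The enlargement arithmetic above generalizes to any format (a sketch; the 0.25mm print criterion and 325mm print diagonal come directly from the text):

```python
import math

def max_coc_mm(sensor_w_mm, sensor_h_mm, print_coc_mm=0.25, print_diag_mm=325):
    """Maximum Permissible Circle of Confusion for a format, scaled down
    by the enlargement needed to reach an 8" x 10" print."""
    enlargement = print_diag_mm / math.hypot(sensor_w_mm, sensor_h_mm)
    return print_coc_mm / enlargement

print(round(max_coc_mm(36, 24), 3))  # 0.033 (mm) for 35mm film, as derived above
```

Smaller digital sensors need more enlargement, so their permissible CoC is proportionally smaller still.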

Converters
Prosumer cameras typically allow the zoom range to be extended via converters. Converters are add-on lens
adapters which widen or narrow the picture angle. For instance, fitting a 0.8X wide angle
converter on a 35mm lens will result in a 28mm picture angle. A 2.0X telephoto converter on a 100mm lens
will give the picture angle of a 200mm lens. Converters often cannot be used across the whole range of a
zoom lens and sometimes only at the end of the zoom range because they would introduce vignetting. Also,
the internal flash may no longer work properly because the converter will cast a shadow and/or the flash
sensor is covered by the converter.
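As a quick sketch of the arithmetic (the function name is just for illustration):

```python
def converted_focal_length(focal_length_mm, converter_factor):
    # A converter simply scales the focal length: a 0.8X factor widens
    # the picture angle, a 2.0X factor narrows it.
    return focal_length_mm * converter_factor

print(converted_focal_length(35, 0.8))   # 0.8X wide converter: ~28mm
print(converted_focal_length(100, 2.0))  # 2.0X tele converter: 200mm
```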

Depth of Field
Depth of field (DOF) is a term which refers to the areas of the photograph both in front and behind the main
focus point which remain "sharp" (in focus). Depth of field is affected by the aperture, subject distance, focal
length, and film or sensor format.
A larger aperture (smaller f-number, e.g. f/2) produces a shallow depth of field: anything behind or in front of the main focus point will appear blurred. A smaller aperture (larger f-number, e.g. f/11) produces a greater depth of field: objects within a certain range behind or in front of the main focus point will also appear sharp.

This setup was used to produce the example below: a picture was taken of three postcards 0.7m apart using a 70mm telephoto lens focused on the first card. At a large aperture of f/2.4 only the first card is in focus, while at f/8 the middle card is sharp and the distant card is almost sharp.
Coming closer to the subject (reducing subject distance) will reduce depth of field, while moving away from
the subject will increase depth of field.
Lenses with shorter focal lengths produce images with larger DOF. For instance, a 28mm lens at f/5.6
produces images with a greater depth of field than a 70mm lens at the same aperture.

Depth of Field Calculator


This depth of field calculator allows you to have a better understanding of the various factors that affect depth of field. For digital cameras, it is important to use the actual focal length and NOT the 35mm equivalent focal length determined by the focal length multiplier.
Inputs: film format or sensor size, lens actual focal length, selected aperture, and subject distance (meters).
Outputs: hyperfocal distance for this lens/aperture combination, near limit of acceptable sharpness, far limit of acceptable sharpness, and total depth of field.
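These outputs follow from the standard thin-lens depth of field formulas. A minimal Python sketch (the default CoC of 0.030mm is an assumption, a common value for the 35mm format):

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.030):
    # Returns (hyperfocal distance, near limit, far limit) in meters
    # using the standard thin-lens approximations.
    s = subject_m * 1000.0  # work in millimeters
    f = focal_mm
    hyperfocal = f * f / (f_number * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    if s >= hyperfocal:
        far = float("inf")  # acceptably sharp all the way to infinity
    else:
        far = s * (hyperfocal - f) / (hyperfocal - s)
    return hyperfocal / 1000.0, near / 1000.0, far / 1000.0

# 70mm lens at f/8 focused at 2m: roughly 1.8m to 2.2m is sharp
h, near, far = depth_of_field(70, 8, 2.0)
```

Plugging in a shorter focal length, a larger f-number, or a greater subject distance widens the near-to-far range, matching the rules of thumb described above.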

Focal Length
The focal length of a lens is defined as the distance in mm from the optical center of the lens to the focal
point, which is located on the sensor or film if the subject (at infinity) is "in focus". The camera lens projects
part of the scene onto the film or sensor. The field of view (FOV) is determined by the angle of view from the
lens out to the scene and can be measured horizontally or vertically. Larger sensors or films have wider FOVs
and can capture more of the scene. The FOV associated with a focal length is usually based on 35mm film photography, given the popularity of that format over other formats.

In 35mm photography, lenses with a focal length of 50mm are called "normal" because they work without reduction or magnification and create images the way we see the scene with our naked eyes (same picture angle of 46°).
Wide angle lenses (short focal length) capture more because they have a wider picture angle, while tele lenses
(long focal length) have a narrower picture angle. Below are some typical focal lengths:
Typical focal lengths and their 35mm format designations:
- < 20mm: Super Wide Angle
- 24mm - 35mm: Wide Angle
- 50mm: Normal Lens
- 80mm - 300mm: Tele
- > 300mm: Super Tele
A change in focal length allows you to come closer to the subject or to move away from it, and therefore has an indirect effect on perspective. Some digital cameras suffer from barrel distortion at the wide angle end and from pincushion distortion at the tele end of their zoom ranges.

35mm Equivalent Focal Length


Focal lengths of digital cameras with a sensor smaller than the surface of a 35mm film can be converted to
their 35mm equivalent using the focal length multiplier.

Optical Zoom (X times zoom) and Digital Zoom


Optical zoom = maximum focal length / minimum focal length
For instance, the optical zoom of a 28-280mm zoom lens is 280mm/28mm or 10X. This means that the size of
a subject projected on the film or sensor surface will be ten times larger at maximum tele (280mm) than at
maximum wide angle (28mm). Optical zoom should not be confused with digital zoom.
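As a one-line sketch of the formula:

```python
def optical_zoom(min_focal_mm, max_focal_mm):
    # "X times" zoom is simply the ratio of the focal length extremes
    return max_focal_mm / min_focal_mm

print(optical_zoom(28, 280))  # 10.0, i.e. a 10X zoom
```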

Focal Length Multiplier


Vincent Bockaert, 123di.com

Many digital SLRs have sensors smaller than the sensitive area of 35mm film. Typically the sensor diagonal
is 1.5 times smaller than the diagonal of 35mm film.

The FLM of a typical 6 megapixel digital SLR is 43.3/28.1, or 1.54X.
As a consequence, a sensor smaller than a 35mm film frame captures only the middle portion of the
information projected by the lens into the 35mm film frame area, resulting in a "cropped field of view". A
35mm film camera would require a lens with a longer focal length to achieve the same field of view. Hence
the term Focal Length Multiplier (FLM). The FLM is equal to the diagonal of 35mm film (43.3mm) divided
by the diagonal of the sensor. Let's now discuss two cases.
Case 1 - Digital SLR and 35mm film camera use lenses with the same FOCAL LENGTH.

Information projected by a 200mm lens onto the 35mm film frame area.

The sensor with an FLM of 1.5X captures only part of the information projected by the 200mm lens into the 35mm film area. This results in a "cropped field of view", equivalent to the field of view of a 200 x 1.5 = 300mm lens on a 35mm film camera (see Case 2). The absolute size of the bird projected onto the sensor is the same as on the 35mm film because the focal length is still 200mm.

A 35mm film camera would require a lens with a focal length of 300mm to achieve the same field of view, as
explained in Case 2 below.
Case 2 - Digital SLR and 35mm film camera use lenses to achieve the same FOV.

Information projected by a 300mm lens onto the 35mm film frame area.

Information projected by a 200mm lens onto the sensor with an FLM of 1.5X. The field of view is the same as the 300mm lens on the 35mm camera. The absolute size of the bird projected onto the sensor is smaller compared to the 35mm film because a lens with a shorter focal length is used (different magnification).

So a 200mm lens on a digital SLR with an FLM of 1.5X will have the field of view of a 300mm lens on a 35mm film camera, which would be heavier and more expensive. Also, because the 35mm equivalent fields of view are achieved with shorter focal lengths, depth of field is larger[1]. This advantage at the tele end becomes a disadvantage at the wide angle end. For instance, a 19mm lens fitted onto a digital SLR with an FLM of 1.5X will only generate the field of view of a 28mm lens fitted on a 35mm film camera.
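The FLM arithmetic can be sketched as follows (the 23.4 x 15.6mm sensor size is an assumption chosen to match the 28.1mm diagonal mentioned above):

```python
import math

FILM_DIAG_MM = math.hypot(36, 24)  # 35mm film diagonal, ~43.3mm

def focal_length_multiplier(sensor_w_mm, sensor_h_mm):
    # FLM = diagonal of 35mm film / diagonal of the sensor
    return FILM_DIAG_MM / math.hypot(sensor_w_mm, sensor_h_mm)

def equivalent_focal_length(actual_focal_mm, flm):
    # Focal length a 35mm film camera would need for the same FOV
    return actual_focal_mm * flm

flm = focal_length_multiplier(23.4, 15.6)        # ~1.54X
print(round(equivalent_focal_length(200, flm)))  # ~308, the "300mm class" FOV
```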

"Digital" SLR Lenses


Most digital SLRs are able to use conventional 35mm lenses. However, such lenses are designed to create an image circle that covers a 35mm film frame and are therefore larger and heavier than necessary for sensors which are smaller than a 35mm film frame. "Digital" lenses (e.g. Canon Short Back Focus Lenses, Nikon DX Lenses, the Olympus Four Thirds System) are lighter because their image circles only cover the sensor area.

Footnote on Digital Compact Cameras


Digital compact cameras are fitted with lenses with short focal lengths to create 35mm equivalent fields of view on their small sensor surfaces. Typically the sensor diagonal is 4 times smaller than the diagonal of 35mm film. A 7mm lens fitted on such a camera will have the same field of view as a 7mm x 4 = 28mm lens on a 35mm film camera. Just like the digital lenses for digital SLRs, these lenses are designed to generate image circles that only cover the smaller sensor, which allows them to be much smaller and cheaper to manufacture. Because of the very short focal lengths used, the depth of field is much larger[1] than with digital SLRs or 35mm film cameras offering the same field of view.

Technical Footnotes (only relevant to advanced users)


1. Assuming the aperture and subject distance remain constant, the increase in depth of field (DOF) due to the reduction in focal length is partially offset by the reduction in the maximum permissible Circle of Confusion (CoC). For a smaller format (e.g. a sensor with an FLM of 1.5X is a smaller format than 35mm film), the maximum permissible CoC is smaller[2], so DOF will be smaller. However, this reduction in DOF is smaller than the increase in DOF caused by the reduction in focal length, so overall DOF will increase, and more so with larger FLMs. You can verify this by using the depth of field calculator on this site.
2. In Case 2 it is easy to understand that if you print an 8" x 10" of both images, the optical information collected by the smaller sensor has to be enlarged more than the optical information collected by the 35mm film, so the maximum permissible CoC for the sensor is smaller.

Image Stabilization
Higher-end binoculars, zoom and telephoto lenses for SLR cameras, and digital video cameras with large zooms often come with image stabilization. Digital still cameras with large zoom lenses also offer image stabilization or variants such as anti-shake.
Image stabilization steadies the image projected into the camera by means of a "floating" optical element, typically steered by fast gyroscopic sensors, which compensates for high frequency vibration (hand shake, for example) at long focal lengths. Canon EF SLR lenses with image stabilization carry an "IS" suffix, while Nikon uses the "VR" (Vibration Reduction) suffix on its image-stabilized Nikkor lenses.
Typically, image stabilization lets you take handheld shots almost two stops slower than with image stabilization off. For example, if you would normally require a shutter speed of 1/500s to shoot a particular scene, you should be able to shoot at 1/125s (4 times slower) with image stabilization. This is very useful when shooting moving subjects in low light conditions by panning and/or when using long focal lengths.
Important footnote: The above "optical" image stabilization is different from the "digital" image stabilization
found in some digital video cameras. "Digital" image stabilization only makes sense for digital video as it
pixel shifts the image frames to create a more stable video image.
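The two-stop rule of thumb is simple doubling arithmetic, as a short sketch:

```python
def slowest_handheld_shutter(required_speed_s, stops_gained=2):
    # Each stop of stabilization doubles the usable exposure time,
    # so two stops means shutter speeds 4 times slower.
    return required_speed_s * 2 ** stops_gained

print(slowest_handheld_shutter(1 / 500))  # 0.008s, i.e. 1/125s
```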

Lenses
Most digital compact cameras have non-interchangeable zoom lenses which have been designed to work with a specific sensor size. Some prosumer models allow you to extend the zoom range via converters. Because of the small sensor sizes, the lenses used in digital compact cameras have to be of much higher optical quality than glass which would be "acceptable" on a 35mm camera. This is less of an issue with digital SLRs because their sensors are much larger.

Typical sensor size of 3, 4, and 5 megapixel digital compact cameras, compared with the typical sensor size of 6 megapixel digital SLRs.

See also focal length.

Macro
In strict photographic terms, "macro" means the optical ability to produce a 1:1 or higher magnification of an
object on the film or sensor. For instance if you photograph a flower with an actual diagonal of 21.6 mm so
that it fills the 35mm film frame (43.3mm diagonal), the flower gets magnified with a ratio of 43.3 to 21.6 or
2:1, or with a magnification of 2X. Macro photography typically deals with magnifications between 1:1 and
50:1 (1X to 50X), while close up photography ranges from 1:1 to 1:10 (1X to 1/10X).
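The flower example can be expressed as a small helper (a sketch; the 43.3mm default is the 35mm film diagonal):

```python
def magnification(subject_diag_mm, format_diag_mm=43.3):
    # Magnification when the subject exactly fills the frame diagonal;
    # 1.0 (1:1) or higher counts as "macro" in the strict sense.
    return format_diag_mm / subject_diag_mm

print(round(magnification(21.6), 1))  # 2.0, i.e. a 2:1 macro shot
```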

From the above it is easy to understand that digital cameras with sensors smaller than 35mm film have better macro capabilities. Indeed, a digital compact camera with a focal length multiplier of 4X can capture the above flower of 21.6mm diagonal with a magnification of only 1:2 (close-up) instead of the 2:1 (macro) required with the 35mm camera. In other words, macro results are achieved with (easier) close-up photography.
On digital cameras there is often a Macro Focus mode which switches the auto focus system to attempt to
focus on subjects much closer to the lens.
We measure macro ability (of cameras with non-interchangeable lenses) in our reviews as the ability of the
lens to get the best possible frame coverage. So a camera which can fill the frame with a subject that is 20mm
wide has better macro capabilities than one which can only capture a 40mm wide subject.
Generation after generation, Nikon Coolpix digital cameras have delivered 'best in class' macro performance without add-on lenses.

Perspective
If you photograph a subject with a tele lens and then want it to appear the same size on the film or sensor when photographing it with a wide angle lens, you have to move closer to the subject. Because this changes the perspective, lenses with different focal lengths are said to "have" a different perspective. Note, however, that changing the focal length without changing the subject distance will not change perspective, as shown in the example below.
A. Scene taken with a 33mm wide angle.

B. Cropped area indicated in image A, taken with a 33mm wide angle.

C. Scene taken with an 80mm tele, with the camera in the same position as in image A (same subject distance). Note that the perspective is the same as in image B, taken with a 33mm wide angle.

D. Scene taken with a 33mm wide angle after coming closer to the subjects so that the width of the two front tiles covers the width of the frame, just like in image C. The perspective is clearly different and the distance between the subjects appears larger than in image C.
Images B and C show that changing the focal length while keeping the subject distance constant has, just like cropping, no effect on perspective.
Image D shows that changing the subject distance while holding the focal length constant will change perspective.
Images C and D show that a tele compresses perspective (makes subjects look closer to one another), while a
wide angle exaggerates perspective (makes subjects look more separated) compared to the "normal" way we
see things with the naked eye. As mentioned earlier, this change in perspective is a direct consequence of the
change in subject distance and thus only an indirect consequence of the change in focal length. Indeed, a wide
angle lens allows you to capture subjects from nearby, while a tele lens allows you to capture distant subjects.

Picture Angle
The field of view is determined by the angle of view from the lens out to the scene and can be measured horizontally or vertically. Because the aspect ratio differs between formats, the more universal picture angle, measured along the diagonal of the scene, is often used. A shorter focal length (such as a 28mm wide angle) produces a wider picture angle, while a longer focal length (such as a 200mm tele) produces a narrower picture angle. In 35mm photography, a 50mm lens is called a normal lens because it produces roughly the same picture angle as the human eye (about 46°).

The picture angle is measured diagonally.

The example below shows the difference between two focal lengths: the 30mm wide angle captures the full scene, while the 100mm tele has a narrower field of view (indicated in red in the wide angle image).

Pincushion Distortion
Pincushion distortion is a lens effect which causes images to be pinched at their center. Pincushion distortion
is associated with tele lenses and typically occurs at the tele end of a zoom lens. The use of converters often
amplifies the effect. It is most visible in images with perfectly straight lines, especially when they are close to
the edge of the image frame. See also the opposite effect, barrel distortion.

Pincushion distortion deflates the square, as the example of pincushion distortion shows.

Pincushion Distortion at dpreview.com


We measure pincushion distortion in our reviews as the amount a reference line is bent, expressed as a percentage of picture height. For most consumer digital cameras, pincushion distortion is lower than barrel distortion, with 0.6% being a typical value.

Correcting Pincushion Distortion


Pincushion-distorted images from virtually any digital camera can be corrected easily using software, e.g. Adobe Photoshop CS2.

Subject Distance
Subject distance is the distance between the camera (lens) and the main subject. Varying the subject distance
will change perspective. Also, varying the subject distance with the same aperture will produce a different
depth of field.

Vignetting
Zoom lenses, especially lower-end ones, can sometimes suffer from vignetting: the barrel or sides of the lens become visible, resulting in dark corners in the image, as shown in this example. The use of converters can also result in vignetting.

Example of vignetting

Storage Comparison
In this article, we will compare magnetic and optical storage.

Storage Cost
Medium                                            CD        DVD       200 GB Hard disk
Type                                              Optical   Optical   Magnetic
Cost (US$)[1]                                     0.25      0.75      100
Capacity (gigabytes)                              0.68      4.4       186
Six megapixel JPEG images[2]                      250       1,500     70,000
Six megapixel uncompressed RAW images[3]          80        500       23,000
Six megapixel JPEG images per US$[2]              1,000     1,500     700
Six megapixel uncompressed RAW images per US$[3]  320       660       230

Since US$1 allows you to store hundreds of images, deciding whether or not you want to keep an image is not worth your time. So I just back up and keep everything I shoot; at a later stage I create a second series with the best images. Image management will be discussed in a future article.

Magnetic Versus Optical Storage


                            Magnetic                                 Optical
Cost                        Low, but higher than optical             Low
Compatibility               High                                     High for CDs, lower for DVDs
Data Stability              Relatively stable, but less than optical Stable, but lifetime depends on quality
Accidental Deletion of Data High[4]                                  Low as it is read-only[5]
Environmental Resistance    Sensitive to shocks and power surges     Sensitive to scratches
Data Recovery               Excellent but can cost time and money    Virtually impossible
Backup Convenience          Very convenient                          Requires burning
Reorganizing Data           Easy                                     Requires re-burning
Traveling Convenience       One 200 GB disk is relatively compact    275 CDs or even 45 DVDs are bulky
So if you make at least one magnetic and at least one optical backup, you are combining the benefits of both
media.

Technical Footnotes
1. Estimation only. Prices vary a lot depending on where you buy, the quantity, the quality, the brand, and the speed rating, and in the case of DVDs whether it is DVD+R or DVD-R.
2. Estimated, depending on the compression and content of the image.
3. Estimated, depending on the content of the image and the camera. Compressed RAW will allow for about 60 to 70% more images.
4. Can be reduced by setting the file attributes to "Read-Only".
5. As explained earlier, risks are higher for multi-session burning.

Storage Issues
If you value your digital images, you should have a proper backup system in place. In this article, we will
discuss storage issues with Magnetic Storage and Optical Storage (discussed in separate glossary entries) and
some backup tips so that you can enjoy your images not only in the short term, but also much further into the
future.

Data Stability
Just like magnets become weaker over time, the magnetic properties of a hard disk will diminish in the very
long term and can be affected by environmental factors such as strong magnetic fields. The materials which
are used to make CDs and DVDs decay over time and the problem is that even minor changes in the data can
make the whole disk unreadable. Different brands and different grades of optical media advertise different life
spans. It also depends on how they are stored, how well you take care of them, how often they are read, etc.
But regardless of the above lifetime issues, even a simple scratch can render your CD or DVD unreadable.
Hard disks are high precision mechanical devices spinning at high speeds, typically 7,200 rpm. So there is
always the possibility of failure due to a shock or a power surge. But even without a physical failure, it is

possible that you suddenly lose the content of your whole hard disk, e.g. due to corruption of the file structure.
So it is important to have multiple backups because data recovery is tricky, as explained below.

Data Recovery
Once a CD or DVD is damaged or corrupted, it is very unlikely that you will be able to recover anything.
Chances of data recovery from hard disks are usually very good, but by no means guaranteed. Also, data
recovery can be time consuming and expensive. So if you store your images on a hard disk, you should have
at least one extra copy on another independent hard disk or on a CD/DVD. By "independent" hard disk I mean
an external hard disk which is only connected to your computer when you do the backup. Two internal hard
disks provide insufficient protection because both can be affected in case of damage due to lightning or loss
of data due to a virus attack. Also avoid storing images in the root directory of a partition as that significantly
lowers chances of data recovery via software.

Data Removal
Sometimes you may have the opposite problem: getting rid of your images permanently, e.g. destroying old backups or cleaning up your hard disk before you sell your computer. CDs and DVDs are easy to destroy, but securely erasing data from your hard disk is not as straightforward as it seems. There are plenty of affordable recovery programs which can recover data from a formatted hard disk. Formatting the hard disk, copying dummy data to it until it is full, and then formatting it again will prevent software-based data recovery and should be sufficient for most of us[1].

Long Term Storage: "Migrate, Consolidate, and Refresh"


If we think in terms of decades instead of years, certain media will become useless in terms of capacity, or
incompatible, or both. A typical example is the floppy disk which can barely store a single 2 megapixel JPEG
image and few computers still come with a floppy drive.
In the nineties, I used 80 MB "magneto-optical" disks [2]. My magneto-optical drive only had drivers up to
Windows 98, so I recently migrated these onto my hard disk via an older computer which still had a parallel
port and Windows 98.
Some of the CDs burned with older burners are no longer recognized by newer drives.
So to avoid compatibility issues, it is advisable to migrate your data to newer media. Since capacities of
magnetic and optical storage are constantly increasing, you can at the same time consolidate your data. For
instance, 500 floppies can be consolidated into a single CD, 58 magneto-optical 80 MB disks can be
consolidated into a single DVD, 275 CDs can fit on a single 200 GB hard disk, etc. So once or twice a decade
you will have to migrate and consolidate your old and small capacity media to the new and larger magnetic or
optical media that become available. This has the additional benefit of "refreshing" your data to overcome the
earlier mentioned issue of long term data stability.

Backup Tips
1. Always maintain at least two independent copies of your images, for instance:
   - one magnetic and one optical (recommended)
   - two magnetic (e.g. internal and external (disconnected) hard disk)
   - two optical
2. To have even more peace of mind, consider:
   - two independent magnetic backups and one optical backup, or
   - one magnetic and two optical backups
3. As a protection against unfortunate incidents such as fire, tornados, floods, etc., store one of your backups in a different location.
4. Be careful with multi-session CDs or DVDs and make sure you verify the data.
5. When buying a new system, "migrate and consolidate" your data to new and larger capacity media. This will at the same time "refresh" your data.

Technical Footnotes
1. Advanced note: (expensive) hardware-based recovery techniques used by forensics and intelligence agencies can reconstruct overwritten data based on physical differences between areas which have been "zero" for a long time and areas which have been "one" for a long time and were only recently changed into a "zero" by overwriting and erasing. More sophisticated erasing programs with multiple random write-and-erase cycles will make even hardware-based recovery impossible.
2. Magneto-optical disks have characteristics of both optical and magnetic storage and are very reliable. They currently come in higher capacities but are less frequently used, rather expensive, and require a dedicated reader/writer.

Magnetic Storage - Hard disks

The building blocks of digital images are "bits", which can either be "zero" or "one". Magnetic storage
devices such as hard disks distinguish a "one" from a "zero" by changing the magnetic properties of the disk
in that location. The great thing about hard disks is that their capacities are constantly increasing while prices
are constantly dropping. Two hundred gigabyte[1] hard disks (3.5" IDE, 7,200rpm, with 8 MB cache) currently retail for under US$100. Such hard disks can hold about 70,000 six megapixel JPEG or 23,000 six megapixel uncompressed RAW images. That's about 700 JPEG or 230 RAW images per dollar.
Portable housings exist with a built-in international 100-240V power supply and USB 2.0 (Hi-Speed) and FireWire IEEE 1394 connections, costing between US$50 and US$100. To avoid overheating, get a case with a large-diameter cooling fan below the hard disk. Note that the market is gradually moving towards SATA (Serial ATA) drives and away from the older and slower IDE drives. However, the choice of external housings for SATA drives is still limited.
If you store images on your computer, it is recommended to store them on a dedicated partition (or, if you have multiple hard disks, ideally on a different physical drive) separate from your operating system (e.g. use C:\ for the operating system and software, and D:\ or E:\ for your images). This ensures that if your operating system crashes and you need to reinstall it, your images will be preserved.

Technical Footnote
1. Usually "200" stands for 200 billion bytes, which is equivalent to 186 gigabytes.
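The conversion is worth spelling out, since drive makers count 10^9 bytes per "GB" while operating systems count 2^30:

```python
def marketing_gb_to_gib(marketed_gb):
    # 200 "GB" on the box = 200 x 10^9 bytes = ~186 binary gigabytes
    return marketed_gb * 10**9 / 2**30

print(round(marketing_gb_to_gib(200)))  # 186
```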

Optical Storage - CDs and DVDs


Optical storage media such as CDs and DVDs are polymer disks which contain the "ones" and "zeros" as
"pits" and "lands" that vary the strength of the drive's laser beam as it passes through the polymer and gets
bounced back to the receiver via the "mirror", which is at the back of the printed top surface.

CDs
Writable CDs (CD-Rs) come in 650 and 700 MB versions. An average CD-R costs 25 cents and can hold
about 250 six megapixel JPEG or 80 six megapixel uncompressed RAW images, so about 1,000 JPEG or 320
RAW images per dollar.

DVDs
Single layer DVDs[1] currently have a maximum capacity of 4.38 GB[2], about 6 times more than a CD.
DVD drives can read CDs but not the other way around. A key benefit of CDs is that they are very universal.
DVDs come in DVD-R, DVD+R, DVD-RW, and DVD+RW formats. Not all DVD drives recognize all
formats, but nowadays most DVD burners support at least -R and +R. It is not recommended to use -RW or +RW for long term archiving purposes as they are rewritable (and pricier). The write-once -R and +R formats prevent accidental overwriting.

Burning Tips
If you burn for example 300 MB of images onto a blank 700 MB CD-R and "close" the disk, you will not be
able to add information in the future. However, if you select the "multi-session" option in your CD-writer
software, you can add additional information in subsequent sessions in the future until the CD is full. Note
that this is not without risks. Sometimes adding a new session can render previous sessions inaccessible. This
is especially true if previous sessions were created using another CD-writer and/or burning software. One of
many reasons to have more than one backup copy.
Most burning software packages have a "verification" option. This will considerably lengthen the burning
session but is safer because the software will verify that the data on your hard disk corresponds to that on the
CD/DVD. Burning errors are not uncommon, especially if you have been using other applications during the
burning process.
Note that the "X" read and write speed specifications of DVD-writers are lower than those on CD-writers.
However, for a CD-writer, 1X stands for 150 KB of data per second, while for a DVD-writer it is 1,385 KB/s
or 9.2 times more. So an 8X DVD-writer will write as much data per second as a 74X CD-writer (8 x 1,385
KB/s = 74 x 150 KB/s).
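The speed comparison can be checked with a couple of constants:

```python
CD_1X_KBPS = 150     # a CD-writer's 1X base rate
DVD_1X_KBPS = 1385   # a DVD-writer's 1X base rate, ~9.2 times higher

def equivalent_cd_x(dvd_x):
    # Express a DVD write speed as the CD "X" rating that moves the
    # same amount of data per second.
    return dvd_x * DVD_1X_KBPS / CD_1X_KBPS

print(round(equivalent_cd_x(8)))  # 74: an 8X DVD writer ~= a 74X CD writer
```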

Technical Footnotes
1. Double layer DVDs allow for 8.5 billion bytes but are less compatible.
2. 4.38 gigabytes equals 4.7 billion bytes (which is what is usually printed on the packaging).
