
What is Remote Sensing?

We perceive the surrounding world through our five senses. Some senses (touch and taste) require contact of our sensing organs with the objects. However, we acquire much information about our surroundings through the senses of sight and hearing, which do not require close contact between the sensing organs and the external objects. In other words, we are performing remote sensing all the time. Generally, remote sensing refers to the activities of recording/observing/perceiving (sensing) objects or events at far away (remote) places. In remote sensing, the sensors are not in direct contact with the objects or events being observed. The information needs a physical carrier to travel from the objects/events to the sensors through an intervening medium. Electromagnetic radiation is normally used as the information carrier in remote sensing. The output of a remote sensing system is usually an image representing the scene being observed. A further step of image analysis and interpretation is required in order to extract useful information from the image. The human visual system is an example of a remote sensing system in this general sense. In a more restricted sense, remote sensing usually refers to the

technology of acquiring information about the earth's surface (land and ocean) and atmosphere using sensors onboard airborne (aircraft, balloons) or spaceborne (satellites, space shuttles) platforms.

Satellite Remote Sensing


In this CD, you will see many remote sensing images around Asia acquired by earth observation satellites. These remote sensing satellites are equipped with sensors looking down to the earth. They are the "eyes in the sky" constantly observing the earth as they go round in predictable orbits.

Effects of Atmosphere
In satellite remote sensing of the earth, the sensors are looking through a layer of atmosphere separating the sensors from the Earth's surface being observed. Hence, it is essential to understand the effects of the atmosphere on the electromagnetic radiation travelling from the Earth to the sensor through the atmosphere. The atmospheric constituents cause wavelength dependent absorption and scattering of radiation. These effects degrade the quality of images. Some of the atmospheric effects can be corrected before the images are subjected to further analysis and interpretation. A consequence of atmospheric absorption is that certain wavelength bands in the electromagnetic spectrum are strongly absorbed and effectively blocked by the atmosphere. The wavelength regions in the electromagnetic spectrum usable for remote sensing are determined by their ability to penetrate the atmosphere. These regions are known as the atmospheric transmission windows. Remote sensing systems are often designed to operate within one or more of the atmospheric windows. These windows exist in the microwave region, some wavelength bands in the infrared, the entire visible region and part of the near ultraviolet region. Although the atmosphere is practically transparent to x-rays and gamma rays, these radiations are not normally used in remote sensing of the earth.

Optical and Infrared Remote Sensing


In Optical Remote Sensing, optical sensors detect solar radiation reflected or scattered from the earth, forming images resembling photographs taken by a camera high up in space. The wavelength region usually extends from the visible and near infrared (commonly abbreviated as VNIR) to the short-wave infrared (SWIR).

Different materials such as water, soil, vegetation, buildings and roads reflect visible and infrared light in different ways. They have different colours and brightness when seen under the sun. The interpretation of optical images requires the knowledge of the spectral reflectance signatures of the various materials (natural or man-made) covering the surface of the earth.

There are also infrared sensors measuring the thermal infrared radiation emitted from the earth, from which the land or sea surface temperature can be derived.

Microwave Remote Sensing


There are some remote sensing satellites which carry passive or active microwave sensors. The active sensors emit pulses of microwave radiation to illuminate the areas to be imaged. Images of the earth surface are formed by measuring the microwave energy scattered by the ground or sea back to the sensors. These satellites carry their own "flashlight" emitting microwaves to illuminate their targets. The images can thus be acquired day and night. Microwaves have an additional advantage as they can penetrate clouds. Images can be acquired even when there are clouds covering the earth surface. A microwave imaging system which can produce high resolution images of the Earth is the synthetic aperture radar (SAR). The intensity in a SAR image depends on the amount of microwave backscattered by the target and received by the SAR antenna. Since the physical mechanisms responsible for this backscatter are different for microwaves, compared to visible/infrared radiation, the interpretation of SAR images requires knowledge of how the microwaves interact with the targets.

Image Processing and Analysis

Remote sensing images are normally in the form of digital images. In order to extract useful information from the images, image processing techniques may be employed to enhance the image to help visual interpretation, and to correct or restore the image if it has been subjected to geometric distortion, blurring or degradation by other factors. There are many image analysis techniques available and the methods used depend on the requirements of the specific problem concerned. In many cases, image segmentation and classification algorithms are used to delineate different areas in an image into thematic classes. The resulting product is a thematic map of the study area. This thematic map can be combined with other databases of the test area for further analysis and utilization.



The human visual system is an example of a remote sensing system in the general sense. The sensors in this example are the two types of photosensitive cells, known as the cones and the rods, at the retina of the eyes. The cones are responsible for colour vision. There are three types of cones, each being sensitive to one of the red, green, and blue regions of the visible spectrum. Thus, it is not coincidental that modern computer display monitors make use of the same three primary colours to generate a multitude of colours for displaying colour images. The cones are insensitive under low light illumination conditions, when their job is taken over by the rods. The rods are sensitive only to the total light intensity. Hence, everything appears in shades of grey when there is insufficient light.

As the objects/events being observed are located far away from the eyes, the information needs a carrier to travel from the object to the eyes. In this case, the information carrier is the visible light. The objects reflect/scatter the visible light, a part of the ambient light falling onto them. Part of the scattered light is intercepted by the eyes, forming an image on the retina after passing through the optical system of the eyes. The electrical signals generated at the retina are carried via the nerve fibres to the brain, the central processing unit (CPU) of the visual system. These signals are processed and interpreted at the brain, with the aid of previous experiences. When operating in this mode, the visual system is an example of a "Passive Remote Sensing" system which depends on an external source of energy to operate. We all know that this system won't work in darkness. However, we can still see at night if we provide our own source of illumination by carrying a flashlight and shining the beam towards the object we want to observe. In this case, we are performing "Active Remote Sensing", by supplying our own source of energy for illuminating the objects.



The Planet Earth

The planet Earth is the third planet in the solar system, located at a mean distance of about 1.50 x 10^8 km from the sun, with a mass of 5.97 x 10^24 kg. Descriptions of the shape of the earth have evolved from the flat earth model and the spherical model to the currently accepted ellipsoidal model

derived from accurate ground surveying and satellite measurements. A number of reference ellipsoids have been defined for use in identifying the three-dimensional coordinates (i.e. position in space) of a point on or above the earth surface for the purpose of surveying, mapping and navigation. The reference ellipsoid in the World Geodetic System 1984 (WGS-84) commonly used in the satellite Global Positioning System (GPS) has the following parameters:

Equatorial Radius = 6378.1370 km
Polar Radius = 6356.7523 km

The earth's crust is the outermost layer of the earth's land surface. About 29.1% of the earth's crust area is above sea level. The rest is covered by water. A layer of gaseous atmosphere envelopes the earth's surface.

Atmosphere

The Earth's Atmosphere

The earth's surface is covered by a layer of atmosphere consisting of a mixture of gases and other solid and liquid particles. The gaseous materials extend to several hundred kilometers in altitude, though there is no well defined boundary for the upper limit of the atmosphere. The first 80 km of the atmosphere contains more than 99% of the total mass of the earth's atmosphere.

Vertical Structure of the Atmosphere

The vertical profile of the atmosphere is divided into four layers: troposphere, stratosphere, mesosphere and thermosphere. The tops of these layers are known as the tropopause, stratopause, mesopause and thermopause, respectively.


Troposphere: This layer is characterized by a decrease in temperature with respect to height, at a rate of about 6.5°C per kilometer, up to a height of about 10 km. All the weather activities (water vapour, clouds, precipitation) are confined to this layer. A layer of aerosol particles normally exists near to the earth surface. The aerosol concentration decreases nearly exponentially with height, with a characteristic height of about 2 km.

Stratosphere: The temperature at the lower 20 km of the stratosphere is approximately constant, after which the temperature increases with height, up to an altitude of about 50 km. Ozone exists mainly at the stratopause. The troposphere and the stratosphere together account for more than 99% of the total mass of the atmosphere.

Mesosphere: The temperature decreases in this layer from an altitude of about 50 km to 85 km.

Thermosphere: This layer extends from about 85 km upward to several hundred kilometers. The temperature may range from 500 K to 2000 K. The gases exist mainly in the form of thin plasma, i.e. they are ionized due to bombardment by solar ultraviolet radiation and energetic cosmic rays.

The term upper atmosphere usually refers to the region of the atmosphere above the troposphere. Many remote sensing satellites follow the near polar sun-synchronous orbits at a height around 800 km, which is well above the thermopause.


Atmospheric Constituents
The atmosphere consists of the following components:

Permanent Gases: These are gases present in nearly constant concentration, with little spatial variation. About 78% by volume of the atmosphere is nitrogen while the life-sustaining oxygen occupies 21%. The remaining one percent consists of the inert gases, carbon dioxide and other gases.

Gases with Variable Concentration: The concentration of these gases may vary greatly over space and time. They consist of water vapour, ozone, nitrogenous and sulphurous compounds.

Solid and Liquid Particulates: Other than the gases, the atmosphere also contains solid and liquid particles such as aerosols, water droplets and ice crystals. These particles may congregate to form clouds and haze.

Electromagnetic Radiation

Electromagnetic Waves

Electromagnetic waves are energy transported through space in the form of periodic disturbances of electric and magnetic fields. All electromagnetic waves travel through space at

the same speed, c = 2.99792458 x 10^8 m/s, commonly known as the speed of light. An electromagnetic wave is characterized by a frequency and a wavelength. These two quantities are related to the speed of light by the equation: speed of light = frequency x wavelength. The frequency (and hence, the wavelength) of an electromagnetic wave depends on its source. There is a wide range of frequencies encountered in our physical world, ranging from the low frequency of the electric waves generated by the power transmission lines to the very high frequency of the gamma rays originating from atomic nuclei. This wide frequency range of electromagnetic waves constitutes the Electromagnetic Spectrum.
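As an illustration of this relation, the short Python sketch below converts between frequency and wavelength; the 5.3 GHz example value is only illustrative (roughly a C-band radar frequency).

C = 2.99792458e8  # speed of light in vacuum, m/s

def to_wavelength(frequency_hz):
    # wavelength in metres for a given frequency in hertz
    return C / frequency_hz

def to_frequency(wavelength_m):
    # frequency in hertz for a given wavelength in metres
    return C / wavelength_m

print(to_wavelength(5.3e9))   # about 0.057 m, i.e. ~5.7 cm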

The Electromagnetic Spectrum

The electromagnetic spectrum can be divided into several wavelength (frequency) regions, among which only a narrow band from about 400 to 700 nm is visible to the human eyes. Note that there is no sharp boundary between these regions. The boundaries shown in the above figures are approximate and there are overlaps between two adjacent regions. Wavelength units: 1 mm = 1000 µm; 1 µm = 1000 nm.

Radio Waves: 10 cm to 10 km wavelength.

Microwaves: 1 mm to 1 m wavelength. The microwaves are further divided into different frequency (wavelength) bands (1 GHz = 10^9 Hz):
o P band: 0.3 - 1 GHz (30 - 100 cm)
o L band: 1 - 2 GHz (15 - 30 cm)
o S band: 2 - 4 GHz (7.5 - 15 cm)
o C band: 4 - 8 GHz (3.8 - 7.5 cm)
o X band: 8 - 12.5 GHz (2.4 - 3.8 cm)
o Ku band: 12.5 - 18 GHz (1.7 - 2.4 cm)
o K band: 18 - 26.5 GHz (1.1 - 1.7 cm)
o Ka band: 26.5 - 40 GHz (0.75 - 1.1 cm)


Infrared: 0.7 to 300 µm wavelength. This region is further divided into the following bands:
o Near Infrared (NIR): 0.7 to 1.5 µm.
o Short Wavelength Infrared (SWIR): 1.5 to 3 µm.
o Mid Wavelength Infrared (MWIR): 3 to 8 µm.
o Long Wavelength Infrared (LWIR): 8 to 15 µm.
o Far Infrared (FIR): longer than 15 µm.
The NIR and SWIR are also known as the Reflected Infrared, referring to the main infrared component of the solar radiation reflected from the earth's surface. The MWIR and LWIR are the Thermal Infrared.

Visible Light: This narrow band of electromagnetic radiation extends from about 400 nm (violet) to about 700 nm (red). The various colour components of the visible spectrum fall roughly within the following wavelength regions:
o Red: 610 - 700 nm
o Orange: 590 - 610 nm
o Yellow: 570 - 590 nm
o Green: 500 - 570 nm
o Blue: 450 - 500 nm
o Indigo: 430 - 450 nm
o Violet: 400 - 430 nm


Ultraviolet: 3 to 400 nm.

X-Rays and Gamma Rays: wavelengths shorter than the ultraviolet.

Photons
According to quantum physics, the energy of an electromagnetic wave is quantized, i.e. it can only exist in discrete amounts. The basic unit of energy for an electromagnetic wave is called a photon. The energy E of a photon is proportional to the wave frequency f, E = h f, where the constant of proportionality h is Planck's Constant, h = 6.626 x 10^-34 J s.
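A minimal Python sketch of this formula; the 500 nm (green) example wavelength is only for illustration.

H = 6.626e-34        # Planck's constant, J s
C = 2.99792458e8     # speed of light, m/s

def photon_energy(wavelength_m):
    # photon energy E = h * f, with f = c / wavelength
    return H * C / wavelength_m

print(photon_energy(500e-9))   # a 500 nm photon carries about 4.0e-19 J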
Atmospheric Effects

Effects of Atmosphere

When electromagnetic radiation travels through the atmosphere, it may be absorbed or scattered by the constituent particles of the atmosphere. Molecular absorption converts the radiation energy into excitation energy of the molecules. Scattering redistributes the energy of the incident beam to all directions. The overall effect is the removal of energy from the incident radiation. The various effects of absorption and scattering are outlined in the following sections.

Atmospheric Transmission Windows


Each type of molecule has its own set of absorption bands in various parts of the electromagnetic spectrum. As a result, only the wavelength regions outside the main absorption bands of the atmospheric gases can be used for remote sensing. These regions are known as the Atmospheric Transmission Windows.

The wavelength bands used in remote sensing systems are usually designed to fall within these windows to minimize the atmospheric absorption effects. These windows are found in the visible, near-infrared, certain bands in thermal infrared and the microwave regions.

Effects of Atmospheric Absorption on Remote Sensing Images


Atmospheric absorption affects mainly the visible and infrared bands. Optical remote sensing depends on solar radiation as the source of illumination. Absorption reduces the solar radiance within the absorption bands of the atmospheric gases. The reflected radiance is also

attenuated after passing through the atmosphere. This attenuation is wavelength dependent. Hence, atmospheric absorption will alter the apparent spectral signature of the target being observed.

Effects of Atmospheric Scattering on Remote Sensing Images


Atmospheric scattering is important only in the visible and near infrared regions. Scattering of radiation by the constituent gases and aerosols in the atmosphere causes degradation of the remotely sensed images. Most noticeably, the solar radiation scattered by the atmosphere towards the sensor without first reaching the ground produces a hazy appearance of the image. This effect is particularly severe in the blue end of the visible spectrum due to the stronger Rayleigh scattering for shorter wavelength radiation. Furthermore, the light from a target outside the field of view of the sensor may be scattered into the field of view of the sensor. This effect is known as the adjacency effect. Near to the boundary between two regions of different brightness, the adjacency effect results in an increase in the apparent brightness of the darker region while the apparent brightness of the brighter region is reduced. Scattering also produces blurring of the targets in remotely sensed images due to spreading of the reflected radiation by scattering, resulting in a reduced resolution image.

Absorption of Radiation

Absorption by Gaseous Molecules


The energy of a gaseous molecule can exist in various forms:


Translational Energy: Energy due to translational motion of the centre of mass of the molecule. The average translational kinetic energy of a molecule is kT/2 per degree of freedom (3kT/2 in total), where k is the Boltzmann constant and T is the absolute temperature of the gas.

Rotational Energy: Energy due to rotation of the molecule about an axis through its centre of mass.

Vibrational Energy: Energy due to vibration of the component atoms of a molecule about their equilibrium positions. This vibration is associated with stretching of chemical bonds between the atoms.

Electronic Energy: Energy due to the energy states of the electrons of the molecule.

The last three forms are quantized, i.e. the energy can change only in discrete amounts, known as the transition energies. A photon of electromagnetic radiation can be absorbed by a molecule when its energy (E = hf) matches one of the available transition energies.

Ultraviolet Absorption
Absorption of ultraviolet (UV) in the atmosphere is chiefly due to electronic transitions of the atomic and molecular oxygen and nitrogen. Due to the ultraviolet absorption, some of the oxygen and nitrogen molecules in the upper atmosphere undergo photochemical dissociation to

become atomic oxygen and nitrogen. These atoms play an important role in the absorption of solar ultraviolet radiation in the thermosphere. The photochemical dissociation of oxygen is also responsible for the formation of the ozone layer in the stratosphere.

Ozone Layers
Ozone in the stratosphere absorbs about 99% of the harmful solar UV radiation shorter than 320 nm. It is formed in three-body collisions of atomic oxygen (O) with molecular oxygen (O2) in the presence of a third atom or molecule. The ozone molecules also undergo photochemical dissociation to atomic O and molecular O2. When the formation and dissociation processes are in equilibrium, ozone exists at a constant concentration level. However, the existence of certain atoms (such as atomic chlorine) will catalyse the dissociation of O3 back to O2 and the ozone concentration will decrease. It has been observed by measurement from space platforms that the ozone layers are depleting over time, causing a small increase in solar ultraviolet radiation reaching the earth. In recent years, increasing use of fluorocarbon compounds in aerosol sprays and refrigerants results in the release of atomic chlorine into the upper atmosphere due to photochemical dissociation of the fluorocarbon compounds, contributing to the depletion of the ozone layers.

Visible Region
There is little absorption of the electromagnetic radiation in the visible part of the spectrum.

Infrared Absorption
The absorption in the infrared (IR) region is mainly due to rotational and vibrational transitions of the molecules. The main atmospheric constituents responsible for infrared absorption are water vapour (H2O) and carbon dioxide (CO2) molecules. The water and carbon dioxide molecules have absorption bands centred at wavelengths from the near to the long wave infrared (0.7 to 15 µm). In the far infrared region, most of the radiation is absorbed by the atmosphere.

Microwave Region
The atmosphere is practically transparent to the microwave radiation.
Scattering of Electromagnetic Radiation by Atmosphere

Scattering of Electromagnetic Radiation


Scattering of electromagnetic radiation is caused by the interaction of radiation with matter resulting in the reradiation of part of the energy to other directions not along the path of the incident radiation. Scattering effectively removes energy from the incident beam. Unlike absorption, this energy is not lost, but is redistributed to other directions.

Both the gaseous and aerosol components of the atmosphere cause scattering in the atmosphere.

Scattering by gaseous molecules


The law of scattering by air molecules was discovered by Rayleigh in 1871, and hence this scattering is named Rayleigh Scattering. Rayleigh scattering occurs when the size of the particle responsible for the scattering event is much smaller than the wavelength of the radiation. The scattered light intensity is inversely proportional to the fourth power of the wavelength. Hence, blue light is scattered more than red light. This phenomenon explains why the sky is blue and why the setting sun is red. The scattered light intensity in Rayleigh scattering for unpolarized light is proportional to (1 + cos²θ), where θ is the scattering angle, i.e. the angle between the directions of the incident and scattered rays.
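The rough Python sketch below illustrates the inverse fourth-power dependence by comparing the relative scattering of blue (about 450 nm) and red (about 700 nm) light; the overall proportionality constant is omitted since only the ratio matters here.

import math

def rayleigh_intensity(wavelength_m, scattering_angle_rad):
    # unnormalised Rayleigh-scattered intensity for unpolarised light:
    # proportional to (1 + cos^2(theta)) / wavelength^4
    return (1.0 + math.cos(scattering_angle_rad) ** 2) / wavelength_m ** 4

ratio = rayleigh_intensity(450e-9, 0.0) / rayleigh_intensity(700e-9, 0.0)
print(ratio)   # about 5.9: blue light is scattered roughly six times more strongly than red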

Scattering by Aerosols
Scattering by aerosol particles depends on the shapes, sizes and the materials of the particles. If the size of the particle is similar to or larger than the radiation wavelength, the scattering is named Mie Scattering. The scattering intensity and its angular distribution may be calculated numerically for a spherical particle. However, for irregular particles, the calculation can become very complicated. In general, the scattered radiation in Mie scattering is mainly confined within a small angle about the forward direction. The radiation is said to be very strongly forward scattered.
Airborne Remote Sensing

In airborne remote sensing, downward or sideward looking sensors are mounted on an aircraft to

obtain images of the earth's surface. An advantage of airborne remote sensing, compared to satellite remote sensing, is the capability of offering very high spatial resolution images (20 cm or less). The disadvantages are low coverage area and high cost per unit area of ground coverage. It is not cost-effective to map a large area using an airborne remote sensing system. Airborne remote sensing missions are often carried out as one-time operations, whereas earth observation satellites offer the possibility of continuous monitoring of the earth. Analog aerial photography, videography, and digital photography are commonly used in airborne remote sensing. Synthetic Aperture Radar imaging is also carried out on airborne platforms. Analog photography is capable of providing high spatial resolution. The interpretation of analog aerial photographs is usually done visually by experienced analysts. The photographs may be digitized using a scanning device for computer-assisted analysis. Digital photography permits real-time transmission of the remotely sensed data to a ground station for immediate analysis. The digital images can be analysed and interpreted with the aid of a computer.

A high resolution aerial photograph over a forested area. The canopy of each individual tree can be clearly seen. This type of very high resolution imagery is useful in identification of tree types and in assessing the conditions of the trees.

Another example of a high resolution aerial photograph over a residential area.

Spaceborne Remote Sensing

IKONOS 2

SPOT 1, 2, 4

EROS A1

TERRA

OrbView 2 (SeaStar)

NOAA 12, 14, 16

ERS 1, 2

RADARSAT 1

The receiving ground station at CRISP receives data from these satellites

In spaceborne remote sensing, sensors are mounted on-board a spacecraft (space shuttle or satellite) orbiting the earth. At present, there are several remote sensing satellites providing imagery for research and operational applications. Spaceborne remote sensing provides the following advantages:

Large area coverage;
Frequent and repetitive coverage of an area of interest;
Quantitative measurement of ground features using radiometrically calibrated sensors;
Semiautomated computerised processing and analysis;
Relatively lower cost per unit area of coverage.

Satellite imagery has a generally lower resolution compared to aerial photography. However, very high resolution imagery (up to 1-m resolution) is now commercially available to civilian users with the successful launch of the IKONOS-2 satellite on 24 September 1999.

Satellite Orbits
A satellite follows a generally elliptical orbit around the earth. The time taken to complete one revolution of the orbit is called the orbital period. The satellite traces out a path on the earth surface, called its ground track, as it moves across the sky. As the earth below is rotating, the satellite traces out a different path on the ground in each subsequent cycle. Remote sensing satellites are often launched into special orbits such that the satellite repeats its path after a fixed time interval. This time interval is called the repeat cycle of the satellite.

Geostationary Orbits

Geostationary Orbit: The satellite appears stationary with respect to the Earth's surface.

If a satellite follows an orbit parallel to the equator in the same direction as the earth's rotation and with the same period of 24 hours, the satellite will appear stationary with respect to the earth surface.

This orbit is a geostationary orbit. Satellites in the geostationary orbits are located at a high altitude of 36,000 km. These orbits enable a satellite to always view the same area on the earth. A large area of the earth can also be covered by the satellite. The geostationary orbits are commonly used by meteorological satellites.
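As a rough check of the 36,000 km figure, the sketch below estimates the geostationary altitude from Kepler's third law, assuming standard values for the Earth's gravitational parameter and equatorial radius and using the sidereal rotation period.

import math

GM = 3.986004418e14         # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_KM = 6378.137  # equatorial radius (WGS-84), km
T = 86164.1                 # Earth's sidereal rotation period, s

# Kepler's third law: r^3 = GM * T^2 / (4 * pi^2)
r_m = (GM * T ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
altitude_km = r_m / 1000.0 - EARTH_RADIUS_KM
print(altitude_km)   # about 35,800 km, i.e. the ~36,000 km quoted above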

Near Polar Orbits


A near polar orbit is one with the orbital plane inclined at a small angle with respect to the earth's rotation axis. A satellite following a properly designed near polar orbit passes close to the poles and is able to cover nearly the whole earth surface in a repeat cycle.

Sun Synchronous Orbits

A near-polar sun synchronous orbit. Earth observation satellites usually follow the sun synchronous orbits. A sun synchronous orbit is a near polar orbit whose altitude is such that the satellite will always pass over a location at a given latitude at the same local solar time. In this way, the same solar illumination condition (except for seasonal variation) can be achieved for the images of a given location taken by the satellite.

Remote Sensing Satellites


Several remote sensing satellites are currently available, providing imagery suitable for various types of applications. Each of these satellite-sensor platforms is characterised by the wavelength bands employed in image acquisition, the spatial resolution of the sensor, the coverage area and the temporal coverage, i.e. how frequently a given location on the earth surface can be imaged by the imaging system. In terms of spatial resolution, the satellite imaging systems can be classified into:

Low resolution systems (approx. 1 km or more)
Medium resolution systems (approx. 100 m to 1 km)
High resolution systems (approx. 5 m to 100 m)
Very high resolution systems (approx. 5 m or less)

In terms of the spectral regions used in data acquisition, the satellite imaging systems can be classified into:

Optical imaging systems (including visible, near infrared, and shortwave infrared systems)
Thermal imaging systems
Synthetic aperture radar (SAR) imaging systems

Optical/thermal imaging systems can be classified according to the number of spectral bands used:

Monospectral or panchromatic (single wavelength band, "black-and-white", grey-scale image) systems
Multispectral (several spectral bands) systems
Superspectral (tens of spectral bands) systems
Hyperspectral (hundreds of spectral bands) systems

Synthetic aperture radar imaging systems can be classified according to the combination of frequency bands and polarization modes used in data acquisition, e.g.:

Single frequency (L-band, or C-band, or X-band)
Multiple frequency (combination of two or more frequency bands)
Single polarization (VV, or HH, or HV)
Multiple polarization (combination of two or more polarization modes)

Descriptions of some of the operational and planned remote sensing satellite platforms and sensors are provided in the appendix of this tutorial.

Digital Image

Analog and Digital Images


An image is a two-dimensional representation of objects in a real scene. Remote sensing images are representations of parts of the earth surface as seen from space. The images may be analog or digital. Aerial photographs are examples of analog images while satellite images acquired using electronic sensors are examples of digital images.

A digital image is a two-dimensional array of pixels. Each pixel has an intensity value (represented by a digital number) and a location address (referenced by its row and column numbers).

Pixels
A digital image comprises a two-dimensional array of individual picture elements called pixels arranged in columns and rows. Each pixel represents an area on the Earth's surface. A pixel has an intensity value and a location address in the two-dimensional image. The intensity value represents the measured physical quantity such as the solar radiance in a given wavelength band reflected from the ground, emitted infrared radiation or backscattered radar intensity. This value is normally the average value for the whole ground area covered by the pixel. The intensity of a pixel is digitised and recorded as a digital number. Due to the finite storage capacity, a digital number is stored with a finite number of bits (binary digits). The number of bits determines the radiometric resolution of the image. For example, an 8-bit digital number ranges from 0 to 255 (i.e. 2^8 - 1), while an 11-bit digital number ranges from 0 to 2047. The detected intensity value needs to be scaled and quantized to fit within this range of values. In a Radiometrically Calibrated image, the actual intensity value can be derived from the pixel digital number. The address of a pixel is denoted by its row and column coordinates in the two-dimensional image. There is a one-to-one correspondence between the column-row address of a pixel and the geographical coordinates (e.g. longitude, latitude) of the imaged location. In order to be useful, the exact geographical location of each pixel on the ground must be derivable from its row and column indices, given the imaging geometry and the satellite orbit parameters.
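The short sketch below illustrates how the number of bits fixes the range of digital numbers, together with a generic linear DN-to-radiance conversion; the gain and offset values are made up for illustration and do not belong to any particular sensor.

def dn_range(n_bits):
    # digital numbers representable with n bits: 0 to 2**n - 1
    return 0, 2 ** n_bits - 1

print(dn_range(8))    # (0, 255)
print(dn_range(11))   # (0, 2047)

def dn_to_radiance(dn, gain, offset):
    # generic linear calibration; gain and offset are supplied with a calibrated product
    return gain * dn + offset

print(dn_to_radiance(128, gain=0.1, offset=0.0))   # made-up gain/offset, for illustration only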

"A Push-Broom" Scanner: This type of imaging system is commonly used in optical remote sensing satellites such as SPOT. The imaging system has a linear detector array (usually of the CCD type) consisting of a number of detector elements (6000 elements in SPOT HRV). Each detector element projects an "instantaneous field of view (IFOV)" on the ground. The signal recorded by a detector element is proportional to the total radiation collected within its IFOV. At any instant, a row of pixels are formed. As the detector array flies along its track, the row of pixels sweeps along to generate a twodimensional image.

Multilayer Image
Several types of measurement may be made from the ground area covered by a single pixel. Each type of measurement forms an image which carries some specific information about the area. By "stacking" these images from the same area together, a multilayer image is formed. Each component image is a layer in the multilayer image. Multilayer images can also be formed by combining images obtained from different sensors, and other subsidiary data. For example, a multilayer image may consist of three layers from a SPOT multispectral image, a layer of ERS synthetic aperture radar image, and perhaps a layer consisting of the digital elevation map of the area being studied.
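A minimal sketch of how such a stack might be held in software, assuming the layers are already co-registered arrays of the same size; the scene size and layer names are placeholders only.

import numpy as np

rows, cols = 1000, 1000                          # placeholder scene size
xs1 = np.zeros((rows, cols), dtype=np.float32)   # SPOT XS1 (green) layer
xs2 = np.zeros((rows, cols), dtype=np.float32)   # SPOT XS2 (red) layer
xs3 = np.zeros((rows, cols), dtype=np.float32)   # SPOT XS3 (near IR) layer
sar = np.zeros((rows, cols), dtype=np.float32)   # ERS SAR layer
dem = np.zeros((rows, cols), dtype=np.float32)   # digital elevation layer

multilayer = np.stack([xs1, xs2, xs3, sar, dem]) # one array of shape (layers, rows, cols)
print(multilayer.shape)                          # (5, 1000, 1000)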

An illustration of a multilayer image consisting of five component layers.

Multispectral Image
A multispectral image consists of a few image layers, each of which represents an image acquired at a particular wavelength band. For example, the SPOT HRV sensor operating in the multispectral mode detects radiation in three wavelength bands: the green (500 - 590 nm), red (610 - 680 nm) and near infrared (790 - 890 nm) bands. A single SPOT multispectral scene consists of three intensity images in the three wavelength bands. In this case, each pixel of the scene has three intensity values corresponding to the three bands. A multispectral IKONOS image consists of four bands: blue, green, red and near infrared, while a Landsat TM multispectral image consists of seven bands: blue, green, red and near-IR bands, two SWIR bands, and a thermal IR band.

Superspectral Image
The more recent satellite sensors are capable of acquiring images at many more wavelength bands. For example, the MODIS sensor on-board NASA's TERRA satellite acquires data in 36 spectral bands, covering the wavelength regions ranging from the visible and near infrared through the shortwave infrared to the thermal infrared. The bands have narrower bandwidths, enabling the finer spectral characteristics of the targets to be captured by the sensor. The term "superspectral" has been coined to describe such sensors.

Hyperspectral Image
A hyperspectral image consists of about a hundred or more contiguous spectral bands. The characteristic spectrum of the target pixel is acquired in a hyperspectral image. The precise spectral information contained in a hyperspectral image enables better characterisation and identification of targets. Hyperspectral images have potential applications in such fields as precision agriculture (e.g. monitoring the types, health, moisture status and maturity of crops), coastal management (e.g. monitoring of phytoplanktons, pollution, bathymetry changes).

Currently, hyperspectral imagery is not commercially available from satellites. There are experimental satellite sensors that acquire hyperspectral imagery for scientific investigation (e.g. NASA's Hyperion sensor on-board the EO1 satellite, the CHRIS sensor on-board ESA's PROBA satellite).

An illustration of a hyperspectral image cube. The hyperspectral image data usually consists of over a hundred contiguous spectral bands, forming a three-dimensional (two spatial dimensions and one spectral dimension) image cube. Each pixel is associated with a complete spectrum of the imaged area. The high spectral resolution of hyperspectral images enables better identification of the land covers.

Spatial Resolution
Spatial resolution refers to the size of the smallest object that can be resolved on the ground. In a digital image, the resolution is limited by the pixel size, i.e. the smallest resolvable object cannot be smaller than the pixel size. The intrinsic resolution of an imaging system is determined primarily by the instantaneous field of view (IFOV) of the sensor, which is a measure of the ground area viewed by a single detector element in a given instant in time. However, this intrinsic resolution can often be degraded by other factors which introduce blurring of the image, such as improper focusing, atmospheric scattering and target motion. The pixel size is determined by the sampling distance. A "High Resolution" image refers to one with a small resolution size. Fine details can be seen in a high resolution image. On the other hand, a "Low Resolution" image is one with a large resolution size, i.e. only coarse features can be observed in the image.

A low resolution MODIS scene with a wide coverage. This image was received by CRISP's ground station on 3 March 2001. The intrinsic resolution of the image was approximately 1 km, but the image shown here has been resampled to a resolution of about 4 km. The coverage is more than 1000 km from east to west. A large part of Indochina, Peninsular Malaysia, Singapore and Sumatra can be seen in the image. (Click on the image to display part of it at a resolution of 1 km.)

A browse image of a high resolution SPOT scene. The multispectral SPOT scene has a resolution of 20 m and covers an area of 60 km by 60 km. The browse image has been resampled to 120 m pixel size, and hence the resolution has been reduced. This scene shows Singapore and part of the Johor State of Malaysia.

Part of a high resolution SPOT scene shown at the full resolution of 20 m. The image shown here covers an area of approximately 4.8 km by 3.6 km. At this resolution, roads, vegetation and blocks of buildings can be seen.

Part of a very high resolution image acquired by the IKONOS satellite. This true-colour image was obtained by merging a 4-m multispectral image with a 1-m panchromatic image of the same area acquired simultaneously. The effective resolution of the image is 1 m. At this resolution, individual trees, vehicles, details of buildings, shadows and roads can be seen. The image shown here covers an area of about 400 m by 400 m. A very high spatial resolution image usually has a smaller area of coverage. A full scene of an IKONOS image has a coverage area of about 10 km by 10 km.

Spatial Resolution and Pixel Size


The terms image resolution and pixel size are often used interchangeably. In reality, they are not equivalent. An image sampled at a small pixel size does not necessarily have a high resolution. The following three images illustrate this point. The first image is a SPOT image of 10 m pixel size. It was derived by merging a SPOT panchromatic image of 10 m resolution with a SPOT multispectral image of 20 m resolution. The merging procedure "colours" the panchromatic image using the colours derived from the multispectral image. The effective resolution is thus determined by the resolution of the panchromatic image, which is 10 m. This image is further processed to degrade the resolution while maintaining the same pixel size. The next two images are blurred versions of the image with larger resolution sizes, but still digitized at the same pixel size of 10 m. Even though they have the same pixel size as the first image, they do not have the same resolution.
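The sketch below mimics this distinction: the array keeps its original pixel grid while Gaussian blurring degrades the effective resolution. The random array stands in for a real scene and the blur widths are only indicative.

import numpy as np
from scipy.ndimage import gaussian_filter

image_10m = np.random.rand(600, 600)                     # stand-in for a 10 m pixel-size image

blurred_coarse = gaussian_filter(image_10m, sigma=1.5)   # resolution degraded, pixel size unchanged
blurred_coarser = gaussian_filter(image_10m, sigma=4.0)  # even coarser effective resolution

print(image_10m.shape, blurred_coarse.shape, blurred_coarser.shape)   # all still 600 x 600 pixels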

10 m resolution, 10 m pixel size

30 m resolution, 10 m pixel size

80 m resolution, 10 m pixel size

The following images illustrate the effect of pixel size on the visual appearance of an area. The first image is a SPOT image of 10 m pixel size derived by merging a SPOT panchromatic image with a SPOT multispectral image. The subsequent images show the effects of digitizing the same area with larger pixel sizes.

Pixel Size = 10 m Image Width = 160 pixels, Height = 160 pixels

Pixel Size = 20 m Image Width = 80 pixels, Height = 80 pixels

Pixel Size = 40 m Image Width = 40 pixels, Height = 40 pixels

Pixel Size = 80 m Image Width = 20 pixels, Height = 20 pixels

Radiometric Resolution
Radiometric Resolution refers to the smallest change in intensity level that can be detected by the sensing system. The intrinsic radiometric resolution of a sensing system depends on the signal to noise ratio of the detector. In a digital image, the radiometric resolution is limited by the number of discrete quantization levels used to digitize the continuous intensity value. The following images illustrate the effects of the number of quantization levels on the digital image. The first image is a SPOT panchromatic image quantized at 8 bits (i.e. 256 levels) per pixel. The subsequent images show the effects of degrading the radiometric resolution by using fewer quantization levels.

8-bit quantization (256 levels)

6-bit quantization (64 levels)

4-bit quantization (16 levels)

3-bit quantization (8 levels)

2-bit quantization (4 levels)

1-bit quantization (2 levels)

Digitization using a small number of quantization levels does not affect very much the visual quality of the image. Even 4-bit quantization (16 levels) seems acceptable in the examples shown. However, if the image is to be subjected to numerical analysis, the accuracy of analysis will be compromised if few quantization levels are used.
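A simple sketch of this kind of requantization, assuming an 8-bit input image; the random array is only a stand-in for the SPOT panchromatic scene.

import numpy as np

def requantize(image_8bit, n_bits):
    # reduce an 8-bit image to 2**n_bits quantization levels,
    # then rescale back to the 0-255 range so it can still be displayed
    step = 256 // (2 ** n_bits)
    return (image_8bit // step) * step

img = (np.random.rand(100, 100) * 255).astype(np.uint8)   # stand-in for the panchromatic image
for bits in (6, 4, 3, 2, 1):
    print(bits, len(np.unique(requantize(img, bits))))     # number of distinct grey levels left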

Part of the running track in this IKONOS image is under cloud shadow. The IKONOS sensor uses 11-bit digitization during image acquisition. The high radiometric resolution enables features under shadow to be recovered.

The features under cloud shadow are recovered by applying a simple contrast and brightness enhancement technique.

Data Volume
The volume of the digital data can potentially be large for multispectral data, as a given area is covered in many different wavelength bands. For example, a 3-band multispectral SPOT image covers an area of about 60 x 60 km² on the ground with a pixel separation of 20 m. So there are about 3000 x 3000 pixels per band. Each pixel intensity in each band is coded using an 8-bit (i.e. 1 byte) digital number, giving a total of about 27 million bytes per image. In comparison, panchromatic data has only one band. Thus, panchromatic systems are normally designed to give a higher spatial resolution than the multispectral systems. For example, a SPOT panchromatic scene has the same coverage of about 60 x 60 km² but the pixel size is 10 m, giving about 6000 x 6000 pixels and a total of about 36 million bytes per image. If a multispectral SPOT scene is digitized also at 10 m pixel size, the data volume will be 108 million bytes. For very high spatial resolution imagery, such as that acquired by the IKONOS satellite, the data volume is even more significant. For example, an IKONOS 4-band multispectral image at 4-m pixel size covering an area of 10 km by 10 km, digitized at 11 bits (stored at 16 bits), has a data volume of 4 x 2500 x 2500 x 2 bytes, or 50 million bytes per image. A 1-m resolution panchromatic image covering the same area would have a data volume of 200 million bytes per image. The images taken by a remote sensing satellite are transmitted to Earth through a telecommunication link. The bandwidth of the telecommunication channel sets a limit to the data volume for a scene taken by the imaging system. Ideally, it is desirable to have a high spatial resolution image with many spectral bands covering a wide area. In reality, depending on the intended application, spatial resolution may have to be compromised to accommodate a larger number of spectral bands or a wider area coverage. A small number of spectral bands or a smaller area of coverage may be accepted to allow high spatial resolution imaging.
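The data volumes quoted above follow from multiplying the number of bands, the number of pixels and the bytes per pixel, as the small helper below illustrates.

def data_volume_bytes(n_bands, swath_km, pixel_m, bytes_per_pixel):
    # number of pixels along one side of a square scene, then total bytes over all bands
    pixels_per_side = int(swath_km * 1000 / pixel_m)
    return n_bands * pixels_per_side ** 2 * bytes_per_pixel

print(data_volume_bytes(3, 60, 20, 1))   # SPOT multispectral: 27,000,000 bytes
print(data_volume_bytes(1, 60, 10, 1))   # SPOT panchromatic: 36,000,000 bytes
print(data_volume_bytes(4, 10, 4, 2))    # IKONOS multispectral (11 bits stored in 2 bytes): 50,000,000 bytes
print(data_volume_bytes(1, 10, 1, 2))    # IKONOS panchromatic: 200,000,000 bytes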

Optical Remote Sensing

Optical remote sensing makes use of visible, near infrared and short-wave infrared sensors to form images of the earth's surface by detecting the solar radiation reflected from targets on the ground. Different materials reflect and absorb differently at different wavelengths. Thus, the targets can be differentiated by their spectral reflectance signatures in the remotely sensed images. Optical remote sensing systems are classified into the following types, depending on the number of spectral bands used in the imaging process.

Panchromatic imaging system: The sensor is a single channel detector sensitive to radiation within a broad wavelength range. If the wavelength range coincides with the visible range, then the resulting image resembles a "black-and-white" photograph taken from space. The physical quantity being measured is the apparent brightness of the targets. The spectral information or "colour" of the targets is lost. Examples of panchromatic imaging systems are:
o IKONOS PAN
o SPOT HRV-PAN

Multispectral imaging system: The sensor is a multichannel detector with a few spectral bands. Each channel is sensitive to radiation within a narrow wavelength band. The resulting image is a multilayer image which contains both the brightness and spectral (colour) information of the targets being observed. Examples of multispectral systems are:
o LANDSAT MSS
o LANDSAT TM
o SPOT HRV-XS
o IKONOS MS

Superspectral imaging system: A superspectral imaging sensor has many more spectral channels (typically >10) than a multispectral sensor. The bands have narrower bandwidths, enabling the finer spectral characteristics of the targets to be captured by the sensor. Examples of superspectral systems are:
o MODIS
o MERIS

Hyperspectral imaging system: A hyperspectral imaging system is also known as an "imaging spectrometer". It acquires images in about a hundred or more contiguous spectral bands. The precise spectral information contained in a hyperspectral image enables better characterisation and identification of targets. Hyperspectral images have potential applications in such fields as precision agriculture (e.g. monitoring the types, health, moisture status and maturity of crops) and coastal management (e.g. monitoring of phytoplanktons, pollution, bathymetry changes). An example of a hyperspectral system is:
o Hyperion on the EO1 satellite

Solar Irradiation
Optical remote sensing depends on the sun as the sole source of illumination. The solar irradiation spectrum above the atmosphere can be modeled by a black body radiation spectrum having a source temperature of 5900 K, with a peak irradiation located at about 500 nm wavelength. Physical measurement of the solar irradiance has also been performed using ground based and spaceborne sensors. After passing through the atmosphere, the solar irradiation spectrum at the ground is modulated by the atmospheric transmission windows. Significant energy remains only within the wavelength range from about 0.25 to 3 µm.

Solar Irradiation Spectra above the atmosphere and at sea-level.

Spectral Reflectance Signature

When solar radiation hits a target surface, it may be transmitted, absorbed or reflected. Different materials reflect and absorb differently at different wavelengths. The reflectance spectrum of a material is a plot of the fraction of radiation reflected as a function of the incident wavelength and serves as a unique signature for the material. In principle, a material can be identified from its spectral reflectance signature if the sensing system has sufficient spectral resolution to distinguish its spectrum from those of other materials. This premise provides the basis for multispectral remote sensing. The following graph shows the typical reflectance spectra of five materials: clear water, turbid water, bare soil and two types of vegetation.

Reflectance Spectrum of Five Types of Landcover

The reflectance of clear water is generally low. However, the reflectance is maximum at the blue end of the spectrum and decreases as wavelength increases. Hence, clear water appears dark-bluish. Turbid water has some sediment suspension which increases the reflectance in the red end of the spectrum, accounting for its brownish appearance. The reflectance of bare soil generally depends on its composition. In the example shown, the reflectance increases monotonically with increasing wavelength. Hence, it should appear yellowish-red to the eye. Vegetation has a unique spectral signature which enables it to be distinguished readily from other types of land cover in an optical/near-infrared image. The reflectance is low in both the blue and red regions of the spectrum, due to absorption by chlorophyll for photosynthesis. It has a peak at the green region which gives rise to the green colour of vegetation. In the near infrared (NIR) region, the reflectance is much higher than that in the visible band due to the cellular structure in the leaves. Hence, vegetation can be identified by the high NIR but generally low visible reflectances. This property has been used in early reconnaissance

missions during war times for "camouflage detection". The shape of the reflectance spectrum can be used for identification of vegetation type. For example, the reflectance spectra of vegetation 1 and 2 in the above figures can be distinguished although they exhibit the general characteristics of high NIR but low visible reflectances. Vegetation 1 has higher reflectance in the visible region but lower reflectance in the NIR region. For the same vegetation type, the reflectance spectrum also depends on other factors such as the leaf moisture content and health of the plants. The reflectance of vegetation in the SWIR region (e.g. band 5 of Landsat TM and band 4 of SPOT 4 sensors) is more varied, depending on the types of plants and the plant's water content. Water has strong absorption bands around 1.45, 1.95 and 2.50 µm. Outside these absorption bands in the SWIR region, reflectance of leaves generally increases when leaf liquid water content decreases. This property can be used for identifying tree types and plant conditions from remote sensing images. The SWIR band can be used in detecting plant drought stress and delineating burnt areas and fire-affected vegetation. The SWIR band is also sensitive to the thermal radiation emitted by intense fires, and hence can be used to detect active fires, especially during night-time when the background interference from SWIR in reflected sunlight is absent.

Typical Reflectance Spectrum of Vegetation. The labelled arrows indicate the common wavelength bands used in optical remote sensing of vegetation: A: blue band, B: green band; C: red band; D: near IR band; E: short-wave IR band

Interpretation of Optical Images

Interpreting Optical Remote Sensing Images

Four main types of information contained in an optical image are often utilized for image interpretation:

Radiometric Information (i.e. brightness, intensity, tone),
Spectral Information (i.e. colour, hue),
Textural Information,
Geometric and Contextual Information.

They are illustrated in the following examples.

Panchromatic Images
A panchromatic image consists of only one band. It is usually displayed as a grey scale image, i.e. the displayed brightness of a particular pixel is proportional to the pixel digital number which is related to the intensity of solar radiation reflected by the targets in the pixel and detected by the detector. Thus, a panchromatic image may be similarly interpreted as a black-and-white aerial photograph of the area. The Radiometric Information is the main information type utilized in the interpretation.

A panchromatic image extracted from a SPOT panchromatic scene at a ground resolution of 10 m. The ground coverage is about 6.5 km (width) by 5.5 km (height). The urban area at the bottom left and a clearing near the top of the image have high reflected intensity, while the vegetated areas on the right part of the image are generally dark. Roads and blocks of buildings in the urban area are visible. A river flowing through the vegetated area, cutting across the top right corner of the image, can be seen. The river appears bright due to sediments while the sea at the bottom edge of the image appears dark.

Multispectral Images
A multispectral image consists of several bands of data. For visual display, each band of the image may be displayed one band at a time as a grey scale image, or in combination of three bands at a time as a colour composite image. Interpretation of a multispectral colour composite image will require the knowledge of the spectral reflectance signatures of the targets in the scene. In this case, the spectral information content of the image is utilized in the interpretation. The following three images show the three bands of a multispectral image extracted from a SPOT multispectral scene at a ground resolution of 20 m. The area covered is the same as that shown in the above panchromatic image. Note that both the XS1 (green) and XS2 (red) bands look almost identical to the panchromatic image shown above. In contrast, the vegetated areas now appear bright in the XS3 (near infrared) band due to the high reflectance of leaves in the near infrared wavelength region. Several shades of grey can be identified for the vegetated areas, corresponding to different types of vegetation. Water masses (both the river and the sea) appear dark in the XS3 (near IR) band.

SPOT XS1 (green band)

SPOT XS2 (red band)

SPOT XS3 (Near IR band)

Colour Composite Images


In displaying a colour composite image, three primary colours (red, green and blue) are used. When these three colours are combined in various proportions, they produce different colours in the visible spectrum. Associating each spectral band (not necessarily a visible band) to a separate primary colour results in a colour composite image.

Many colours can be formed by combining the three primary colours (Red, Green, Blue) in various proportions

True Colour Composite


If a multispectral image consists of the three visual primary colour bands (red, green, blue), the three bands

may be combined to produce a "true colour" image. For example, the bands 3 (red band), 2 (green band) and 1 (blue band) of a LANDSAT TM image or an IKONOS multispectral image can be assigned respectively to the R, G, and B colours for display. In this way, the colours of the resulting colour composite image resemble closely what would be observed by the human eyes.

A 1-m resolution true-colour IKONOS image.

False Colour Composite


The display colour assignment for any band of a multispectral image can be done in an entirely arbitrary manner. In this case, the colour of a target in the displayed image does not have any resemblance to its actual colour. The resulting product is known as a false colour composite image. There are many possible schemes of producing false colour composite images. However, some schemes may be more suitable for detecting certain objects in the image. A very common false colour composite scheme for displaying a SPOT multispectral image is shown below:
R = XS3 (NIR band)
G = XS2 (red band)
B = XS1 (green band)
This false colour composite scheme allows vegetation to be detected readily in the image. In this type of false colour composite images, vegetation appears in different shades of red depending on the types and conditions of the vegetation, since it has a high reflectance in the NIR band (as shown in the graph of spectral reflectance signature).

Clear water appears dark bluish (higher green band reflectance), while turbid water appears cyan (higher red reflectance due to sediments). Bare soil, roads and buildings may appear in various shades of blue, yellow or grey, depending on their composition.

False colour composite multispectral SPOT image: Red: XS3; Green: XS2; Blue: XS1

Another common false colour composite scheme for displaying an optical image with a short-wave infrared (SWIR) band is:

R = SWIR band (SPOT 4 band 4, Landsat TM band 5)
G = NIR band (SPOT 4 band 3, Landsat TM band 4)
B = Red band (SPOT 4 band 2, Landsat TM band 3)

An example of this false colour composite display is shown below for a SPOT 4 image.

False colour composite of a SPOT 4 multispectral image including the SWIR band: Red: SWIR band; Green: NIR band; Blue: Red band. In this display scheme, vegetation appears in shades of green. Bare soils and clearcut areas appear purplish or magenta. The patch of bright red area on the left is the location of active fires. A smoke plume originating from the active fire site appears faint bluish in colour.

False colour composite of a SPOT 4 multispectral image without displaying the SWIR band: Red: NIR band; Green: Red band; Blue: Green band. Vegetation appears in shades of red. The smoke plume appears bright bluish white.

Natural Colour Composite

For optical images lacking one or more of the three visual primary colour bands (i.e. red, green and blue), the spectral bands (some of which may not be in the visible region) may be combined in such a way that the appearance of the displayed image resembles a visible colour photograph, i.e. vegetation in green, water in blue, soil in brown or grey, etc. Many people refer to this composite as a "true colour" composite. However, this term is misleading since in many instances the colours are only simulated to look similar to the "true" colours of the targets. The term "natural colour" is preferred. The SPOT HRV multispectral sensor does not have a blue band. The three bands XS1, XS2 and XS3 correspond to the green, red and NIR bands respectively. However, a reasonably good natural colour composite can be produced by the following combination of the spectral bands:

R = XS2
G = (3 XS1 + XS3)/4
B = (3 XS1 - XS3)/4

where R, G and B are the display colour channels.
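A minimal sketch of this natural colour combination in Python/NumPy is given below, assuming xs1, xs2 and xs3 are co-registered 2-D arrays of the SPOT HRV bands (the variable names are illustrative).

import numpy as np

def spot_natural_colour(xs1, xs2, xs3):
    r = xs2.astype(np.float32)
    g = (3.0 * xs1 + xs3) / 4.0   # simulated green display channel
    b = (3.0 * xs1 - xs3) / 4.0   # simulated blue display channel
    b = np.clip(b, 0, None)       # the difference can go negative over bright NIR targets
    return r, g, b                # each channel is then stretched to 0-255 for display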

Natural colour composite multispectral SPOT image: Red: XS2; Green: 0.75 XS1 + 0.25 XS3; Blue: 0.75 XS1 - 0.25 XS3

Vegetation Indices
Different bands of a multispectral image may be combined to accentuate the vegetated areas. One such combination is the ratio of the near-infrared band to the red band, known as the Ratio Vegetation Index (RVI):

RVI = NIR/Red

Since vegetation has high NIR reflectance but low red reflectance, vegetated areas will have higher RVI values than non-vegetated areas. Another commonly used vegetation index is the Normalised Difference Vegetation Index (NDVI), computed by

NDVI = (NIR - Red)/(NIR + Red)
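The sketch below computes both indices in Python/NumPy from the NIR and red bands (for a SPOT HRV scene these would be XS3 and XS2); the small constant guards against division by zero. Vegetated pixels give NDVI values well above zero, while water and bare surfaces give values near or below zero.

import numpy as np

def rvi(nir, red):
    # Ratio Vegetation Index: NIR / Red.
    return nir.astype(np.float32) / np.maximum(red.astype(np.float32), 1e-9)

def ndvi(nir, red):
    # Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red), ranging from -1 to +1.
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / np.maximum(nir + red, 1e-9)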

Normalised Difference Vegetation Index (NDVI) derived from the above SPOT image

In the NDVI map shown above, the bright areas are vegetated while the non-vegetated areas (buildings, clearings, river, sea) are generally dark. Note that the trees lining the roads are clearly visible as grey linear features against the dark background. The NDVI band may also be combined with other bands of the multispectral image to form a colour composite image which helps to discriminate different types of vegetation. One such example is shown below. In this image, the display colour assignment is:

R = XS3 (near IR band)
G = (XS3 - XS2)/(XS3 + XS2) (NDVI band)
B = XS1 (green band)

NDVI colour composite of the SPOT image: Red: XS3; Green: NDVI; Blue: XS1.

At least three types of vegetation can be discriminated in this colour composite image: green, bright yellow and golden yellow areas. The green areas consist of dense trees with closed canopy. The bright yellow areas are covered with shrubs or less dense trees. The golden yellow areas are covered with grass. The non vegetated areas appear in dark blue and magenta.

Textural Information
Texture is an important aid in visual image interpretation, especially for high spatial resolution imagery. An example is shown below. It is also possible to characterise the textural features numerically, and algorithms for computer-aided automatic discrimination of different textures in an image are available. This is an IKONOS 1-m resolution pan-sharpened colour image of an oil palm plantation. The image is 300 m across. Even though the general colour is green throughout, three distinct land cover types can be identified from the image texture. The triangular patch at the bottom left corner is the oil palm plantation with mature palm trees. Individual trees can be seen. The predominant texture is the regular pattern formed by the tree crowns. Near the top of the image, the trees are closer together and the tree canopies merge, forming another distinctive textural pattern. This area is probably inhabited by shrubs or abandoned trees with tall undergrowth and shrubs in between the trees. At the bottom right corner, the colour is more homogeneous, indicating that it is probably an open field with short grass.

Geometric and Contextual Information


Using geometric and contextual features for image interpretation requires some a-priori information about the area of interest. The "interpretational keys" commonly employed are: shape, size, pattern, location, and association with other familiar features.

Contextual and geometric information plays an important role in the interpretation of very high resolution imagery. Familiar features visible in the image, such as buildings, roadside trees, roads and vehicles, make interpretation of the image straightforward.

This is an IKONOS image of a container port, evidenced by the presence of ships, cranes, and regular rows of rectangular containers. The port is probably not operating at its maximum capacity, as empty spaces can be seen in between the containers.

This SPOT image shows an oil palm plantation adjacent to a logged-over forest in Riau, Sumatra. The image area is 8.6 km by 6.4 km. The rectangular grid pattern seen here is a main characteristic of large scale oil palm plantations in this region.

This SPOT image shows land clearing being carried out in a logged-over forest. The dark red regions are the remaining forests. Tracks can be seen intruding into the forests, indicating logging activities there. The logging tracks are also visible in the cleared areas (dark greenish areas). It is evident that the land clearing activities are carried out with the aid of fires: a smoke plume can be seen emanating from a site of active fires.

Infrared Remote Sensing

Infrared remote sensing makes use of infrared sensors to detect infrared radiation emitted from the Earth's surface. The middle-wave infrared (MWIR) and long-wave infrared (LWIR) are within the thermal infrared region. These radiations are emitted from warm objects such as the Earth's surface. They are used in satellite remote sensing for measurements of the earth's land and sea surface temperature. Thermal infrared remote sensing is also often used for detection of forest fires.

Black Body Radiation

Thermal emission from a surface at various temperatures, modelled by Planck's equation for an ideal black body. The two bands around 3.8 µm (e.g. AVHRR band 3) and 10 µm (e.g. AVHRR band 4) commonly available in infrared remote sensing satellite sensors are marked in the figure.

The amount of thermal radiation emitted at a particular wavelength from a warm object depends on its temperature. If the earth's surface is regarded as a blackbody emitter, its apparent temperature (known as the brightness temperature) and the spectral radiance are related by Planck's blackbody equation, plotted in the above figure for several temperatures. For a surface at a brightness temperature around 300 K, the spectral radiance peaks at a wavelength around 10 µm. The peak wavelength decreases as the brightness temperature increases. For this reason, most satellite sensors for measurement of the earth's surface temperature have a band detecting infrared radiation around 10 µm. Besides measuring the regular surface temperature, infrared sensors can be used for detection of forest fires or other warm/hot objects. For typical fire temperatures from about 500 K (smouldering fire) to over 1000 K (flaming fire), the radiance versus wavelength curves peak at around 3.8 µm. Sensors such as the NOAA-AVHRR, ERS-ATSR and TERRA-MODIS are equipped with a band at this wavelength that can be used for detection of fire hot spots.
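The choice of the ~10 µm and ~3.8 µm bands follows directly from Planck's law and Wien's displacement law, as the short Python sketch below illustrates (physical constants are rounded; values are for an ideal black body).

import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    # Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1 for an ideal black body.
    return 2 * H * C**2 / wavelength_m**5 / (np.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

def wien_peak_um(temp_k):
    # Wavelength of peak emission in micrometres (Wien's displacement law).
    return 2.898e-3 / temp_k * 1e6

print(wien_peak_um(300))    # ~9.7 um: the earth's surface, hence the ~10 um band
print(wien_peak_um(1000))   # ~2.9 um: a flaming fire, hence the sensitivity of the ~3.8 um band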

This is a true-colour image (at 500 m resolution) acquired by MODIS on 9 July 2001, over the Sumatra and Peninsular Malaysia area. Hot spots detected by the MODIS thermal infrared bands are indicated as red dots in the image. Smoke plumes can be seen spreading northwards from the fire area towards the Northern part of Peninsular Malaysia.

50-km resolution Global Sea Surface Temperature (SST) Field for the period 11 to 14 August 2001 derived from NOAA AVHRR thermal infrared data. Occurrence of abnormal climatic conditions such as the El-Nino can be predicted by observations of the SST anomaly, i.e. the deviation of the daily SST from the mean SST. (Credit: NOAA/NESDIS)

Microwave Remote Sensing

Electromagnetic radiation in the microwave wavelength region is used in remote sensing to provide useful information about the Earth's atmosphere, land and ocean. A microwave radiometer is a passive device which records the natural microwave emission from the earth. It can be used to measure the total water content of the atmosphere within its field of view. A radar altimeter sends out pulses of microwave signals and records the signals scattered back from the earth's surface. The height of the surface can be measured from the time delay of the return signals. A wind scatterometer can be used to measure wind speed and direction over the ocean surface. It sends out pulses of microwaves along several directions and records the magnitude of the signals backscattered from the ocean surface. The magnitude of the backscattered signal is related to the ocean surface roughness, which in turn depends on the sea surface wind condition, so the wind speed and direction can be derived. Imaging radar systems such as the synthetic aperture radar (SAR) are carried on airborne or spaceborne platforms to generate high resolution images of the earth's surface using microwave energy.
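As a simple illustration of the altimeter principle, the one-way range follows from half the two-way travel time of the pulse; the sketch below (Python) uses a purely illustrative orbital altitude and pulse delay.

C = 2.998e8  # speed of light (m/s)

def range_from_delay(two_way_delay_s):
    # One-way range (m) from the two-way travel time of the radar pulse.
    return C * two_way_delay_s / 2.0

satellite_altitude_m = 800e3                        # assumed orbital altitude
surface_height_m = satellite_altitude_m - range_from_delay(5.3329e-3)
print(surface_height_m)                             # height of the reflecting surface above the reference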

Synthetic Aperture Radar (SAR)


In synthetic aperture radar (SAR) imaging, microwave pulses are transmitted by an antenna towards the earth's surface. The microwave energy scattered back to the spacecraft is measured. The SAR makes use of the radar principle to form an image by utilising the time delay of the backscattered signals.

A radar pulse is transmitted from the antenna to the ground

The radar pulse is scattered by the ground targets back to the antenna

In real aperture radar imaging, the ground resolution is limited by the size of the microwave beam sent out from the antenna. Finer details on the ground can be resolved by using a narrower beam. The beam width is inversely proportional to the size of the antenna, i.e. the longer the antenna, the narrower the beam.

The microwave beam sent out by the antenna illuminates an area on the ground (known as the antenna's "footprint"). In radar imaging, the recorded signal strength depends on the microwave energy backscattered from the ground targets inside this footprint. Increasing the length of the antenna will decrease the width of the footprint.

It is not feasible for a spacecraft to carry the very long antenna required for high resolution imaging of the earth's surface. To overcome this limitation, SAR capitalises on the motion of the spacecraft to emulate a large antenna (about 4 km for the ERS SAR) from the small antenna (10 m on the ERS satellite) it actually carries on board.
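The gain from aperture synthesis can be seen with approximate ERS-like numbers (assumed here: C-band wavelength 5.6 cm, 10 m antenna, roughly 850 km slant range). The real aperture azimuth resolution is set by the beamwidth times the range, while an ideal strip-map SAR achieves roughly half the physical antenna length, independent of range.

wavelength_m = 0.056        # C-band (assumed)
antenna_length_m = 10.0     # physical antenna on the satellite
slant_range_m = 850e3       # approximate slant range to the ground

# Real aperture: azimuth resolution ~ beamwidth (lambda / L) multiplied by the range.
real_aperture_res_m = slant_range_m * wavelength_m / antenna_length_m    # ~4.8 km

# Synthetic aperture (ideal strip-map case): ~ half the physical antenna length.
synthetic_aperture_res_m = antenna_length_m / 2.0                        # ~5 m

print(real_aperture_res_m, synthetic_aperture_res_m)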

Imaging geometry for a typical strip-mapping synthetic aperture radar imaging system. The antenna's footprint sweeps out a strip parallel to the direction of the satellite's ground track.

Interaction between Microwaves and Earth's Surface


When microwaves strike a surface, the proportion of energy scattered back to the sensor depends on many factors:

- Physical factors such as the dielectric constant of the surface materials, which also depends strongly on the moisture content;
- Geometric factors such as surface roughness, slopes, and orientation of the objects relative to the radar beam direction;
- The types of landcover (soil, vegetation or man-made objects);
- Microwave frequency, polarisation and incident angle.

Microwave frequency, polarisation and incident angle in SAR imaging are discussed further in the following sections.

All-Weather Imaging
Due to the cloud-penetrating property of microwaves, SAR is able to acquire "cloud-free" images in all weather. This is especially useful in the tropical regions, which are frequently under cloud cover throughout the year. Being an active remote sensing device, it is also capable of night-time operation.

SAR Imaging - Frequency, Polarisation and Incident Angle

Microwave Frequency


The ability of microwaves to penetrate clouds, precipitation, or land surface cover depends on their frequency. Generally, the penetration power increases for longer wavelengths (lower frequencies). The SAR backscattered intensity generally increases with the surface roughness. However, "roughness" is a relative quantity: whether a surface is considered rough or not depends on the length scale of the measuring instrument. If a metre rule is used to measure surface roughness, then any surface fluctuation of the order of 1 cm or less will be considered smooth. On the other hand, if a surface is examined under a microscope, then a fluctuation of the order of a fraction of a millimetre is considered very rough. In SAR imaging, the reference length scale for surface roughness is the wavelength of the microwave. If the surface fluctuation is less than the microwave wavelength, then the surface is considered smooth. For example, little radiation is backscattered from a surface with a fluctuation of the order of 5 cm if an L-band (15 to 30 cm wavelength) SAR is used, and the surface will appear dark. However, the same surface will appear bright due to increased backscattering in an X-band (2.4 to 3.8 cm wavelength) SAR image.
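The simplified criterion above can be written directly in code; the sketch below (Python) compares the same 5 cm surface fluctuation against typical L-band and X-band wavelengths.

def appears_smooth(fluctuation_m, wavelength_m):
    # Simplified criterion: smooth if the height fluctuation is less than the radar wavelength.
    return fluctuation_m < wavelength_m

fluctuation_m = 0.05                          # 5 cm surface fluctuation
print(appears_smooth(fluctuation_m, 0.23))    # L-band (~23 cm): True  -> little backscatter, dark
print(appears_smooth(fluctuation_m, 0.03))    # X-band (~3 cm):  False -> strong backscatter, bright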

The land surface appears smooth to a long wavelength radar. Little radiation is backscattered from the surface.

The same land surface appears rough to a short wavelength radar. The surface appears bright in the radar image due to increased backscattering from the surface.

Both the ERS and RADARSAT SARs use the C band microwave while the JERS SAR uses the L band. The C band is useful for imaging ocean and ice features. However, it also finds numerous land applications. The L band has a longer wavelength and is more penetrating than the C band. Hence, it is more useful in forest and vegetation study as it is able to penetrate deeper into the vegetation canopy.

The short wavelength radar interacts mainly with the top layer of the forest canopy while the longer wavelength radar is able to penetrate deeper into the canopy to undergo multiple scattering between the canopy, trunks and soil.

Microwave Polarisation in Synthetic Aperture Radar


The microwave polarisation refers to the orientation of the electric field vector of the transmitted beam with respect to the horizontal direction. If the electric field vector oscillates along a direction parallel to the horizontal direction, the beam is said to be "H" polarised. On the other hand, if the electric field vector oscillates along a direction perpendicular to the horizontal direction, the beam is "V" polarised.

Microwave polarisation: if the electric field vector oscillates along the horizontal direction, the wave is H polarised; if it oscillates perpendicular to the horizontal direction, the wave is V polarised.

After interacting with the earth's surface, the polarisation state may be altered, so the backscattered microwave energy usually contains a mixture of the two polarisation states. The SAR sensor may be designed to detect the H or the V component of the backscattered radiation. Hence, there are four possible polarisation configurations for a SAR system: "HH", "VV", "HV" and "VH", depending on the polarisation states of the transmitted and received microwave signals. For example, the SAR onboard the ERS satellite transmits and receives only V polarised microwave pulses, so it is a "VV" polarised SAR. In comparison, the SAR onboard the RADARSAT satellite is a "HH" polarised SAR.

Incident Angles

The incident angle refers to the angle between the incident radar beam and the direction perpendicular to the ground surface. The interaction between microwaves and the surface depends on the incident angle of the radar pulse on the surface. The ERS SAR has a constant incident angle of 23° at the scene centre. RADARSAT is the first spaceborne SAR equipped with multiple beam modes, enabling microwave imaging at different incident angles and resolutions. The incident angle of 23° for the ERS SAR is optimal for detecting ocean waves and other ocean surface features. A larger incident angle may be more suitable for other applications. For example, a large incident angle will increase the contrast between forested and clearcut areas. Acquisition of SAR images of an area at two different incident angles also enables the construction of a stereo image of the area.
Interpreting SAR Images

SAR Images
Synthetic aperture radar (SAR) images can be obtained from satellites such as ERS, JERS and RADARSAT. Since radar interacts with ground features in ways different from optical radiation, special care has to be taken when interpreting radar images. An example of an ERS SAR image is shown below, together with a SPOT multispectral natural colour composite image of the same area for comparison.

ERS SAR image (pixel size = 12.5 m)

SPOT Multispectral image in Natural Colour (pixel size=20 m)

The urban area on the left appears bright in the SAR image while the vegetated areas on the right have intermediate tone. The clearings and water (sea and river) appear dark in the image. These features will be explained in the following sections. The SAR image was acquired in September 1995 while the SPOT image was acquired in February 1994. Additional clearings can be seen in the SAR image.

Speckle Noise
Unlike optical images, radar images are formed by the coherent interaction of the transmitted microwaves with the targets. Hence, they suffer from the effects of speckle noise, which arises from the coherent summation of signals scattered from ground scatterers distributed randomly within each pixel. A radar image therefore appears noisier than an optical image. The speckle noise is sometimes suppressed by applying a speckle removal filter on the digital image before display and further analysis.
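A minimal sketch of speckle suppression is shown below, using a plain median filter from SciPy; operational processing more often uses adaptive filters such as the Lee or Frost filters, but the principle of replacing each pixel by a statistic of its neighbourhood is the same.

from scipy.ndimage import median_filter

def despeckle(sar_intensity, window=5):
    # Apply a window x window median filter to a SAR intensity image (2-D array).
    return median_filter(sar_intensity, size=window)

# filtered = despeckle(sar_image)  # homogeneous areas become smoother, at some cost in fine detail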

This image is extracted from the above SAR image, showing the clearing areas between the river and the coastline. The image appears "grainy" due to the presence of speckle.

This image shows the effect of applying a speckle removal filter to the SAR image. The vegetated areas and the clearings now appear more homogeneous.

Backscattered Radar Intensity


A single radar image is usually displayed as a grey scale image, such as the one shown above. The intensity of each pixel represents the proportion of microwave backscattered from that area on the ground, which depends on a variety of factors: the types, sizes, shapes and orientations of the scatterers in the target area; the moisture content of the target area; the frequency and polarisation of the radar pulses; as well as the incident angles of the radar beam. The pixel intensity values are often converted to a physical quantity called the backscattering coefficient or normalised radar cross-section, measured in decibel (dB) units with values ranging from +5 dB for very bright objects to -40 dB for very dark surfaces.
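The conversion from a calibrated linear backscattering coefficient to the decibel values quoted above is a simple logarithm, as sketched below in Python; the sensor-specific calibration producing sigma-nought from raw pixel values is assumed to have been applied already.

import numpy as np

def sigma0_to_db(sigma0_linear):
    # Backscattering coefficient in decibels; the floor avoids taking the log of zero.
    return 10.0 * np.log10(np.maximum(sigma0_linear, 1e-10))

print(sigma0_to_db(0.2))      # about -7 dB, typical of tropical rain forest
print(sigma0_to_db(0.0001))   # -40 dB, a very dark surface such as calm water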

Interpreting SAR Images


Interpreting a radar image is not a straightforward task. It very often requires some familiarity with the ground conditions of the areas imaged. As a useful rule of thumb, the higher the backscattered intensity, the rougher is the surface being imaged. Flat surfaces such as paved roads, runways or calm water normally appear as dark areas in a radar image since most of the incident radar pulses are specularly reflected away.

Specular reflection: a smooth surface acts like a mirror for the incident radar pulse. Most of the incident radar energy is reflected away according to the law of specular reflection, i.e. the angle of reflection is equal to the angle of incidence. Very little energy is scattered back to the radar sensor.

Diffuse reflection: a rough surface reflects the incident radar pulse in all directions. Part of the radar energy is scattered back to the radar sensor. The amount of energy backscattered depends on the properties of the target on the ground.

Calm sea surfaces appear dark in SAR images. However, rough sea surfaces may appear bright, especially when the incidence angle is small. The presence of oil films smooths out the sea surface. Under conditions where the sea surface is sufficiently rough, oil films can therefore be detected as dark patches against a bright background.

A ship (bright target near the bottom left corner) is seen discharging oil into the sea in this ERS SAR image

Trees and other vegetation are usually moderately rough on the wavelength scale. Hence, they appear as moderately bright features in the image. The tropical rain forests have a characteristic backscatter coefficient of between -6 and -7 dB, which is spatially homogeneous and remains stable in time. For this reason, the tropical rainforests have been used as calibration targets when performing radiometric calibration of SAR images. Very bright targets may appear in the image due to the corner-reflector or double-bounce effect, where the radar pulse bounces off the horizontal ground (or the sea) towards the target and is then reflected from one vertical surface of the target back to the sensor. Examples of such targets are ships on the sea, high-rise buildings and regular metallic objects such as cargo containers. Built-up areas and many man-made features usually appear as bright patches in a radar image due to the corner reflector effect.

Corner reflection: when two smooth surfaces form a right angle facing the radar beam, the beam bounces twice off the surfaces and most of the radar energy is reflected back to the radar sensor.

This SAR image shows an area of the sea near a busy port. Many ships can be seen as bright spots in this image due to corner reflection. The sea is calm, and hence the ships can be easily detected against the dark background.

The brightness of areas covered by bare soil may vary from very dark to very bright depending on the roughness and moisture content of the soil. Typically, rough soil appears bright in the image. For similar soil roughness, a surface with a higher moisture content will appear brighter.

Dry Soil: Some of the incident radar energy is able to penetrate into the soil surface, resulting in less backscattered intensity.

Wet Soil: The large difference in electrical properties between water and air results in higher backscattered radar intensity.

Flooded Soil: Radar is specularly reflected off the water surface, resulting in low backscattered intensity. The flooded area appears dark in the SAR image.

Multitemporal SAR images


If two or more radar images of the same area acquired at different times are available, they can be combined to give a multitemporal colour composite image of the area. For example, if three images are available, one image can be assigned to the red, the second to the green and the third to the blue colour channels for display. This technique is especially useful in detecting landcover changes over the period of image acquisition. Areas where no change in landcover occurs will appear in grey, while areas with landcover changes will appear as colourful patches in the image.

This image is an example of a multitemporal colour composite SAR image. The area shown is part of the rice growing areas in the Mekong River delta, Vietnam, near the towns of Soc Trang and Phung Hiep. Three SAR images acquired by the ERS satellite on 5 May, 9 June and 14 July 1996 are assigned to the red, green and blue channels respectively for display. The colourful areas are the rice growing areas, where the landcover changes rapidly during the rice season. The greyish linear features are the more permanent trees lining the canals. The grey patch near the bottom of the image is wetland forest. The two towns appear as bright white spots in this image. An area of depression flooded with water during this season is visible as a dark region.

Image Processing and Analysis



Many image processing and analysis techniques have been developed to aid the interpretation of remote sensing images and to extract as much information as possible from the images. The choice of specific techniques or algorithms to use depends on the goals of each individual project. In this section, we will examine some procedures commonly used in analysing/interpreting remote sensing images.

Pre-Processing
Prior to data analysis, initial processing on the raw data is usually carried out to correct for any distortion due to the characteristics of the imaging system and imaging conditions. Depending on the user's requirement, some standard correction procedures may be carried out by the ground station operators before the data is delivered to the end-user. These procedures include radiometric correction to correct for uneven sensor response over the whole image and geometric correction to correct for geometric distortion due to Earth's rotation and other imaging conditions (such as oblique viewing). The image may also be transformed to conform to a specific map projection system. Furthermore, if the accurate geographical location of an area on the image needs to be known, ground control points (GCPs) are used to register the image to a precise map (geo-referencing).
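As an illustration of geo-referencing with GCPs, an affine transform from image (column, row) coordinates to map (easting, northing) coordinates can be estimated by least squares from a few control points; the GCP values in the Python sketch below are purely illustrative.

import numpy as np

# Illustrative GCPs: image (column, row) and the corresponding map (easting, northing).
gcps_image = np.array([[10, 20], [500, 40], [480, 600], [30, 580]], dtype=float)
gcps_map = np.array([[350200, 151800], [355100, 151600], [354900, 146000], [350000, 146200]], dtype=float)

# Solve map = [col, row, 1] @ A for the 3x2 affine matrix A by least squares.
design = np.hstack([gcps_image, np.ones((len(gcps_image), 1))])
affine, *_ = np.linalg.lstsq(design, gcps_map, rcond=None)

def image_to_map(col, row):
    # Map coordinates of an image pixel under the fitted affine transform.
    return np.array([col, row, 1.0]) @ affine

print(image_to_map(250, 300))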

Image Enhancement
In order to aid visual interpretation, visual appearance of the objects in the image can be improved by image enhancement techniques such as grey level stretching to improve the contrast and spatial filtering for enhancing the edges. An example of an enhancement procedure is shown here.

Multispectral SPOT image of the same area shown in a previous section, but acquired at a later date. Radiometric and geometric corrections have been done. The image has also been transformed to conform to a certain map projection (UTM projection). This image is displayed without any further enhancement.

In the above unenhanced image, a bluish tint can be seen all over the image, producing a hazy appearance. This haze is due to scattering of sunlight by the atmosphere into the field of view of the sensor. This effect also degrades the contrast between different landcovers. It is useful to examine the image histograms before performing any image enhancement. The x-axis of a histogram is the range of the available digital numbers, i.e. 0 to 255. The y-axis is the number of pixels in the image having a given digital number. The histograms of the three bands of this image are shown in the following figures.

Histogram of the XS3 (near infrared) band (displayed in red).

Histogram of the XS2 (red) band (displayed in green).

Histogram of the XS1 (green) band (displayed in blue).

Note that the minimum digital number for each band is not zero. Each histogram is shifted to the right by a certain amount. This shift is due to the atmospheric scattering component adding to the actual radiation reflected from the ground. The shift is particularly large for the XS1 band compared to the other two bands, due to the higher contribution from Rayleigh scattering at the shorter wavelength. The maximum digital number of each band is also not 255. The sensor's gain factor has been adjusted to anticipate any possibility of encountering a very bright object. Hence, most of the pixels in the image have digital numbers well below the maximum value of 255. The image can be enhanced by a simple linear grey-level stretching. In this method, a lower threshold value is chosen so that all pixel values below this threshold are mapped to zero. An upper threshold value is also chosen so that all pixel values above this threshold are mapped to 255. All other pixel values are linearly interpolated to lie between 0 and 255. The lower and upper thresholds are usually chosen to be values close to the minimum and maximum pixel values of the image. The grey-level transformation table is shown in the following graph.

Grey-Level Transformation Table for performing linear grey level stretching of the three bands of the image. Red line: XS3 band; Green line: XS2 band; Blue line: XS1 band.
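A minimal sketch of this linear stretch in Python/NumPy is given below; the threshold values would normally be read off the band histograms shown above (the numbers in the comment are illustrative).

import numpy as np

def linear_stretch(band, lower, upper):
    # Map values below `lower` to 0, above `upper` to 255, and interpolate linearly in between.
    band = band.astype(np.float32)
    stretched = (band - lower) / (upper - lower) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# xs1_stretched = linear_stretch(xs1, lower=50, upper=120)   # illustrative thresholds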

The result of applying the linear stretch is shown in the following image. Note that the hazy appearance has generally been removed, except for some parts near to the top of the image. The contrast between different features has been improved.

Multispectral SPOT image after enhancement by a simple linear grey-level stretching.

Image Classification
Different landcover types in an image can be discriminated using image classification algorithms based on spectral features, i.e. the brightness and "colour" information contained in each pixel. The classification procedures can be "supervised" or "unsupervised". In supervised classification, the spectral features of some areas of known landcover types are extracted from the image. These areas are known as the "training areas". Every pixel in the whole image is then classified as belonging to one of the classes depending on how close its spectral features are to the spectral features of the training areas. In unsupervised classification, the computer program automatically groups the pixels in the image into separate clusters, depending on their spectral features. Each cluster is then assigned a landcover type by the analyst. Each class of landcover is referred to as a "theme" and the product of classification is known as a "thematic map". The following image shows an example of a thematic map. This map was derived from the multispectral SPOT image of the test area shown in a previous section using an unsupervised classification algorithm.
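A minimal sketch of unsupervised classification is shown below, using k-means clustering from scikit-learn on the pixel spectral vectors; the band arrays and the number of classes are assumptions chosen by the analyst, who then assigns a landcover label to each resulting cluster.

import numpy as np
from sklearn.cluster import KMeans

def unsupervised_classify(bands, n_classes=8):
    # bands: list of co-registered 2-D arrays. Returns a 2-D array of cluster labels.
    stack = np.dstack(bands)                        # rows x cols x n_bands
    pixels = stack.reshape(-1, stack.shape[-1])     # one spectral vector per pixel
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(pixels)
    return labels.reshape(stack.shape[:2])

# thematic = unsupervised_classify([xs1, xs2, xs3], n_classes=8)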

SPOT multispectral image of the test area

Thematic map derived from the SPOT image using an unsupervised classification algorithm.

A plausible assignment of landcover types to the thematic classes is shown in the following table. The accuracy of the thematic map derived from remote sensing images should be verified by field observation.

Class No. (Colour in Map)    Landcover Type
1 (black)                    Clear water
2 (green)                    Dense forest with closed canopy
3 (yellow)                   Shrubs, less dense forest
4 (orange)                   Grass
5 (cyan)                     Bare soil, built-up areas
6 (blue)                     Turbid water, bare soil, built-up areas
7 (red)                      Bare soil, built-up areas
8 (white)                    Bare soil, built-up areas

The spectral features of these landcover classes can be exhibited in the two graphs shown below. The first graph is a plot of the mean pixel values of the XS3 (near infrared) band versus the XS2 (red) band for each class. The second graph is a plot of the mean pixel values of the XS2 (red) versus the XS1 (green) band. The standard deviations of the pixel values for each class are also shown.

Scatter Plot of the mean pixel values for each landcover class.

In the scatterplot of the class means in the XS3 and XS2 bands, the data points for the non vegetated landcover classes generally lie on a straight line passing through the origin. This line is called the "soil line". The vegetated landcover classes lie above the soil line due to the higher reflectance in the near infrared region (XS3 band) relative to the visible region. In the XS2 (visible red) versus XS1 (visible green) scatterplot, all the data points generally lie on a straight line. This plot shows that the two visible bands are very highly correlated. The vegetated areas and clear water are generally dark while the other nonvegetated landcover classes have varying brightness in the visible bands.

Spatial Feature Extraction


In high spatial resolution imagery, details such as buildings and roads can be seen. The amount of detail depends on the image resolution. In very high resolution imagery, even road markings, vehicles, individual tree crowns, and aggregates of people can be seen clearly. Pixel-based methods of image analysis will not work successfully in such imagery. In order to fully exploit the spatial information contained in the imagery, image processing and analysis algorithms utilising the textural, contextual and geometrical properties are required. Such algorithms make use of the relationship between neighbouring pixels for information extraction. Incorporation of a-priori information is sometimes required. A multi-resolution approach (i.e. analysis at different spatial scales and combining the results) is also a useful strategy when dealing with very high resolution imagery. In this case, pixel-based methods can be used at the lower resolutions and merged with the contextual and textural methods at the higher resolutions.

Building heights can be derived from a single image using a simple geometric method if the shadows of the buildings can be located in the image. For example, the height of the building shown here can be determined by measuring the distance between a point on the top of the building and the corresponding point of the shadow on the ground, using a simple geometric relation. In this case, the solar illumination direction and the satellite sensor viewing direction need to be known.
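A simplified version of this geometric relation, ignoring the sensor viewing direction and assuming the shadow falls on flat ground, is sketched below in Python; the shadow length and solar elevation values are illustrative and would normally come from the image and its acquisition metadata.

import math

def height_from_shadow(shadow_length_m, solar_elevation_deg):
    # Building height from the length of its shadow and the solar elevation angle.
    return shadow_length_m * math.tan(math.radians(solar_elevation_deg))

print(height_from_shadow(shadow_length_m=25.0, solar_elevation_deg=60.0))   # about 43 m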

Oil palm trees in an IKONOS image

Detected trees (white dots) superimposed on the image.

Individual trees in very high resolution imagery can be detected based on the tree crown's intensity profile. An automated technique for detecting and counting oil palm trees in IKONOS images based on differential geometry concepts of edge and curvature has been developed at CRISP.

Measurement of Bio-geophysical Parameters


Specific instruments carried on-board the satellites can be used to make measurements of the biogeophysical parameters of the earth. Some of the examples are: atmospheric water vapour content, stratospheric ozone, land and sea surface temperature, sea water chlorophyll concentration, forest biomass, sea surface wind field, tropospheric aerosol, etc. Specific satellite missions have been launched to continuously monitor the global variations of these environmental parameters that may show the causes or the effects of global climate change and the impacts of human activities on the environment.

Geographical Information System (GIS)


Different forms of imagery such as optical and radar images provide complementary information about the landcover. More detailed information can be derived by combining several different types of images. For example, a radar image can form one of the layers, in combination with the visible and near infrared layers, when performing classification.

The thematic information derived from the remote sensing images is often combined with other auxiliary data to form the basis for a Geographic Information System (GIS). A GIS is a database of different layers, where each layer contains information about a specific aspect of the same area, which is used for analysis by resource scientists.

End of Tutorial